Apr 30 03:27:21.917200 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 29 23:03:20 -00 2025
Apr 30 03:27:21.917241 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:27:21.917262 kernel: BIOS-provided physical RAM map:
Apr 30 03:27:21.917274 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 30 03:27:21.917285 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Apr 30 03:27:21.917298 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20
Apr 30 03:27:21.917313 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved
Apr 30 03:27:21.917327 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Apr 30 03:27:21.917340 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Apr 30 03:27:21.917356 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Apr 30 03:27:21.917369 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Apr 30 03:27:21.917382 kernel: NX (Execute Disable) protection: active
Apr 30 03:27:21.917395 kernel: APIC: Static calls initialized
Apr 30 03:27:21.917408 kernel: efi: EFI v2.7 by EDK II
Apr 30 03:27:21.917425 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518
Apr 30 03:27:21.917443 kernel: SMBIOS 2.7 present.
Apr 30 03:27:21.917458 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Apr 30 03:27:21.917472 kernel: Hypervisor detected: KVM
Apr 30 03:27:21.917487 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 30 03:27:21.917501 kernel: kvm-clock: using sched offset of 4221310340 cycles
Apr 30 03:27:21.917517 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 30 03:27:21.917532 kernel: tsc: Detected 2499.994 MHz processor
Apr 30 03:27:21.917547 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 30 03:27:21.917562 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 30 03:27:21.917577 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Apr 30 03:27:21.917596 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 30 03:27:21.917611 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 30 03:27:21.917625 kernel: Using GB pages for direct mapping
Apr 30 03:27:21.917640 kernel: Secure boot disabled
Apr 30 03:27:21.917654 kernel: ACPI: Early table checksum verification disabled
Apr 30 03:27:21.917669 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Apr 30 03:27:21.917684 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Apr 30 03:27:21.917698 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Apr 30 03:27:21.917713 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Apr 30 03:27:21.917731 kernel: ACPI: FACS 0x00000000789D0000 000040
Apr 30 03:27:21.917746 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Apr 30 03:27:21.917761 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Apr 30 03:27:21.917776 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Apr 30 03:27:21.917790 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Apr 30 03:27:21.917805 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Apr 30 03:27:21.917827 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Apr 30 03:27:21.917846 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Apr 30 03:27:21.917862 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Apr 30 03:27:21.917878 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Apr 30 03:27:21.917894 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Apr 30 03:27:21.917910 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Apr 30 03:27:21.917925 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Apr 30 03:27:21.917966 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Apr 30 03:27:21.917980 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Apr 30 03:27:21.917995 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Apr 30 03:27:21.918009 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Apr 30 03:27:21.918023 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Apr 30 03:27:21.918037 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Apr 30 03:27:21.918051 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Apr 30 03:27:21.918066 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Apr 30 03:27:21.918080 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Apr 30 03:27:21.918095 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Apr 30 03:27:21.918112 kernel: NUMA: Initialized distance table, cnt=1
Apr 30 03:27:21.918126 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff]
Apr 30 03:27:21.918141 kernel: Zone ranges:
Apr 30 03:27:21.918156 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 30 03:27:21.918170 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Apr 30 03:27:21.918184 kernel: Normal empty
Apr 30 03:27:21.918198 kernel: Movable zone start for each node
Apr 30 03:27:21.918210 kernel: Early memory node ranges
Apr 30 03:27:21.918222 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 30 03:27:21.918237 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Apr 30 03:27:21.918247 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Apr 30 03:27:21.918259 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Apr 30 03:27:21.918274 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 30 03:27:21.918287 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 30 03:27:21.918299 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Apr 30 03:27:21.918314 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Apr 30 03:27:21.918327 kernel: ACPI: PM-Timer IO Port: 0xb008
Apr 30 03:27:21.918341 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 30 03:27:21.918355 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Apr 30 03:27:21.918373 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 30 03:27:21.918388 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 30 03:27:21.918404 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 30 03:27:21.918419 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 30 03:27:21.918435 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 30 03:27:21.918450 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 30 03:27:21.918465 kernel: TSC deadline timer available
Apr 30 03:27:21.918481 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 30 03:27:21.918498 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 30 03:27:21.918514 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Apr 30 03:27:21.918528 kernel: Booting paravirtualized kernel on KVM
Apr 30 03:27:21.918541 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 30 03:27:21.918553 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 30 03:27:21.918568 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Apr 30 03:27:21.918583 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Apr 30 03:27:21.918597 kernel: pcpu-alloc: [0] 0 1
Apr 30 03:27:21.918611 kernel: kvm-guest: PV spinlocks enabled
Apr 30 03:27:21.918628 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 30 03:27:21.918649 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:27:21.918667 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 30 03:27:21.918683 kernel: random: crng init done
Apr 30 03:27:21.918699 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 30 03:27:21.918716 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 30 03:27:21.918732 kernel: Fallback order for Node 0: 0
Apr 30 03:27:21.918750 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Apr 30 03:27:21.918766 kernel: Policy zone: DMA32
Apr 30 03:27:21.918784 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 30 03:27:21.918799 kernel: Memory: 1874608K/2037804K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42864K init, 2328K bss, 162936K reserved, 0K cma-reserved)
Apr 30 03:27:21.918814 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 30 03:27:21.918829 kernel: Kernel/User page tables isolation: enabled
Apr 30 03:27:21.918844 kernel: ftrace: allocating 37944 entries in 149 pages
Apr 30 03:27:21.918859 kernel: ftrace: allocated 149 pages with 4 groups
Apr 30 03:27:21.918871 kernel: Dynamic Preempt: voluntary
Apr 30 03:27:21.918885 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 30 03:27:21.918901 kernel: rcu: RCU event tracing is enabled.
Apr 30 03:27:21.918966 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 30 03:27:21.918978 kernel: Trampoline variant of Tasks RCU enabled.
Apr 30 03:27:21.918991 kernel: Rude variant of Tasks RCU enabled.
Apr 30 03:27:21.919003 kernel: Tracing variant of Tasks RCU enabled.
Apr 30 03:27:21.919016 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 30 03:27:21.919029 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 30 03:27:21.919042 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 30 03:27:21.919069 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 30 03:27:21.919084 kernel: Console: colour dummy device 80x25
Apr 30 03:27:21.919097 kernel: printk: console [tty0] enabled
Apr 30 03:27:21.919111 kernel: printk: console [ttyS0] enabled
Apr 30 03:27:21.919125 kernel: ACPI: Core revision 20230628
Apr 30 03:27:21.919142 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Apr 30 03:27:21.919171 kernel: APIC: Switch to symmetric I/O mode setup
Apr 30 03:27:21.919185 kernel: x2apic enabled
Apr 30 03:27:21.919200 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 30 03:27:21.919215 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns
Apr 30 03:27:21.919233 kernel: Calibrating delay loop (skipped) preset value.. 4999.98 BogoMIPS (lpj=2499994)
Apr 30 03:27:21.919247 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Apr 30 03:27:21.919262 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Apr 30 03:27:21.919277 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 30 03:27:21.919292 kernel: Spectre V2 : Mitigation: Retpolines
Apr 30 03:27:21.919307 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Apr 30 03:27:21.919322 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Apr 30 03:27:21.919337 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 30 03:27:21.919351 kernel: RETBleed: Vulnerable
Apr 30 03:27:21.919366 kernel: Speculative Store Bypass: Vulnerable
Apr 30 03:27:21.919384 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 30 03:27:21.919399 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 30 03:27:21.919413 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 30 03:27:21.919428 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 30 03:27:21.919443 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 30 03:27:21.919458 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 30 03:27:21.919473 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Apr 30 03:27:21.919489 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Apr 30 03:27:21.919504 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 30 03:27:21.919519 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 30 03:27:21.919534 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 30 03:27:21.919552 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Apr 30 03:27:21.919567 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 30 03:27:21.919582 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Apr 30 03:27:21.919598 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Apr 30 03:27:21.919613 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Apr 30 03:27:21.919629 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Apr 30 03:27:21.919644 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Apr 30 03:27:21.919659 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Apr 30 03:27:21.919674 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Apr 30 03:27:21.919689 kernel: Freeing SMP alternatives memory: 32K
Apr 30 03:27:21.919704 kernel: pid_max: default: 32768 minimum: 301
Apr 30 03:27:21.919720 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 30 03:27:21.919735 kernel: landlock: Up and running.
Apr 30 03:27:21.919751 kernel: SELinux: Initializing.
Apr 30 03:27:21.919768 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 30 03:27:21.919785 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 30 03:27:21.919801 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Apr 30 03:27:21.919819 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 03:27:21.919836 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 03:27:21.919854 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 03:27:21.919871 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Apr 30 03:27:21.919892 kernel: signal: max sigframe size: 3632
Apr 30 03:27:21.919910 kernel: rcu: Hierarchical SRCU implementation.
Apr 30 03:27:21.919948 kernel: rcu: Max phase no-delay instances is 400.
Apr 30 03:27:21.919965 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 30 03:27:21.919982 kernel: smp: Bringing up secondary CPUs ...
Apr 30 03:27:21.919995 kernel: smpboot: x86: Booting SMP configuration:
Apr 30 03:27:21.920011 kernel: .... node #0, CPUs: #1
Apr 30 03:27:21.920028 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Apr 30 03:27:21.920046 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Apr 30 03:27:21.920067 kernel: smp: Brought up 1 node, 2 CPUs
Apr 30 03:27:21.920082 kernel: smpboot: Max logical packages: 1
Apr 30 03:27:21.920099 kernel: smpboot: Total of 2 processors activated (9999.97 BogoMIPS)
Apr 30 03:27:21.920116 kernel: devtmpfs: initialized
Apr 30 03:27:21.920132 kernel: x86/mm: Memory block size: 128MB
Apr 30 03:27:21.920149 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Apr 30 03:27:21.920166 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 30 03:27:21.920182 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 30 03:27:21.920198 kernel: pinctrl core: initialized pinctrl subsystem
Apr 30 03:27:21.920218 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 30 03:27:21.920234 kernel: audit: initializing netlink subsys (disabled)
Apr 30 03:27:21.920250 kernel: audit: type=2000 audit(1745983641.513:1): state=initialized audit_enabled=0 res=1
Apr 30 03:27:21.920266 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 30 03:27:21.920281 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 30 03:27:21.920298 kernel: cpuidle: using governor menu
Apr 30 03:27:21.920314 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 30 03:27:21.920330 kernel: dca service started, version 1.12.1
Apr 30 03:27:21.920344 kernel: PCI: Using configuration type 1 for base access
Apr 30 03:27:21.920363 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 30 03:27:21.920379 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 30 03:27:21.920394 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 30 03:27:21.920410 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 30 03:27:21.920425 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 30 03:27:21.920441 kernel: ACPI: Added _OSI(Module Device)
Apr 30 03:27:21.920457 kernel: ACPI: Added _OSI(Processor Device)
Apr 30 03:27:21.920473 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 30 03:27:21.920487 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 30 03:27:21.920506 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Apr 30 03:27:21.920522 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 30 03:27:21.920538 kernel: ACPI: Interpreter enabled
Apr 30 03:27:21.920555 kernel: ACPI: PM: (supports S0 S5)
Apr 30 03:27:21.920573 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 30 03:27:21.920590 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 30 03:27:21.920608 kernel: PCI: Using E820 reservations for host bridge windows
Apr 30 03:27:21.920625 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Apr 30 03:27:21.920643 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 30 03:27:21.920893 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Apr 30 03:27:21.921085 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Apr 30 03:27:21.921234 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Apr 30 03:27:21.921256 kernel: acpiphp: Slot [3] registered
Apr 30 03:27:21.921273 kernel: acpiphp: Slot [4] registered
Apr 30 03:27:21.921290 kernel: acpiphp: Slot [5] registered
Apr 30 03:27:21.921306 kernel: acpiphp: Slot [6] registered
Apr 30 03:27:21.921326 kernel: acpiphp: Slot [7] registered
Apr 30 03:27:21.921341 kernel: acpiphp: Slot [8] registered
Apr 30 03:27:21.921356 kernel: acpiphp: Slot [9] registered
Apr 30 03:27:21.921370 kernel: acpiphp: Slot [10] registered
Apr 30 03:27:21.921385 kernel: acpiphp: Slot [11] registered
Apr 30 03:27:21.921400 kernel: acpiphp: Slot [12] registered
Apr 30 03:27:21.921416 kernel: acpiphp: Slot [13] registered
Apr 30 03:27:21.921431 kernel: acpiphp: Slot [14] registered
Apr 30 03:27:21.921446 kernel: acpiphp: Slot [15] registered
Apr 30 03:27:21.921460 kernel: acpiphp: Slot [16] registered
Apr 30 03:27:21.921478 kernel: acpiphp: Slot [17] registered
Apr 30 03:27:21.921493 kernel: acpiphp: Slot [18] registered
Apr 30 03:27:21.921508 kernel: acpiphp: Slot [19] registered
Apr 30 03:27:21.921523 kernel: acpiphp: Slot [20] registered
Apr 30 03:27:21.921538 kernel: acpiphp: Slot [21] registered
Apr 30 03:27:21.921552 kernel: acpiphp: Slot [22] registered
Apr 30 03:27:21.921568 kernel: acpiphp: Slot [23] registered
Apr 30 03:27:21.921583 kernel: acpiphp: Slot [24] registered
Apr 30 03:27:21.921599 kernel: acpiphp: Slot [25] registered
Apr 30 03:27:21.921617 kernel: acpiphp: Slot [26] registered
Apr 30 03:27:21.921632 kernel: acpiphp: Slot [27] registered
Apr 30 03:27:21.921647 kernel: acpiphp: Slot [28] registered
Apr 30 03:27:21.921663 kernel: acpiphp: Slot [29] registered
Apr 30 03:27:21.921678 kernel: acpiphp: Slot [30] registered
Apr 30 03:27:21.921694 kernel: acpiphp: Slot [31] registered
Apr 30 03:27:21.921709 kernel: PCI host bridge to bus 0000:00
Apr 30 03:27:21.921864 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 30 03:27:21.922035 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 30 03:27:21.922161 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 30 03:27:21.922277 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Apr 30 03:27:21.922393 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Apr 30 03:27:21.922513 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 30 03:27:21.922664 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Apr 30 03:27:21.922811 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Apr 30 03:27:21.923007 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Apr 30 03:27:21.923159 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Apr 30 03:27:21.923298 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Apr 30 03:27:21.923434 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Apr 30 03:27:21.923566 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Apr 30 03:27:21.923695 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Apr 30 03:27:21.923824 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Apr 30 03:27:21.924019 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Apr 30 03:27:21.924152 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Apr 30 03:27:21.924279 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Apr 30 03:27:21.924405 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Apr 30 03:27:21.924528 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Apr 30 03:27:21.924656 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 30 03:27:21.924788 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Apr 30 03:27:21.924916 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Apr 30 03:27:21.925060 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Apr 30 03:27:21.925186 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Apr 30 03:27:21.925205 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 30 03:27:21.925221 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 30 03:27:21.925237 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 30 03:27:21.925252 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 30 03:27:21.925272 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Apr 30 03:27:21.925288 kernel: iommu: Default domain type: Translated
Apr 30 03:27:21.925303 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 30 03:27:21.925318 kernel: efivars: Registered efivars operations
Apr 30 03:27:21.925333 kernel: PCI: Using ACPI for IRQ routing
Apr 30 03:27:21.925349 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 30 03:27:21.925365 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Apr 30 03:27:21.925380 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Apr 30 03:27:21.925513 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Apr 30 03:27:21.925654 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Apr 30 03:27:21.925789 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 30 03:27:21.925809 kernel: vgaarb: loaded
Apr 30 03:27:21.925824 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Apr 30 03:27:21.925839 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Apr 30 03:27:21.925855 kernel: clocksource: Switched to clocksource kvm-clock
Apr 30 03:27:21.925869 kernel: VFS: Disk quotas dquot_6.6.0
Apr 30 03:27:21.925884 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 30 03:27:21.925902 kernel: pnp: PnP ACPI init
Apr 30 03:27:21.925917 kernel: pnp: PnP ACPI: found 5 devices
Apr 30 03:27:21.925953 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 30 03:27:21.925968 kernel: NET: Registered PF_INET protocol family
Apr 30 03:27:21.925984 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 30 03:27:21.926000 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Apr 30 03:27:21.926015 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 30 03:27:21.926031 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 30 03:27:21.926048 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Apr 30 03:27:21.926067 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Apr 30 03:27:21.926084 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 30 03:27:21.926099 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 30 03:27:21.926114 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 30 03:27:21.926129 kernel: NET: Registered PF_XDP protocol family
Apr 30 03:27:21.926269 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 30 03:27:21.926399 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 30 03:27:21.926532 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 30 03:27:21.926671 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Apr 30 03:27:21.926801 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Apr 30 03:27:21.927017 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Apr 30 03:27:21.927039 kernel: PCI: CLS 0 bytes, default 64
Apr 30 03:27:21.927056 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 30 03:27:21.927071 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns
Apr 30 03:27:21.927088 kernel: clocksource: Switched to clocksource tsc
Apr 30 03:27:21.927103 kernel: Initialise system trusted keyrings
Apr 30 03:27:21.927118 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Apr 30 03:27:21.927139 kernel: Key type asymmetric registered
Apr 30 03:27:21.927156 kernel: Asymmetric key parser 'x509' registered
Apr 30 03:27:21.927170 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 30 03:27:21.927185 kernel: io scheduler mq-deadline registered
Apr 30 03:27:21.927200 kernel: io scheduler kyber registered
Apr 30 03:27:21.927216 kernel: io scheduler bfq registered
Apr 30 03:27:21.927231 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 30 03:27:21.927247 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 30 03:27:21.927263 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 30 03:27:21.927283 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 30 03:27:21.927298 kernel: i8042: Warning: Keylock active
Apr 30 03:27:21.927313 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 30 03:27:21.927327 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 30 03:27:21.927485 kernel: rtc_cmos 00:00: RTC can wake from S4
Apr 30 03:27:21.927615 kernel: rtc_cmos 00:00: registered as rtc0
Apr 30 03:27:21.927740 kernel: rtc_cmos 00:00: setting system clock to 2025-04-30T03:27:21 UTC (1745983641)
Apr 30 03:27:21.927865 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Apr 30 03:27:21.927890 kernel: intel_pstate: CPU model not supported
Apr 30 03:27:21.927906 kernel: efifb: probing for efifb
Apr 30 03:27:21.927923 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k
Apr 30 03:27:21.927963 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Apr 30 03:27:21.927981 kernel: efifb: scrolling: redraw
Apr 30 03:27:21.927998 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 30 03:27:21.928014 kernel: Console: switching to colour frame buffer device 100x37
Apr 30 03:27:21.928031 kernel: fb0: EFI VGA frame buffer device
Apr 30 03:27:21.928048 kernel: pstore: Using crash dump compression: deflate
Apr 30 03:27:21.928068 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 30 03:27:21.928085 kernel: NET: Registered PF_INET6 protocol family
Apr 30 03:27:21.928102 kernel: Segment Routing with IPv6
Apr 30 03:27:21.928118 kernel: In-situ OAM (IOAM) with IPv6
Apr 30 03:27:21.928135 kernel: NET: Registered PF_PACKET protocol family
Apr 30 03:27:21.928152 kernel: Key type dns_resolver registered
Apr 30 03:27:21.928195 kernel: IPI shorthand broadcast: enabled
Apr 30 03:27:21.928216 kernel: sched_clock: Marking stable (445003157, 118774501)->(650579334, -86801676)
Apr 30 03:27:21.928233 kernel: registered taskstats version 1
Apr 30 03:27:21.928254 kernel: Loading compiled-in X.509 certificates
Apr 30 03:27:21.928271 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 4a2605119c3649b55d5796c3fe312b2581bff37b'
Apr 30 03:27:21.928289 kernel: Key type .fscrypt registered
Apr 30 03:27:21.928306 kernel: Key type fscrypt-provisioning registered
Apr 30 03:27:21.928323 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 30 03:27:21.928344 kernel: ima: Allocated hash algorithm: sha1
Apr 30 03:27:21.928362 kernel: ima: No architecture policies found
Apr 30 03:27:21.928379 kernel: clk: Disabling unused clocks
Apr 30 03:27:21.928397 kernel: Freeing unused kernel image (initmem) memory: 42864K
Apr 30 03:27:21.928417 kernel: Write protecting the kernel read-only data: 36864k
Apr 30 03:27:21.928435 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K
Apr 30 03:27:21.928452 kernel: Run /init as init process
Apr 30 03:27:21.928470 kernel: with arguments:
Apr 30 03:27:21.928487 kernel: /init
Apr 30 03:27:21.928504 kernel: with environment:
Apr 30 03:27:21.928521 kernel: HOME=/
Apr 30 03:27:21.928538 kernel: TERM=linux
Apr 30 03:27:21.928555 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Apr 30 03:27:21.928579 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 03:27:21.928600 systemd[1]: Detected virtualization amazon.
Apr 30 03:27:21.928618 systemd[1]: Detected architecture x86-64.
Apr 30 03:27:21.928635 systemd[1]: Running in initrd.
Apr 30 03:27:21.928653 systemd[1]: No hostname configured, using default hostname.
Apr 30 03:27:21.928671 systemd[1]: Hostname set to .
Apr 30 03:27:21.928693 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 03:27:21.928711 systemd[1]: Queued start job for default target initrd.target.
Apr 30 03:27:21.928728 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 03:27:21.928746 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 03:27:21.928766 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 30 03:27:21.928784 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 03:27:21.928803 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 30 03:27:21.928825 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 30 03:27:21.928845 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 30 03:27:21.928864 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 30 03:27:21.928882 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 03:27:21.928901 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 03:27:21.928923 systemd[1]: Reached target paths.target - Path Units.
Apr 30 03:27:21.929004 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 03:27:21.929021 systemd[1]: Reached target swap.target - Swaps.
Apr 30 03:27:21.929038 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 03:27:21.929055 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 03:27:21.929071 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 03:27:21.929087 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 30 03:27:21.929105 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 30 03:27:21.929123 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 03:27:21.929146 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 03:27:21.929166 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 03:27:21.929183 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 03:27:21.929202 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 30 03:27:21.929222 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 03:27:21.929240 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 30 03:27:21.929258 systemd[1]: Starting systemd-fsck-usr.service...
Apr 30 03:27:21.929291 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 03:27:21.929310 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 03:27:21.929333 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:27:21.929381 systemd-journald[178]: Collecting audit messages is disabled.
Apr 30 03:27:21.929422 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 30 03:27:21.929444 systemd-journald[178]: Journal started
Apr 30 03:27:21.929474 systemd-journald[178]: Runtime Journal (/run/log/journal/ec21978c30efb16344ef78e9d69e1fce) is 4.7M, max 38.2M, 33.4M free.
Apr 30 03:27:21.941526 systemd-modules-load[179]: Inserted module 'overlay'
Apr 30 03:27:21.942465 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 03:27:21.944978 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 03:27:21.947290 systemd[1]: Finished systemd-fsck-usr.service.
Apr 30 03:27:21.960314 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 03:27:21.972172 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 03:27:21.974862 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 03:27:21.979358 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:27:21.989988 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 30 03:27:21.990659 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 03:27:21.998878 kernel: Bridge firewalling registered
Apr 30 03:27:21.994065 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 03:27:21.995487 systemd-modules-load[179]: Inserted module 'br_netfilter'
Apr 30 03:27:21.998024 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 03:27:22.009059 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 30 03:27:22.009886 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 03:27:22.012404 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 03:27:22.023331 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 03:27:22.029640 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 03:27:22.041184 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 03:27:22.043156 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:27:22.046189 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 30 03:27:22.066893 dracut-cmdline[214]: dracut-dracut-053
Apr 30 03:27:22.071466 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:27:22.081749 systemd-resolved[211]: Positive Trust Anchors:
Apr 30 03:27:22.081770 systemd-resolved[211]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 03:27:22.081837 systemd-resolved[211]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 03:27:22.090414 systemd-resolved[211]: Defaulting to hostname 'linux'.
Apr 30 03:27:22.092050 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 03:27:22.093584 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 03:27:22.154983 kernel: SCSI subsystem initialized
Apr 30 03:27:22.164955 kernel: Loading iSCSI transport class v2.0-870.
Apr 30 03:27:22.176971 kernel: iscsi: registered transport (tcp)
Apr 30 03:27:22.198214 kernel: iscsi: registered transport (qla4xxx)
Apr 30 03:27:22.198310 kernel: QLogic iSCSI HBA Driver
Apr 30 03:27:22.237336 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 30 03:27:22.245209 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 30 03:27:22.270303 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 30 03:27:22.270384 kernel: device-mapper: uevent: version 1.0.3
Apr 30 03:27:22.270407 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 30 03:27:22.312959 kernel: raid6: avx512x4 gen() 17680 MB/s
Apr 30 03:27:22.329961 kernel: raid6: avx512x2 gen() 18024 MB/s
Apr 30 03:27:22.346955 kernel: raid6: avx512x1 gen() 18216 MB/s
Apr 30 03:27:22.363956 kernel: raid6: avx2x4 gen() 18288 MB/s
Apr 30 03:27:22.380956 kernel: raid6: avx2x2 gen() 18310 MB/s
Apr 30 03:27:22.398148 kernel: raid6: avx2x1 gen() 13857 MB/s
Apr 30 03:27:22.398193 kernel: raid6: using algorithm avx2x2 gen() 18310 MB/s
Apr 30 03:27:22.416961 kernel: raid6: .... xor() 17462 MB/s, rmw enabled
Apr 30 03:27:22.417008 kernel: raid6: using avx512x2 recovery algorithm
Apr 30 03:27:22.437962 kernel: xor: automatically using best checksumming function avx
Apr 30 03:27:22.600963 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 30 03:27:22.611767 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 03:27:22.621213 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 03:27:22.634606 systemd-udevd[396]: Using default interface naming scheme 'v255'.
Apr 30 03:27:22.639623 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 03:27:22.646166 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 30 03:27:22.668407 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation
Apr 30 03:27:22.698535 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 03:27:22.704156 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 03:27:22.753887 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 03:27:22.765147 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 30 03:27:22.789488 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 30 03:27:22.792368 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 03:27:22.794110 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 03:27:22.795263 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 03:27:22.801161 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 30 03:27:22.837710 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 03:27:22.860954 kernel: cryptd: max_cpu_qlen set to 1000
Apr 30 03:27:22.879086 kernel: ena 0000:00:05.0: ENA device version: 0.10
Apr 30 03:27:22.917171 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Apr 30 03:27:22.917382 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 30 03:27:22.917405 kernel: AES CTR mode by8 optimization enabled
Apr 30 03:27:22.917432 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Apr 30 03:27:22.917608 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:00:ec:a0:ce:95
Apr 30 03:27:22.894997 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 03:27:22.895253 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:27:22.896158 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 03:27:22.896769 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 03:27:22.897059 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:27:22.901620 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:27:22.931620 kernel: nvme nvme0: pci function 0000:00:04.0
Apr 30 03:27:22.931863 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Apr 30 03:27:22.911590 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:27:22.921332 (udev-worker)[456]: Network interface NamePolicy= disabled on kernel command line.
Apr 30 03:27:22.924738 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 03:27:22.925135 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:27:22.938295 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:27:22.953965 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Apr 30 03:27:22.958207 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:27:22.963200 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 30 03:27:22.963267 kernel: GPT:9289727 != 16777215
Apr 30 03:27:22.963288 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 30 03:27:22.965555 kernel: GPT:9289727 != 16777215
Apr 30 03:27:22.966208 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 30 03:27:22.966249 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 30 03:27:22.970147 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 03:27:22.987459 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:27:23.066961 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (443)
Apr 30 03:27:23.094959 kernel: BTRFS: device fsid 24af5149-14c0-4f50-b6d3-2f5c9259df26 devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (442)
Apr 30 03:27:23.096984 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Apr 30 03:27:23.134271 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 30 03:27:23.148329 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Apr 30 03:27:23.166518 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Apr 30 03:27:23.167237 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Apr 30 03:27:23.174127 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 30 03:27:23.181295 disk-uuid[629]: Primary Header is updated.
Apr 30 03:27:23.181295 disk-uuid[629]: Secondary Entries is updated.
Apr 30 03:27:23.181295 disk-uuid[629]: Secondary Header is updated.
Apr 30 03:27:23.188951 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 30 03:27:23.194399 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 30 03:27:24.205954 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 30 03:27:24.206156 disk-uuid[630]: The operation has completed successfully.
Apr 30 03:27:24.343135 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 30 03:27:24.343257 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 30 03:27:24.371210 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 30 03:27:24.375852 sh[973]: Success
Apr 30 03:27:24.403959 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Apr 30 03:27:24.532035 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 30 03:27:24.540048 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 30 03:27:24.541995 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 30 03:27:24.577373 kernel: BTRFS info (device dm-0): first mount of filesystem 24af5149-14c0-4f50-b6d3-2f5c9259df26
Apr 30 03:27:24.577438 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:27:24.577462 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 30 03:27:24.579203 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 30 03:27:24.580399 kernel: BTRFS info (device dm-0): using free space tree
Apr 30 03:27:24.695969 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 30 03:27:24.709599 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 30 03:27:24.710656 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 30 03:27:24.722203 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 30 03:27:24.726147 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 30 03:27:24.757225 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:27:24.757307 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:27:24.757331 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 30 03:27:24.765964 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 30 03:27:24.777509 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 30 03:27:24.780434 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:27:24.787397 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 30 03:27:24.795237 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 30 03:27:24.824950 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 03:27:24.830155 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 03:27:24.861442 systemd-networkd[1165]: lo: Link UP
Apr 30 03:27:24.861452 systemd-networkd[1165]: lo: Gained carrier
Apr 30 03:27:24.863329 systemd-networkd[1165]: Enumeration completed
Apr 30 03:27:24.863453 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 03:27:24.864082 systemd-networkd[1165]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:27:24.864087 systemd-networkd[1165]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 03:27:24.864165 systemd[1]: Reached target network.target - Network.
Apr 30 03:27:24.867624 systemd-networkd[1165]: eth0: Link UP
Apr 30 03:27:24.867630 systemd-networkd[1165]: eth0: Gained carrier
Apr 30 03:27:24.867646 systemd-networkd[1165]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:27:24.884045 systemd-networkd[1165]: eth0: DHCPv4 address 172.31.16.5/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 30 03:27:25.223497 ignition[1122]: Ignition 2.19.0
Apr 30 03:27:25.223509 ignition[1122]: Stage: fetch-offline
Apr 30 03:27:25.223707 ignition[1122]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:27:25.223716 ignition[1122]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 30 03:27:25.223986 ignition[1122]: Ignition finished successfully
Apr 30 03:27:25.225645 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 03:27:25.230180 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 30 03:27:25.245619 ignition[1174]: Ignition 2.19.0
Apr 30 03:27:25.245634 ignition[1174]: Stage: fetch
Apr 30 03:27:25.246114 ignition[1174]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:27:25.246128 ignition[1174]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 30 03:27:25.246260 ignition[1174]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 30 03:27:25.292566 ignition[1174]: PUT result: OK
Apr 30 03:27:25.295225 ignition[1174]: parsed url from cmdline: ""
Apr 30 03:27:25.295236 ignition[1174]: no config URL provided
Apr 30 03:27:25.295243 ignition[1174]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 03:27:25.295255 ignition[1174]: no config at "/usr/lib/ignition/user.ign"
Apr 30 03:27:25.295275 ignition[1174]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 30 03:27:25.295864 ignition[1174]: PUT result: OK
Apr 30 03:27:25.295904 ignition[1174]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Apr 30 03:27:25.297333 ignition[1174]: GET result: OK
Apr 30 03:27:25.297417 ignition[1174]: parsing config with SHA512: f141a1f3185926665a9c6c455691cf2171a403e316e09cc65b0c6832ac85810195e7ca476272e918307cec8948bd5cb0701c1209f9a2bafc234d2bc3722a00da
Apr 30 03:27:25.301359 unknown[1174]: fetched base config from "system"
Apr 30 03:27:25.301374 unknown[1174]: fetched base config from "system"
Apr 30 03:27:25.302739 ignition[1174]: fetch: fetch complete
Apr 30 03:27:25.301384 unknown[1174]: fetched user config from "aws"
Apr 30 03:27:25.302748 ignition[1174]: fetch: fetch passed
Apr 30 03:27:25.302819 ignition[1174]: Ignition finished successfully
Apr 30 03:27:25.306357 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 30 03:27:25.311191 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 30 03:27:25.326837 ignition[1180]: Ignition 2.19.0
Apr 30 03:27:25.326851 ignition[1180]: Stage: kargs
Apr 30 03:27:25.327420 ignition[1180]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:27:25.327435 ignition[1180]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 30 03:27:25.327555 ignition[1180]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 30 03:27:25.328432 ignition[1180]: PUT result: OK
Apr 30 03:27:25.330792 ignition[1180]: kargs: kargs passed
Apr 30 03:27:25.330865 ignition[1180]: Ignition finished successfully
Apr 30 03:27:25.332680 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 30 03:27:25.336238 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 30 03:27:25.352729 ignition[1186]: Ignition 2.19.0
Apr 30 03:27:25.352743 ignition[1186]: Stage: disks
Apr 30 03:27:25.353218 ignition[1186]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:27:25.353232 ignition[1186]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 30 03:27:25.353343 ignition[1186]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 30 03:27:25.354188 ignition[1186]: PUT result: OK
Apr 30 03:27:25.357121 ignition[1186]: disks: disks passed
Apr 30 03:27:25.357198 ignition[1186]: Ignition finished successfully
Apr 30 03:27:25.359020 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 30 03:27:25.359666 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 30 03:27:25.360048 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 30 03:27:25.360566 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 03:27:25.361127 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 03:27:25.361657 systemd[1]: Reached target basic.target - Basic System.
Apr 30 03:27:25.367163 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 30 03:27:25.400411 systemd-fsck[1194]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 30 03:27:25.403236 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 30 03:27:25.408125 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 30 03:27:25.509949 kernel: EXT4-fs (nvme0n1p9): mounted filesystem c246962b-d3a7-4703-a2cb-a633fbca1b76 r/w with ordered data mode. Quota mode: none.
Apr 30 03:27:25.509989 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 30 03:27:25.510884 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 30 03:27:25.528069 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 03:27:25.530024 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 30 03:27:25.531187 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 30 03:27:25.531233 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 30 03:27:25.531257 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 03:27:25.537623 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 30 03:27:25.539321 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 30 03:27:25.552949 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1213)
Apr 30 03:27:25.557017 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:27:25.557068 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:27:25.557091 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 30 03:27:25.571000 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 30 03:27:25.572757 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 03:27:25.867544 initrd-setup-root[1237]: cut: /sysroot/etc/passwd: No such file or directory
Apr 30 03:27:25.884661 initrd-setup-root[1244]: cut: /sysroot/etc/group: No such file or directory
Apr 30 03:27:25.889088 initrd-setup-root[1251]: cut: /sysroot/etc/shadow: No such file or directory
Apr 30 03:27:25.894209 initrd-setup-root[1258]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 30 03:27:26.067132 systemd-networkd[1165]: eth0: Gained IPv6LL
Apr 30 03:27:26.145513 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 30 03:27:26.150085 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 30 03:27:26.152425 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 30 03:27:26.164481 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 30 03:27:26.166273 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:27:26.191963 ignition[1325]: INFO : Ignition 2.19.0 Apr 30 03:27:26.191963 ignition[1325]: INFO : Stage: mount Apr 30 03:27:26.194150 ignition[1325]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 03:27:26.194150 ignition[1325]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 30 03:27:26.194150 ignition[1325]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 30 03:27:26.196144 ignition[1325]: INFO : PUT result: OK Apr 30 03:27:26.198459 ignition[1325]: INFO : mount: mount passed Apr 30 03:27:26.202010 ignition[1325]: INFO : Ignition finished successfully Apr 30 03:27:26.200736 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 30 03:27:26.207076 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 30 03:27:26.210952 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 30 03:27:26.216712 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 03:27:26.251959 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1337) Apr 30 03:27:26.255843 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:27:26.255910 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:27:26.255924 kernel: BTRFS info (device nvme0n1p6): using free space tree Apr 30 03:27:26.263951 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Apr 30 03:27:26.266563 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 30 03:27:26.286429 ignition[1354]: INFO : Ignition 2.19.0 Apr 30 03:27:26.286429 ignition[1354]: INFO : Stage: files Apr 30 03:27:26.287727 ignition[1354]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 03:27:26.287727 ignition[1354]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 30 03:27:26.287727 ignition[1354]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 30 03:27:26.288765 ignition[1354]: INFO : PUT result: OK Apr 30 03:27:26.290522 ignition[1354]: DEBUG : files: compiled without relabeling support, skipping Apr 30 03:27:26.302531 ignition[1354]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 30 03:27:26.302531 ignition[1354]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 30 03:27:26.348479 ignition[1354]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 30 03:27:26.349264 ignition[1354]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 30 03:27:26.349264 ignition[1354]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 30 03:27:26.348966 unknown[1354]: wrote ssh authorized keys file for user: core Apr 30 03:27:26.351802 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Apr 30 03:27:26.352438 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Apr 30 03:27:26.458659 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 30 03:27:26.757040 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Apr 30 
03:27:26.757040 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 30 03:27:26.759180 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Apr 30 03:27:26.759180 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 30 03:27:26.759180 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 30 03:27:26.759180 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 03:27:26.759180 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 03:27:26.759180 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 03:27:26.759180 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 03:27:26.759180 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 03:27:26.759180 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 03:27:26.759180 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Apr 30 03:27:26.759180 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Apr 30 03:27:26.759180 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Apr 30 03:27:26.759180 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Apr 30 03:27:27.153648 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 30 03:27:28.059512 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Apr 30 03:27:28.059512 ignition[1354]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 30 03:27:28.061595 ignition[1354]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 03:27:28.061595 ignition[1354]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 03:27:28.061595 ignition[1354]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 30 03:27:28.061595 ignition[1354]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Apr 30 03:27:28.061595 ignition[1354]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Apr 30 03:27:28.061595 ignition[1354]: INFO : files: createResultFile: 
createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 30 03:27:28.066532 ignition[1354]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 30 03:27:28.066532 ignition[1354]: INFO : files: files passed Apr 30 03:27:28.066532 ignition[1354]: INFO : Ignition finished successfully Apr 30 03:27:28.063025 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 30 03:27:28.078188 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 30 03:27:28.080832 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 30 03:27:28.084217 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 30 03:27:28.084322 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 30 03:27:28.094750 initrd-setup-root-after-ignition[1382]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 03:27:28.094750 initrd-setup-root-after-ignition[1382]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 30 03:27:28.097405 initrd-setup-root-after-ignition[1386]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 03:27:28.098717 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 03:27:28.099751 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 30 03:27:28.105125 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 30 03:27:28.130518 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 30 03:27:28.130660 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 30 03:27:28.132013 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 30 03:27:28.133169 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 30 03:27:28.133950 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 30 03:27:28.141169 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 30 03:27:28.155119 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 03:27:28.160161 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 30 03:27:28.173715 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 30 03:27:28.174522 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 03:27:28.175567 systemd[1]: Stopped target timers.target - Timer Units. Apr 30 03:27:28.176406 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 30 03:27:28.176586 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 03:27:28.177714 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 30 03:27:28.178533 systemd[1]: Stopped target basic.target - Basic System. Apr 30 03:27:28.179408 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 30 03:27:28.180161 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 03:27:28.180898 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 30 03:27:28.181672 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
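Editor's note: op(3) in the files stage downloaded helm-v3.13.2-linux-amd64.tar.gz into /sysroot/opt, and op(c) wrote a prepare-helm.service unit (described later in the log as "Unpack helm to /opt/bin"). A hedged sketch of what that unpack step plausibly does on first boot; the tarball path follows from the files-stage log, /opt/bin from the unit description, and the member path linux-amd64/helm is Helm's usual release layout rather than anything shown here.

```go
package main

import (
	"archive/tar"
	"compress/gzip"
	"io"
	"log"
	"os"
)

func main() {
	f, err := os.Open("/opt/helm-v3.13.2-linux-amd64.tar.gz")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	gz, err := gzip.NewReader(f)
	if err != nil {
		log.Fatal(err)
	}
	tr := tar.NewReader(gz)

	// Walk the archive until we reach the helm binary, then copy it out
	// with the executable bit set.
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			log.Fatal("linux-amd64/helm not found in archive")
		}
		if err != nil {
			log.Fatal(err)
		}
		if hdr.Name != "linux-amd64/helm" {
			continue
		}
		out, err := os.OpenFile("/opt/bin/helm", os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
		if err != nil {
			log.Fatal(err)
		}
		if _, err := io.Copy(out, tr); err != nil {
			log.Fatal(err)
		}
		out.Close()
		return
	}
}
```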
Apr 30 03:27:28.182425 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 03:27:28.183327 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 30 03:27:28.184446 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 30 03:27:28.185198 systemd[1]: Stopped target swap.target - Swaps. Apr 30 03:27:28.185887 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 30 03:27:28.186088 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 30 03:27:28.187241 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 30 03:27:28.188041 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 03:27:28.188696 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 30 03:27:28.188843 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 03:27:28.189485 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 30 03:27:28.189656 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 30 03:27:28.191056 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 30 03:27:28.191236 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 03:27:28.191993 systemd[1]: ignition-files.service: Deactivated successfully. Apr 30 03:27:28.192147 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 30 03:27:28.200700 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 30 03:27:28.202112 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 30 03:27:28.202328 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 03:27:28.208337 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 30 03:27:28.209753 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 30 03:27:28.210631 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 03:27:28.212253 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 30 03:27:28.213125 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 03:27:28.222522 ignition[1406]: INFO : Ignition 2.19.0 Apr 30 03:27:28.224342 ignition[1406]: INFO : Stage: umount Apr 30 03:27:28.224342 ignition[1406]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 03:27:28.224342 ignition[1406]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 30 03:27:28.224342 ignition[1406]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 30 03:27:28.227699 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 30 03:27:28.230033 ignition[1406]: INFO : PUT result: OK Apr 30 03:27:28.227834 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 30 03:27:28.234237 ignition[1406]: INFO : umount: umount passed Apr 30 03:27:28.234237 ignition[1406]: INFO : Ignition finished successfully Apr 30 03:27:28.235152 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 30 03:27:28.235297 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 30 03:27:28.235991 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 30 03:27:28.236081 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 30 03:27:28.237330 systemd[1]: ignition-kargs.service: Deactivated successfully. 
Apr 30 03:27:28.237391 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 30 03:27:28.239974 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 30 03:27:28.240067 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 30 03:27:28.240819 systemd[1]: Stopped target network.target - Network. Apr 30 03:27:28.242084 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 30 03:27:28.242157 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 03:27:28.242661 systemd[1]: Stopped target paths.target - Path Units. Apr 30 03:27:28.244011 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 30 03:27:28.245166 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 03:27:28.245656 systemd[1]: Stopped target slices.target - Slice Units. Apr 30 03:27:28.246598 systemd[1]: Stopped target sockets.target - Socket Units. Apr 30 03:27:28.248043 systemd[1]: iscsid.socket: Deactivated successfully. Apr 30 03:27:28.248100 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 03:27:28.248598 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 30 03:27:28.248641 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 03:27:28.249121 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 30 03:27:28.249186 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 30 03:27:28.250420 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 30 03:27:28.250478 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 30 03:27:28.251269 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 30 03:27:28.251774 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 30 03:27:28.253967 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 30 03:27:28.259150 systemd-networkd[1165]: eth0: DHCPv6 lease lost Apr 30 03:27:28.261145 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 30 03:27:28.261290 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 30 03:27:28.262440 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 30 03:27:28.262484 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 30 03:27:28.266154 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 30 03:27:28.266644 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 30 03:27:28.266716 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 03:27:28.268178 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 03:27:28.272732 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 30 03:27:28.272862 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 30 03:27:28.275600 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 03:27:28.275704 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 03:27:28.277818 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 30 03:27:28.277883 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 30 03:27:28.278438 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 30 03:27:28.278498 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Apr 30 03:27:28.288455 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 30 03:27:28.288653 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 03:27:28.292140 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 30 03:27:28.292204 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 30 03:27:28.292840 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 30 03:27:28.292886 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 03:27:28.293427 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 30 03:27:28.293488 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 30 03:27:28.294673 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 30 03:27:28.294731 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 30 03:27:28.295937 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 03:27:28.296001 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 03:27:28.303233 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 30 03:27:28.304778 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 30 03:27:28.305541 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 03:27:28.306222 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 03:27:28.306286 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:27:28.310442 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 30 03:27:28.310579 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 30 03:27:28.313851 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 30 03:27:28.314029 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 30 03:27:28.446396 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 30 03:27:28.446533 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 30 03:27:28.448250 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 30 03:27:28.448776 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 30 03:27:28.448862 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 30 03:27:28.454162 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 30 03:27:28.464740 systemd[1]: Switching root. Apr 30 03:27:28.506533 systemd-journald[178]: Journal stopped Apr 30 03:27:30.530792 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). 
Apr 30 03:27:30.530913 kernel: SELinux: policy capability network_peer_controls=1 Apr 30 03:27:30.532978 kernel: SELinux: policy capability open_perms=1 Apr 30 03:27:30.533013 kernel: SELinux: policy capability extended_socket_class=1 Apr 30 03:27:30.533042 kernel: SELinux: policy capability always_check_network=0 Apr 30 03:27:30.533063 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 30 03:27:30.533090 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 30 03:27:30.533112 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 30 03:27:30.533133 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 30 03:27:30.533155 kernel: audit: type=1403 audit(1745983649.096:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 30 03:27:30.533178 systemd[1]: Successfully loaded SELinux policy in 82.267ms. Apr 30 03:27:30.533210 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.601ms. Apr 30 03:27:30.533238 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 30 03:27:30.533260 systemd[1]: Detected virtualization amazon. Apr 30 03:27:30.533284 systemd[1]: Detected architecture x86-64. Apr 30 03:27:30.533306 systemd[1]: Detected first boot. Apr 30 03:27:30.533328 systemd[1]: Initializing machine ID from VM UUID. Apr 30 03:27:30.533350 zram_generator::config[1448]: No configuration found. Apr 30 03:27:30.533379 systemd[1]: Populated /etc with preset unit settings. Apr 30 03:27:30.533402 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 30 03:27:30.533427 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 30 03:27:30.533449 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 30 03:27:30.533474 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 30 03:27:30.533496 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 30 03:27:30.533518 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 30 03:27:30.533541 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 30 03:27:30.533564 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 30 03:27:30.533587 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 30 03:27:30.533613 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 30 03:27:30.533636 systemd[1]: Created slice user.slice - User and Session Slice. Apr 30 03:27:30.533658 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 03:27:30.533680 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 03:27:30.533702 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 30 03:27:30.533724 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 30 03:27:30.533748 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
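Editor's note: "Initializing machine ID from VM UUID" above is systemd deriving /etc/machine-id from the hypervisor-provided DMI product UUID on first boot. A sketch of that derivation under the assumption that the UUID comes from /sys/class/dmi/id/product_uuid and is normalized to the 32-hex-digit machine-id form; any further normalization systemd applies is not visible in the log.

```go
package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	raw, err := os.ReadFile("/sys/class/dmi/id/product_uuid")
	if err != nil {
		log.Fatal(err)
	}
	// machine-id is 32 lowercase hex digits: strip whitespace and the
	// dashes from the DMI UUID and lowercase what remains.
	id := strings.ToLower(strings.TrimSpace(string(raw)))
	id = strings.ReplaceAll(id, "-", "")
	if len(id) != 32 {
		log.Fatalf("unexpected UUID %q", id)
	}
	fmt.Println(id)
}
```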
Apr 30 03:27:30.533771 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 03:27:30.533793 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 30 03:27:30.533818 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 03:27:30.533840 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 30 03:27:30.533862 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 30 03:27:30.533885 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 30 03:27:30.533907 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 30 03:27:30.536567 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 03:27:30.536628 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 03:27:30.536651 systemd[1]: Reached target slices.target - Slice Units. Apr 30 03:27:30.536678 systemd[1]: Reached target swap.target - Swaps. Apr 30 03:27:30.536700 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 30 03:27:30.536721 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 30 03:27:30.536744 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 03:27:30.536765 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 03:27:30.536787 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 03:27:30.536808 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 30 03:27:30.536830 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 30 03:27:30.536851 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 30 03:27:30.536876 systemd[1]: Mounting media.mount - External Media Directory... Apr 30 03:27:30.536898 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:27:30.536919 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 30 03:27:30.536956 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 30 03:27:30.536983 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 30 03:27:30.537006 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 30 03:27:30.537028 systemd[1]: Reached target machines.target - Containers. Apr 30 03:27:30.537049 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 30 03:27:30.537071 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:27:30.537097 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 03:27:30.537118 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 30 03:27:30.537140 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 03:27:30.537162 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 03:27:30.537183 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 03:27:30.537205 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Apr 30 03:27:30.537226 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 03:27:30.537248 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 30 03:27:30.537272 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 30 03:27:30.537293 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 30 03:27:30.537315 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 30 03:27:30.537336 systemd[1]: Stopped systemd-fsck-usr.service. Apr 30 03:27:30.537358 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 03:27:30.537380 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 03:27:30.537400 kernel: fuse: init (API version 7.39) Apr 30 03:27:30.537422 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 30 03:27:30.537444 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 30 03:27:30.537470 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 03:27:30.537491 systemd[1]: verity-setup.service: Deactivated successfully. Apr 30 03:27:30.537513 systemd[1]: Stopped verity-setup.service. Apr 30 03:27:30.537534 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:27:30.537556 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 30 03:27:30.537578 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 30 03:27:30.537599 systemd[1]: Mounted media.mount - External Media Directory. Apr 30 03:27:30.537656 systemd-journald[1533]: Collecting audit messages is disabled. Apr 30 03:27:30.537700 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 30 03:27:30.537721 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 30 03:27:30.537742 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 30 03:27:30.537764 kernel: loop: module loaded Apr 30 03:27:30.537784 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 03:27:30.537810 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 30 03:27:30.537831 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 30 03:27:30.537853 systemd-journald[1533]: Journal started Apr 30 03:27:30.537894 systemd-journald[1533]: Runtime Journal (/run/log/journal/ec21978c30efb16344ef78e9d69e1fce) is 4.7M, max 38.2M, 33.4M free. Apr 30 03:27:30.541724 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 03:27:30.541779 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 03:27:30.153508 systemd[1]: Queued start job for default target multi-user.target. Apr 30 03:27:30.544714 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 03:27:30.206311 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Apr 30 03:27:30.206768 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 30 03:27:30.547319 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 03:27:30.547998 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Apr 30 03:27:30.548888 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 30 03:27:30.550041 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 30 03:27:30.551918 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 03:27:30.552176 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 03:27:30.554418 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 03:27:30.555746 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 30 03:27:30.560518 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 30 03:27:30.603725 kernel: ACPI: bus type drm_connector registered Apr 30 03:27:30.606239 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 03:27:30.607004 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 03:27:30.609823 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 30 03:27:30.615044 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 30 03:27:30.623029 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 30 03:27:30.630528 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 30 03:27:30.632020 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 30 03:27:30.632067 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 03:27:30.635677 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 30 03:27:30.644140 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 30 03:27:30.655081 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 30 03:27:30.656105 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:27:30.662744 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 30 03:27:30.664948 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 30 03:27:30.665724 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 03:27:30.672354 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 30 03:27:30.673048 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 03:27:30.675118 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 03:27:30.679104 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 30 03:27:30.690292 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 30 03:27:30.698700 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 03:27:30.699790 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 30 03:27:30.707028 systemd-journald[1533]: Time spent on flushing to /var/log/journal/ec21978c30efb16344ef78e9d69e1fce is 69.821ms for 982 entries. Apr 30 03:27:30.707028 systemd-journald[1533]: System Journal (/var/log/journal/ec21978c30efb16344ef78e9d69e1fce) is 8.0M, max 195.6M, 187.6M free. 
Apr 30 03:27:30.794453 systemd-journald[1533]: Received client request to flush runtime journal. Apr 30 03:27:30.794521 kernel: loop0: detected capacity change from 0 to 210664 Apr 30 03:27:30.702161 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 30 03:27:30.703076 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 30 03:27:30.704198 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 30 03:27:30.714343 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 30 03:27:30.724364 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 30 03:27:30.735651 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 30 03:27:30.759430 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 03:27:30.792584 udevadm[1587]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Apr 30 03:27:30.795792 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 30 03:27:30.823772 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 30 03:27:30.825303 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 30 03:27:30.835962 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 30 03:27:30.848410 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 30 03:27:30.857637 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 03:27:30.879062 kernel: loop1: detected capacity change from 0 to 140768 Apr 30 03:27:30.886172 systemd-tmpfiles[1597]: ACLs are not supported, ignoring. Apr 30 03:27:30.886201 systemd-tmpfiles[1597]: ACLs are not supported, ignoring. Apr 30 03:27:30.893033 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 03:27:31.035961 kernel: loop2: detected capacity change from 0 to 142488 Apr 30 03:27:31.182100 kernel: loop3: detected capacity change from 0 to 61336 Apr 30 03:27:31.306971 kernel: loop4: detected capacity change from 0 to 210664 Apr 30 03:27:31.350965 kernel: loop5: detected capacity change from 0 to 140768 Apr 30 03:27:31.379953 kernel: loop6: detected capacity change from 0 to 142488 Apr 30 03:27:31.414981 kernel: loop7: detected capacity change from 0 to 61336 Apr 30 03:27:31.433289 (sd-merge)[1604]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Apr 30 03:27:31.436115 (sd-merge)[1604]: Merged extensions into '/usr'. Apr 30 03:27:31.442078 systemd[1]: Reloading requested from client PID 1577 ('systemd-sysext') (unit systemd-sysext.service)... Apr 30 03:27:31.442240 systemd[1]: Reloading... Apr 30 03:27:31.530959 zram_generator::config[1626]: No configuration found. Apr 30 03:27:31.736354 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:27:31.812196 systemd[1]: Reloading finished in 369 ms. Apr 30 03:27:31.841917 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 30 03:27:31.842727 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
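Editor's note: the (sd-merge) lines above show systemd-sysext merging four extension images ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami') into /usr; the kubernetes.raw symlink it finds in /etc/extensions was written by Ignition earlier in this log. Per the systemd-sysext documentation, extensions are discovered in /etc/extensions, /run/extensions and /var/lib/extensions; a minimal sketch that lists what such a scan would find.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// The hierarchies systemd-sysext scans for *.raw images or directory
	// trees to overlay onto /usr and /opt.
	dirs := []string{"/etc/extensions", "/run/extensions", "/var/lib/extensions"}
	for _, dir := range dirs {
		entries, err := os.ReadDir(dir)
		if err != nil {
			continue // the directory may simply not exist
		}
		for _, e := range entries {
			fmt.Println(filepath.Join(dir, e.Name()))
		}
	}
}
```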
Apr 30 03:27:31.851248 systemd[1]: Starting ensure-sysext.service... Apr 30 03:27:31.854714 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 03:27:31.860477 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 03:27:31.871201 systemd[1]: Reloading requested from client PID 1682 ('systemctl') (unit ensure-sysext.service)... Apr 30 03:27:31.871217 systemd[1]: Reloading... Apr 30 03:27:31.905900 systemd-tmpfiles[1683]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 30 03:27:31.906411 systemd-tmpfiles[1683]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 30 03:27:31.913179 systemd-tmpfiles[1683]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 30 03:27:31.913642 systemd-tmpfiles[1683]: ACLs are not supported, ignoring. Apr 30 03:27:31.913738 systemd-tmpfiles[1683]: ACLs are not supported, ignoring. Apr 30 03:27:31.922151 systemd-tmpfiles[1683]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 03:27:31.922170 systemd-tmpfiles[1683]: Skipping /boot Apr 30 03:27:31.937896 systemd-udevd[1684]: Using default interface naming scheme 'v255'. Apr 30 03:27:31.951254 systemd-tmpfiles[1683]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 03:27:31.951271 systemd-tmpfiles[1683]: Skipping /boot Apr 30 03:27:32.019961 zram_generator::config[1711]: No configuration found. Apr 30 03:27:32.140986 (udev-worker)[1729]: Network interface NamePolicy= disabled on kernel command line. Apr 30 03:27:32.301959 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Apr 30 03:27:32.322982 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Apr 30 03:27:32.332776 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Apr 30 03:27:32.345992 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:27:32.358178 kernel: ACPI: button: Power Button [PWRF] Apr 30 03:27:32.358296 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1728) Apr 30 03:27:32.358980 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Apr 30 03:27:32.379804 ldconfig[1572]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 30 03:27:32.393960 kernel: ACPI: button: Sleep Button [SLPF] Apr 30 03:27:32.507707 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 30 03:27:32.507950 systemd[1]: Reloading finished in 636 ms. Apr 30 03:27:32.518953 kernel: mousedev: PS/2 mouse device common for all mice Apr 30 03:27:32.527878 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 03:27:32.531809 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 30 03:27:32.536621 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 03:27:32.588816 systemd[1]: Finished ensure-sysext.service. Apr 30 03:27:32.594740 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
Apr 30 03:27:32.608529 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Apr 30 03:27:32.609550 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:27:32.614196 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 30 03:27:32.620175 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 30 03:27:32.622875 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:27:32.626170 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 30 03:27:32.631967 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 03:27:32.635152 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 03:27:32.638781 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 03:27:32.643570 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 03:27:32.644752 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:27:32.655026 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 30 03:27:32.658433 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 30 03:27:32.672173 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 03:27:32.675861 lvm[1879]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 03:27:32.677705 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 03:27:32.678658 systemd[1]: Reached target time-set.target - System Time Set. Apr 30 03:27:32.683451 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 30 03:27:32.691149 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:27:32.692748 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:27:32.695264 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 03:27:32.695495 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 03:27:32.735574 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 30 03:27:32.737191 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 30 03:27:32.740810 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 03:27:32.751713 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 30 03:27:32.780253 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 03:27:32.780468 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 03:27:32.782514 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 03:27:32.782780 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 03:27:32.785151 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 30 03:27:32.787309 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Apr 30 03:27:32.796182 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 03:27:32.801277 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 03:27:32.803020 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 03:27:32.805081 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 03:27:32.813967 lvm[1903]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 03:27:32.833172 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 30 03:27:32.843063 augenrules[1915]: No rules Apr 30 03:27:32.845190 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 30 03:27:32.847595 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 30 03:27:32.849178 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 30 03:27:32.867153 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 30 03:27:32.875346 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 30 03:27:32.878434 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 30 03:27:32.885921 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 30 03:27:32.957062 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:27:32.964343 systemd-resolved[1892]: Positive Trust Anchors: Apr 30 03:27:32.964364 systemd-resolved[1892]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 03:27:32.964413 systemd-resolved[1892]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 03:27:32.970618 systemd-networkd[1891]: lo: Link UP Apr 30 03:27:32.970628 systemd-networkd[1891]: lo: Gained carrier Apr 30 03:27:32.971224 systemd-resolved[1892]: Defaulting to hostname 'linux'. Apr 30 03:27:32.972506 systemd-networkd[1891]: Enumeration completed Apr 30 03:27:32.972645 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 03:27:32.973507 systemd-networkd[1891]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:27:32.973521 systemd-networkd[1891]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 03:27:32.975071 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 03:27:32.975714 systemd[1]: Reached target network.target - Network. 
Apr 30 03:27:32.976102 systemd-networkd[1891]: eth0: Link UP Apr 30 03:27:32.976516 systemd-networkd[1891]: eth0: Gained carrier Apr 30 03:27:32.976609 systemd-networkd[1891]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:27:32.976953 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 03:27:32.977435 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 03:27:32.977973 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 30 03:27:32.978383 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 30 03:27:32.978917 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 30 03:27:32.979617 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 30 03:27:32.979996 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 30 03:27:32.980347 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 30 03:27:32.980389 systemd[1]: Reached target paths.target - Path Units. Apr 30 03:27:32.980739 systemd[1]: Reached target timers.target - Timer Units. Apr 30 03:27:32.983283 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 30 03:27:32.985531 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 30 03:27:32.993077 systemd-networkd[1891]: eth0: DHCPv4 address 172.31.16.5/20, gateway 172.31.16.1 acquired from 172.31.16.1 Apr 30 03:27:32.996224 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 30 03:27:32.998435 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 30 03:27:33.000214 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 30 03:27:33.001197 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 03:27:33.001959 systemd[1]: Reached target basic.target - Basic System. Apr 30 03:27:33.002681 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 30 03:27:33.002807 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 30 03:27:33.009676 systemd[1]: Starting containerd.service - containerd container runtime... Apr 30 03:27:33.012760 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 30 03:27:33.015413 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 30 03:27:33.025080 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 30 03:27:33.028654 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 30 03:27:33.029546 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 30 03:27:33.039069 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 30 03:27:33.044172 systemd[1]: Started ntpd.service - Network Time Service. Apr 30 03:27:33.053170 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 30 03:27:33.058088 systemd[1]: Starting setup-oem.service - Setup OEM... 
Apr 30 03:27:33.066130 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 30 03:27:33.069829 jq[1942]: false Apr 30 03:27:33.078161 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 30 03:27:33.086163 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 30 03:27:33.087519 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 30 03:27:33.089363 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 30 03:27:33.096226 systemd[1]: Starting update-engine.service - Update Engine... Apr 30 03:27:33.109230 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 30 03:27:33.116376 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 30 03:27:33.116613 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 30 03:27:33.128106 systemd[1]: motdgen.service: Deactivated successfully. Apr 30 03:27:33.128539 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 30 03:27:33.140519 extend-filesystems[1943]: Found loop4 Apr 30 03:27:33.142359 extend-filesystems[1943]: Found loop5 Apr 30 03:27:33.142359 extend-filesystems[1943]: Found loop6 Apr 30 03:27:33.142359 extend-filesystems[1943]: Found loop7 Apr 30 03:27:33.142359 extend-filesystems[1943]: Found nvme0n1 Apr 30 03:27:33.142359 extend-filesystems[1943]: Found nvme0n1p1 Apr 30 03:27:33.142359 extend-filesystems[1943]: Found nvme0n1p2 Apr 30 03:27:33.142359 extend-filesystems[1943]: Found nvme0n1p3 Apr 30 03:27:33.142359 extend-filesystems[1943]: Found usr Apr 30 03:27:33.142359 extend-filesystems[1943]: Found nvme0n1p4 Apr 30 03:27:33.142359 extend-filesystems[1943]: Found nvme0n1p6 Apr 30 03:27:33.142359 extend-filesystems[1943]: Found nvme0n1p7 Apr 30 03:27:33.142359 extend-filesystems[1943]: Found nvme0n1p9 Apr 30 03:27:33.142359 extend-filesystems[1943]: Checking size of /dev/nvme0n1p9 Apr 30 03:27:33.184662 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 30 03:27:33.184923 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 30 03:27:33.195584 dbus-daemon[1941]: [system] SELinux support is enabled Apr 30 03:27:33.196069 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 30 03:27:33.206969 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 30 03:27:33.207021 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 30 03:27:33.210182 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 30 03:27:33.210222 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Apr 30 03:27:33.210481 extend-filesystems[1943]: Resized partition /dev/nvme0n1p9 Apr 30 03:27:33.212084 ntpd[1945]: ntpd 4.2.8p17@1.4004-o Tue Apr 29 22:12:23 UTC 2025 (1): Starting Apr 30 03:27:33.213212 ntpd[1945]: 30 Apr 03:27:33 ntpd[1945]: ntpd 4.2.8p17@1.4004-o Tue Apr 29 22:12:23 UTC 2025 (1): Starting Apr 30 03:27:33.213212 ntpd[1945]: 30 Apr 03:27:33 ntpd[1945]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Apr 30 03:27:33.213212 ntpd[1945]: 30 Apr 03:27:33 ntpd[1945]: ---------------------------------------------------- Apr 30 03:27:33.213212 ntpd[1945]: 30 Apr 03:27:33 ntpd[1945]: ntp-4 is maintained by Network Time Foundation, Apr 30 03:27:33.213212 ntpd[1945]: 30 Apr 03:27:33 ntpd[1945]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Apr 30 03:27:33.213212 ntpd[1945]: 30 Apr 03:27:33 ntpd[1945]: corporation. Support and training for ntp-4 are Apr 30 03:27:33.213212 ntpd[1945]: 30 Apr 03:27:33 ntpd[1945]: available at https://www.nwtime.org/support Apr 30 03:27:33.213212 ntpd[1945]: 30 Apr 03:27:33 ntpd[1945]: ---------------------------------------------------- Apr 30 03:27:33.212110 ntpd[1945]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Apr 30 03:27:33.212120 ntpd[1945]: ---------------------------------------------------- Apr 30 03:27:33.232088 ntpd[1945]: 30 Apr 03:27:33 ntpd[1945]: proto: precision = 0.080 usec (-23) Apr 30 03:27:33.232088 ntpd[1945]: 30 Apr 03:27:33 ntpd[1945]: basedate set to 2025-04-17 Apr 30 03:27:33.232088 ntpd[1945]: 30 Apr 03:27:33 ntpd[1945]: gps base set to 2025-04-20 (week 2363) Apr 30 03:27:33.232264 extend-filesystems[1984]: resize2fs 1.47.1 (20-May-2024) Apr 30 03:27:33.212129 ntpd[1945]: ntp-4 is maintained by Network Time Foundation, Apr 30 03:27:33.212139 ntpd[1945]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Apr 30 03:27:33.252995 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Apr 30 03:27:33.212147 ntpd[1945]: corporation. Support and training for ntp-4 are Apr 30 03:27:33.212157 ntpd[1945]: available at https://www.nwtime.org/support Apr 30 03:27:33.212166 ntpd[1945]: ---------------------------------------------------- Apr 30 03:27:33.221398 dbus-daemon[1941]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1891 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Apr 30 03:27:33.228497 ntpd[1945]: proto: precision = 0.080 usec (-23) Apr 30 03:27:33.254244 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Apr 30 03:27:33.228820 ntpd[1945]: basedate set to 2025-04-17 Apr 30 03:27:33.228834 ntpd[1945]: gps base set to 2025-04-20 (week 2363) Apr 30 03:27:33.237734 dbus-daemon[1941]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 30 03:27:33.268708 jq[1958]: true Apr 30 03:27:33.269530 ntpd[1945]: Listen and drop on 0 v6wildcard [::]:123 Apr 30 03:27:33.274352 ntpd[1945]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Apr 30 03:27:33.285324 tar[1961]: linux-amd64/helm Apr 30 03:27:33.276150 ntpd[1945]: Listen normally on 2 lo 127.0.0.1:123 Apr 30 03:27:33.285512 (ntainerd)[1987]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 30 03:27:33.276201 ntpd[1945]: Listen normally on 3 eth0 172.31.16.5:123 Apr 30 03:27:33.276246 ntpd[1945]: Listen normally on 4 lo [::1]:123 Apr 30 03:27:33.276298 ntpd[1945]: bind(21) AF_INET6 fe80::400:ecff:fea0:ce95%2#123 flags 0x11 failed: Cannot assign requested address Apr 30 03:27:33.276321 ntpd[1945]: unable to create socket on eth0 (5) for fe80::400:ecff:fea0:ce95%2#123 Apr 30 03:27:33.276337 ntpd[1945]: failed to init interface for address fe80::400:ecff:fea0:ce95%2 Apr 30 03:27:33.276370 ntpd[1945]: Listening on routing socket on fd #21 for interface updates Apr 30 03:27:33.277822 ntpd[1945]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 30 03:27:33.277852 ntpd[1945]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 30 03:27:33.296788 update_engine[1957]: I20250430 03:27:33.289788 1957 main.cc:92] Flatcar Update Engine starting Apr 30 03:27:33.298746 systemd[1]: Started update-engine.service - Update Engine. Apr 30 03:27:33.310194 update_engine[1957]: I20250430 03:27:33.308632 1957 update_check_scheduler.cc:74] Next update check in 5m38s Apr 30 03:27:33.309053 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 30 03:27:33.317547 systemd[1]: Finished setup-oem.service - Setup OEM.
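Note on the ntpd bind failure above: ntpd comes up before eth0's IPv6 link-local address has finished duplicate address detection, so bind(21) on fe80::400:ecff:fea0:ce95%2 fails with "Cannot assign requested address". This is benign: ntpd listens on the routing socket (fd #21) for interface updates and binds the address once it appears (see "Listen normally on 6 eth0" at 03:27:36 further down). A quick sketch for checking sync state once peers are reachable:

    # List configured peers with reachability and offset
    ntpq -p
    # Dump system variables (stratum, leap status, sync source)
    ntpq -c rv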
Apr 30 03:27:33.346955 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1729) Apr 30 03:27:33.352447 coreos-metadata[1940]: Apr 30 03:27:33.352 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Apr 30 03:27:33.354407 jq[1993]: true Apr 30 03:27:33.357758 coreos-metadata[1940]: Apr 30 03:27:33.355 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Apr 30 03:27:33.357758 coreos-metadata[1940]: Apr 30 03:27:33.355 INFO Fetch successful Apr 30 03:27:33.357758 coreos-metadata[1940]: Apr 30 03:27:33.355 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Apr 30 03:27:33.357758 coreos-metadata[1940]: Apr 30 03:27:33.356 INFO Fetch successful Apr 30 03:27:33.357758 coreos-metadata[1940]: Apr 30 03:27:33.356 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Apr 30 03:27:33.357758 coreos-metadata[1940]: Apr 30 03:27:33.357 INFO Fetch successful Apr 30 03:27:33.357758 coreos-metadata[1940]: Apr 30 03:27:33.357 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Apr 30 03:27:33.370821 coreos-metadata[1940]: Apr 30 03:27:33.370 INFO Fetch successful Apr 30 03:27:33.370821 coreos-metadata[1940]: Apr 30 03:27:33.370 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Apr 30 03:27:33.372181 coreos-metadata[1940]: Apr 30 03:27:33.371 INFO Fetch failed with 404: resource not found Apr 30 03:27:33.372181 coreos-metadata[1940]: Apr 30 03:27:33.371 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Apr 30 03:27:33.372337 coreos-metadata[1940]: Apr 30 03:27:33.372 INFO Fetch successful Apr 30 03:27:33.372337 coreos-metadata[1940]: Apr 30 03:27:33.372 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Apr 30 03:27:33.375321 coreos-metadata[1940]: Apr 30 03:27:33.374 INFO Fetch successful Apr 30 03:27:33.375321 coreos-metadata[1940]: Apr 30 03:27:33.374 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Apr 30 03:27:33.376532 coreos-metadata[1940]: Apr 30 03:27:33.376 INFO Fetch successful Apr 30 03:27:33.376532 coreos-metadata[1940]: Apr 30 03:27:33.376 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Apr 30 03:27:33.377452 coreos-metadata[1940]: Apr 30 03:27:33.377 INFO Fetch successful Apr 30 03:27:33.377558 coreos-metadata[1940]: Apr 30 03:27:33.377 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Apr 30 03:27:33.381008 coreos-metadata[1940]: Apr 30 03:27:33.380 INFO Fetch successful Apr 30 03:27:33.406957 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Apr 30 03:27:33.430745 systemd-logind[1956]: Watching system buttons on /dev/input/event2 (Power Button) Apr 30 03:27:33.430779 systemd-logind[1956]: Watching system buttons on /dev/input/event3 (Sleep Button) Apr 30 03:27:33.430801 systemd-logind[1956]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 30 03:27:33.433083 systemd-logind[1956]: New seat seat0. Apr 30 03:27:33.436898 extend-filesystems[1984]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Apr 30 03:27:33.436898 extend-filesystems[1984]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 30 03:27:33.436898 extend-filesystems[1984]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. 
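Note on the coreos-metadata fetches above: they follow the EC2 IMDSv2 flow, a PUT to mint a session token followed by GETs against the versioned meta-data tree; the single 404 on /meta-data/ipv6 just means no IPv6 address is assigned to the instance. Roughly the equivalent by hand:

    # Mint an IMDSv2 session token, then fetch one of the paths seen above
    TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
      -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
    curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
      "http://169.254.169.254/2021-01-03/meta-data/instance-id"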
Apr 30 03:27:33.454575 extend-filesystems[1943]: Resized filesystem in /dev/nvme0n1p9 Apr 30 03:27:33.447690 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 30 03:27:33.447927 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 30 03:27:33.461165 systemd[1]: Started systemd-logind.service - User Login Management. Apr 30 03:27:33.530392 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 30 03:27:33.532392 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 30 03:27:33.635064 bash[2062]: Updated "/home/core/.ssh/authorized_keys" Apr 30 03:27:33.642894 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 30 03:27:33.660424 systemd[1]: Starting sshkeys.service... Apr 30 03:27:33.746619 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 30 03:27:33.756296 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 30 03:27:33.775287 locksmithd[1999]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 30 03:27:33.867376 dbus-daemon[1941]: [system] Successfully activated service 'org.freedesktop.hostname1' Apr 30 03:27:33.868292 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Apr 30 03:27:33.878806 dbus-daemon[1941]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1991 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Apr 30 03:27:33.891348 systemd[1]: Starting polkit.service - Authorization Manager... Apr 30 03:27:33.950185 polkitd[2124]: Started polkitd version 121 Apr 30 03:27:33.977137 coreos-metadata[2109]: Apr 30 03:27:33.967 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Apr 30 03:27:33.977137 coreos-metadata[2109]: Apr 30 03:27:33.970 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Apr 30 03:27:33.977137 coreos-metadata[2109]: Apr 30 03:27:33.970 INFO Fetch successful Apr 30 03:27:33.977137 coreos-metadata[2109]: Apr 30 03:27:33.970 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Apr 30 03:27:33.977137 coreos-metadata[2109]: Apr 30 03:27:33.972 INFO Fetch successful Apr 30 03:27:33.976488 systemd[1]: Started polkit.service - Authorization Manager. Apr 30 03:27:33.973545 polkitd[2124]: Loading rules from directory /etc/polkit-1/rules.d Apr 30 03:27:33.973622 polkitd[2124]: Loading rules from directory /usr/share/polkit-1/rules.d Apr 30 03:27:33.975286 polkitd[2124]: Finished loading, compiling and executing 2 rules Apr 30 03:27:33.976281 dbus-daemon[1941]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Apr 30 03:27:33.981021 polkitd[2124]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Apr 30 03:27:33.981473 unknown[2109]: wrote ssh authorized keys file for user: core Apr 30 03:27:34.021136 sshd_keygen[1998]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 30 03:27:34.025292 systemd-hostnamed[1991]: Hostname set to (transient) Apr 30 03:27:34.025422 systemd-resolved[1892]: System hostname changed to 'ip-172-31-16-5'. 
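Note on the extend-filesystems sequence that completes above: this is an online ext4 grow of the root partition, with resize2fs 1.47.1 extending the mounted filesystem from 553472 to 1489915 4k blocks while the kernel logs the resize. A sketch of the manual equivalent; the partition-grow step via growpart (cloud-utils) is an assumption, since this log only shows the filesystem step:

    # Grow partition 9 to fill the disk (assumed prior step, not shown in this log)
    growpart /dev/nvme0n1 9
    # Grow the mounted ext4 in place; the kernel then logs "EXT4-fs ... resized filesystem"
    resize2fs /dev/nvme0n1p9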
Apr 30 03:27:34.032375 update-ssh-keys[2137]: Updated "/home/core/.ssh/authorized_keys" Apr 30 03:27:34.033977 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 30 03:27:34.037367 systemd[1]: Finished sshkeys.service. Apr 30 03:27:34.067535 systemd-networkd[1891]: eth0: Gained IPv6LL Apr 30 03:27:34.073141 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 30 03:27:34.074634 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 30 03:27:34.077522 systemd[1]: Reached target network-online.target - Network is Online. Apr 30 03:27:34.091044 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Apr 30 03:27:34.097047 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 30 03:27:34.100330 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:27:34.107047 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 30 03:27:34.153140 systemd[1]: issuegen.service: Deactivated successfully. Apr 30 03:27:34.153414 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 30 03:27:34.165805 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 30 03:27:34.209419 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 30 03:27:34.221577 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 30 03:27:34.236059 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 30 03:27:34.237634 systemd[1]: Reached target getty.target - Login Prompts. Apr 30 03:27:34.249509 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 30 03:27:34.260602 amazon-ssm-agent[2151]: Initializing new seelog logger Apr 30 03:27:34.260602 amazon-ssm-agent[2151]: New Seelog Logger Creation Complete Apr 30 03:27:34.260602 amazon-ssm-agent[2151]: 2025/04/30 03:27:34 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 03:27:34.260602 amazon-ssm-agent[2151]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 03:27:34.260602 amazon-ssm-agent[2151]: 2025/04/30 03:27:34 processing appconfig overrides Apr 30 03:27:34.260602 amazon-ssm-agent[2151]: 2025/04/30 03:27:34 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 03:27:34.260602 amazon-ssm-agent[2151]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 03:27:34.261183 amazon-ssm-agent[2151]: 2025/04/30 03:27:34 processing appconfig overrides Apr 30 03:27:34.261183 amazon-ssm-agent[2151]: 2025/04/30 03:27:34 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 03:27:34.261183 amazon-ssm-agent[2151]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 03:27:34.261183 amazon-ssm-agent[2151]: 2025/04/30 03:27:34 processing appconfig overrides Apr 30 03:27:34.265101 amazon-ssm-agent[2151]: 2025-04-30 03:27:34 INFO Proxy environment variables: Apr 30 03:27:34.271150 amazon-ssm-agent[2151]: 2025/04/30 03:27:34 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 03:27:34.271150 amazon-ssm-agent[2151]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Apr 30 03:27:34.271150 amazon-ssm-agent[2151]: 2025/04/30 03:27:34 processing appconfig overrides Apr 30 03:27:34.294840 containerd[1987]: time="2025-04-30T03:27:34.293288570Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 30 03:27:34.363925 amazon-ssm-agent[2151]: 2025-04-30 03:27:34 INFO https_proxy: Apr 30 03:27:34.365315 containerd[1987]: time="2025-04-30T03:27:34.365261704Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:27:34.369882 containerd[1987]: time="2025-04-30T03:27:34.369433780Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:27:34.369882 containerd[1987]: time="2025-04-30T03:27:34.369486368Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 30 03:27:34.369882 containerd[1987]: time="2025-04-30T03:27:34.369511651Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 30 03:27:34.369882 containerd[1987]: time="2025-04-30T03:27:34.369693107Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 30 03:27:34.369882 containerd[1987]: time="2025-04-30T03:27:34.369720264Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 30 03:27:34.369882 containerd[1987]: time="2025-04-30T03:27:34.369790994Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:27:34.369882 containerd[1987]: time="2025-04-30T03:27:34.369808910Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:27:34.372983 containerd[1987]: time="2025-04-30T03:27:34.372172279Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:27:34.372983 containerd[1987]: time="2025-04-30T03:27:34.372211235Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 30 03:27:34.372983 containerd[1987]: time="2025-04-30T03:27:34.372237454Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:27:34.372983 containerd[1987]: time="2025-04-30T03:27:34.372253571Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 30 03:27:34.372983 containerd[1987]: time="2025-04-30T03:27:34.372398230Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:27:34.372983 containerd[1987]: time="2025-04-30T03:27:34.372639822Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Apr 30 03:27:34.372983 containerd[1987]: time="2025-04-30T03:27:34.372814038Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:27:34.372983 containerd[1987]: time="2025-04-30T03:27:34.372836302Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 30 03:27:34.373656 containerd[1987]: time="2025-04-30T03:27:34.373352797Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 30 03:27:34.373656 containerd[1987]: time="2025-04-30T03:27:34.373436432Z" level=info msg="metadata content store policy set" policy=shared Apr 30 03:27:34.384283 containerd[1987]: time="2025-04-30T03:27:34.383726726Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 30 03:27:34.384283 containerd[1987]: time="2025-04-30T03:27:34.383815299Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 30 03:27:34.384283 containerd[1987]: time="2025-04-30T03:27:34.383909898Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 30 03:27:34.384283 containerd[1987]: time="2025-04-30T03:27:34.383945171Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 30 03:27:34.384283 containerd[1987]: time="2025-04-30T03:27:34.383976224Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 30 03:27:34.384283 containerd[1987]: time="2025-04-30T03:27:34.384173296Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 30 03:27:34.386954 containerd[1987]: time="2025-04-30T03:27:34.386317036Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 30 03:27:34.386954 containerd[1987]: time="2025-04-30T03:27:34.386494405Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 30 03:27:34.386954 containerd[1987]: time="2025-04-30T03:27:34.386520134Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 30 03:27:34.386954 containerd[1987]: time="2025-04-30T03:27:34.386540443Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 30 03:27:34.386954 containerd[1987]: time="2025-04-30T03:27:34.386560242Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 30 03:27:34.386954 containerd[1987]: time="2025-04-30T03:27:34.386580705Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 30 03:27:34.386954 containerd[1987]: time="2025-04-30T03:27:34.386610149Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 30 03:27:34.386954 containerd[1987]: time="2025-04-30T03:27:34.386634105Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Apr 30 03:27:34.386954 containerd[1987]: time="2025-04-30T03:27:34.386658237Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 30 03:27:34.386954 containerd[1987]: time="2025-04-30T03:27:34.386678724Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 30 03:27:34.386954 containerd[1987]: time="2025-04-30T03:27:34.386698040Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 30 03:27:34.386954 containerd[1987]: time="2025-04-30T03:27:34.386717168Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 30 03:27:34.386954 containerd[1987]: time="2025-04-30T03:27:34.386746600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 30 03:27:34.386954 containerd[1987]: time="2025-04-30T03:27:34.386766065Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 30 03:27:34.387512 containerd[1987]: time="2025-04-30T03:27:34.386788587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 30 03:27:34.387512 containerd[1987]: time="2025-04-30T03:27:34.386808224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 30 03:27:34.387512 containerd[1987]: time="2025-04-30T03:27:34.386826364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 30 03:27:34.387512 containerd[1987]: time="2025-04-30T03:27:34.386846318Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 30 03:27:34.387512 containerd[1987]: time="2025-04-30T03:27:34.386878434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 30 03:27:34.387512 containerd[1987]: time="2025-04-30T03:27:34.386898975Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 30 03:27:34.389344 containerd[1987]: time="2025-04-30T03:27:34.386920495Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 30 03:27:34.389344 containerd[1987]: time="2025-04-30T03:27:34.387773405Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 30 03:27:34.389344 containerd[1987]: time="2025-04-30T03:27:34.387805647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 30 03:27:34.389344 containerd[1987]: time="2025-04-30T03:27:34.387825721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 30 03:27:34.389344 containerd[1987]: time="2025-04-30T03:27:34.387850204Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 30 03:27:34.389344 containerd[1987]: time="2025-04-30T03:27:34.387873696Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 30 03:27:34.389344 containerd[1987]: time="2025-04-30T03:27:34.387906427Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Apr 30 03:27:34.389344 containerd[1987]: time="2025-04-30T03:27:34.387925993Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 30 03:27:34.389344 containerd[1987]: time="2025-04-30T03:27:34.387957357Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 30 03:27:34.389344 containerd[1987]: time="2025-04-30T03:27:34.388030266Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 30 03:27:34.389344 containerd[1987]: time="2025-04-30T03:27:34.388055936Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 30 03:27:34.389344 containerd[1987]: time="2025-04-30T03:27:34.388144771Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 30 03:27:34.389344 containerd[1987]: time="2025-04-30T03:27:34.388166831Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 30 03:27:34.389875 containerd[1987]: time="2025-04-30T03:27:34.388182864Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 30 03:27:34.389875 containerd[1987]: time="2025-04-30T03:27:34.388201401Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 30 03:27:34.389875 containerd[1987]: time="2025-04-30T03:27:34.388214295Z" level=info msg="NRI interface is disabled by configuration." Apr 30 03:27:34.389875 containerd[1987]: time="2025-04-30T03:27:34.388229362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 30 03:27:34.390046 containerd[1987]: time="2025-04-30T03:27:34.388649170Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 30 03:27:34.390046 containerd[1987]: time="2025-04-30T03:27:34.388736828Z" level=info msg="Connect containerd service" Apr 30 03:27:34.390046 containerd[1987]: time="2025-04-30T03:27:34.388779875Z" level=info msg="using legacy CRI server" Apr 30 03:27:34.390046 containerd[1987]: time="2025-04-30T03:27:34.388790043Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 30 03:27:34.390046 containerd[1987]: time="2025-04-30T03:27:34.388922903Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 30 03:27:34.394304 containerd[1987]: time="2025-04-30T03:27:34.392348031Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 03:27:34.394304 
containerd[1987]: time="2025-04-30T03:27:34.392678754Z" level=info msg="Start subscribing containerd event" Apr 30 03:27:34.394304 containerd[1987]: time="2025-04-30T03:27:34.392747219Z" level=info msg="Start recovering state" Apr 30 03:27:34.394304 containerd[1987]: time="2025-04-30T03:27:34.392822483Z" level=info msg="Start event monitor" Apr 30 03:27:34.394304 containerd[1987]: time="2025-04-30T03:27:34.392845041Z" level=info msg="Start snapshots syncer" Apr 30 03:27:34.394304 containerd[1987]: time="2025-04-30T03:27:34.392857930Z" level=info msg="Start cni network conf syncer for default" Apr 30 03:27:34.394304 containerd[1987]: time="2025-04-30T03:27:34.392868926Z" level=info msg="Start streaming server" Apr 30 03:27:34.394304 containerd[1987]: time="2025-04-30T03:27:34.394150822Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 30 03:27:34.394304 containerd[1987]: time="2025-04-30T03:27:34.394275533Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 30 03:27:34.395116 systemd[1]: Started containerd.service - containerd container runtime. Apr 30 03:27:34.398505 containerd[1987]: time="2025-04-30T03:27:34.396165082Z" level=info msg="containerd successfully booted in 0.103919s" Apr 30 03:27:34.462763 amazon-ssm-agent[2151]: 2025-04-30 03:27:34 INFO http_proxy: Apr 30 03:27:34.562725 amazon-ssm-agent[2151]: 2025-04-30 03:27:34 INFO no_proxy: Apr 30 03:27:34.661073 amazon-ssm-agent[2151]: 2025-04-30 03:27:34 INFO Checking if agent identity type OnPrem can be assumed Apr 30 03:27:34.695907 tar[1961]: linux-amd64/LICENSE Apr 30 03:27:34.699528 tar[1961]: linux-amd64/README.md Apr 30 03:27:34.711125 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 30 03:27:34.759468 amazon-ssm-agent[2151]: 2025-04-30 03:27:34 INFO Checking if agent identity type EC2 can be assumed Apr 30 03:27:34.858216 amazon-ssm-agent[2151]: 2025-04-30 03:27:34 INFO Agent will take identity from EC2 Apr 30 03:27:34.873709 amazon-ssm-agent[2151]: 2025-04-30 03:27:34 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 30 03:27:34.873877 amazon-ssm-agent[2151]: 2025-04-30 03:27:34 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 30 03:27:34.873921 amazon-ssm-agent[2151]: 2025-04-30 03:27:34 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 30 03:27:34.873984 amazon-ssm-agent[2151]: 2025-04-30 03:27:34 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Apr 30 03:27:34.874021 amazon-ssm-agent[2151]: 2025-04-30 03:27:34 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Apr 30 03:27:34.874077 amazon-ssm-agent[2151]: 2025-04-30 03:27:34 INFO [amazon-ssm-agent] Starting Core Agent Apr 30 03:27:34.874113 amazon-ssm-agent[2151]: 2025-04-30 03:27:34 INFO [amazon-ssm-agent] registrar detected. Attempting registration Apr 30 03:27:34.874152 amazon-ssm-agent[2151]: 2025-04-30 03:27:34 INFO [Registrar] Starting registrar module Apr 30 03:27:34.874186 amazon-ssm-agent[2151]: 2025-04-30 03:27:34 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Apr 30 03:27:34.874230 amazon-ssm-agent[2151]: 2025-04-30 03:27:34 INFO [EC2Identity] EC2 registration was successful. 
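Note on the CRI startup above: the "Start cri plugin with config {...}" dump is containerd echoing its effective configuration, and the load-bearing values are Snapshotter:overlayfs, the runc runtime with SystemdCgroup:true, SandboxImage registry.k8s.io/pause:3.8, and the CNI directories /opt/cni/bin and /etc/cni/net.d. The "failed to load cni during init" error is expected at this stage: no CNI conflist is installed yet, and the "cni network conf syncer" started right after will pick one up when it appears. A minimal config.toml sketch expressing the same values (hypothetical file; this host gets them from its shipped configuration):

    cat <<'EOF' >/etc/containerd/config.toml
    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"
      [plugins."io.containerd.grpc.v1.cri".containerd]
        snapshotter = "overlayfs"
        default_runtime_name = "runc"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
      [plugins."io.containerd.grpc.v1.cri".cni]
        bin_dir = "/opt/cni/bin"
        conf_dir = "/etc/cni/net.d"
    EOF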
Apr 30 03:27:34.874287 amazon-ssm-agent[2151]: 2025-04-30 03:27:34 INFO [CredentialRefresher] credentialRefresher has started Apr 30 03:27:34.874287 amazon-ssm-agent[2151]: 2025-04-30 03:27:34 INFO [CredentialRefresher] Starting credentials refresher loop Apr 30 03:27:34.874287 amazon-ssm-agent[2151]: 2025-04-30 03:27:34 INFO EC2RoleProvider Successfully connected with instance profile role credentials Apr 30 03:27:34.957087 amazon-ssm-agent[2151]: 2025-04-30 03:27:34 INFO [CredentialRefresher] Next credential rotation will be in 30.10831939565 minutes Apr 30 03:27:35.293207 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 30 03:27:35.301322 systemd[1]: Started sshd@0-172.31.16.5:22-147.75.109.163:38300.service - OpenSSH per-connection server daemon (147.75.109.163:38300). Apr 30 03:27:35.551904 sshd[2185]: Accepted publickey for core from 147.75.109.163 port 38300 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:27:35.554711 sshd[2185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:27:35.563761 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 30 03:27:35.571414 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 30 03:27:35.574511 systemd-logind[1956]: New session 1 of user core. Apr 30 03:27:35.589774 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 30 03:27:35.597636 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 30 03:27:35.604435 (systemd)[2189]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 30 03:27:35.705230 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:27:35.707749 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 30 03:27:35.709689 (kubelet)[2200]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:27:35.758088 systemd[2189]: Queued start job for default target default.target. Apr 30 03:27:35.764833 systemd[2189]: Created slice app.slice - User Application Slice. Apr 30 03:27:35.764875 systemd[2189]: Reached target paths.target - Paths. Apr 30 03:27:35.764897 systemd[2189]: Reached target timers.target - Timers. Apr 30 03:27:35.767540 systemd[2189]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 30 03:27:35.781216 systemd[2189]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 30 03:27:35.781366 systemd[2189]: Reached target sockets.target - Sockets. Apr 30 03:27:35.781387 systemd[2189]: Reached target basic.target - Basic System. Apr 30 03:27:35.781436 systemd[2189]: Reached target default.target - Main User Target. Apr 30 03:27:35.781473 systemd[2189]: Startup finished in 168ms. Apr 30 03:27:35.781834 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 30 03:27:35.787290 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 30 03:27:35.789470 systemd[1]: Startup finished in 573ms (kernel) + 7.362s (initrd) + 6.772s (userspace) = 14.709s. 
Apr 30 03:27:35.887054 amazon-ssm-agent[2151]: 2025-04-30 03:27:35 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Apr 30 03:27:35.989079 amazon-ssm-agent[2151]: 2025-04-30 03:27:35 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2209) started Apr 30 03:27:36.014313 systemd[1]: Started sshd@1-172.31.16.5:22-147.75.109.163:38308.service - OpenSSH per-connection server daemon (147.75.109.163:38308). Apr 30 03:27:36.089618 amazon-ssm-agent[2151]: 2025-04-30 03:27:35 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Apr 30 03:27:36.212558 ntpd[1945]: Listen normally on 6 eth0 [fe80::400:ecff:fea0:ce95%2]:123 Apr 30 03:27:36.280419 sshd[2223]: Accepted publickey for core from 147.75.109.163 port 38308 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:27:36.282246 sshd[2223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:27:36.288239 systemd-logind[1956]: New session 2 of user core. Apr 30 03:27:36.294192 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 30 03:27:36.476377 sshd[2223]: pam_unix(sshd:session): session closed for user core Apr 30 03:27:36.480025 systemd[1]: sshd@1-172.31.16.5:22-147.75.109.163:38308.service: Deactivated successfully. Apr 30 03:27:36.481866 systemd[1]: session-2.scope: Deactivated successfully. Apr 30 03:27:36.482766 systemd-logind[1956]: Session 2 logged out. Waiting for processes to exit. Apr 30 03:27:36.483952 systemd-logind[1956]: Removed session 2. Apr 30 03:27:36.520692 systemd[1]: Started sshd@2-172.31.16.5:22-147.75.109.163:51834.service - OpenSSH per-connection server daemon (147.75.109.163:51834). Apr 30 03:27:36.766203 sshd[2234]: Accepted publickey for core from 147.75.109.163 port 51834 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:27:36.768084 sshd[2234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:27:36.773200 systemd-logind[1956]: New session 3 of user core. Apr 30 03:27:36.780522 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 30 03:27:36.791653 kubelet[2200]: E0430 03:27:36.791609 2200 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:27:36.794210 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:27:36.794417 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:27:36.794769 systemd[1]: kubelet.service: Consumed 1.081s CPU time. Apr 30 03:27:36.953033 sshd[2234]: pam_unix(sshd:session): session closed for user core Apr 30 03:27:36.956120 systemd[1]: sshd@2-172.31.16.5:22-147.75.109.163:51834.service: Deactivated successfully. Apr 30 03:27:36.958190 systemd[1]: session-3.scope: Deactivated successfully. Apr 30 03:27:36.959927 systemd-logind[1956]: Session 3 logged out. Waiting for processes to exit. Apr 30 03:27:36.961210 systemd-logind[1956]: Removed session 3.
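Note on the kubelet exit above (repeated at 03:27:48 and 03:27:58 below): kubelet.service is enabled but /var/lib/kubelet/config.yaml does not exist yet, so each start fails with status=1 and systemd schedules a restart. That file is normally written when the node is bootstrapped, for example by kubeadm during init or join, so the restart loop is expected until then. A hedged sketch with placeholder endpoint, token, and hash; nothing in this log shows the actual bootstrap command:

    # Joining writes /var/lib/kubelet/config.yaml, after which the restarts converge
    kubeadm join <control-plane-endpoint>:6443 \
      --token <token> \
      --discovery-token-ca-cert-hash sha256:<hash>
    systemctl status kubelet --no-pager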
Apr 30 03:27:36.999711 systemd[1]: Started sshd@3-172.31.16.5:22-147.75.109.163:51844.service - OpenSSH per-connection server daemon (147.75.109.163:51844). Apr 30 03:27:37.244982 sshd[2243]: Accepted publickey for core from 147.75.109.163 port 51844 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:27:37.246494 sshd[2243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:27:37.251043 systemd-logind[1956]: New session 4 of user core. Apr 30 03:27:37.258177 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 30 03:27:37.437540 sshd[2243]: pam_unix(sshd:session): session closed for user core Apr 30 03:27:37.440221 systemd[1]: sshd@3-172.31.16.5:22-147.75.109.163:51844.service: Deactivated successfully. Apr 30 03:27:37.441855 systemd[1]: session-4.scope: Deactivated successfully. Apr 30 03:27:37.443203 systemd-logind[1956]: Session 4 logged out. Waiting for processes to exit. Apr 30 03:27:37.444389 systemd-logind[1956]: Removed session 4. Apr 30 03:27:37.488009 systemd[1]: Started sshd@4-172.31.16.5:22-147.75.109.163:51858.service - OpenSSH per-connection server daemon (147.75.109.163:51858). Apr 30 03:27:37.736015 sshd[2250]: Accepted publickey for core from 147.75.109.163 port 51858 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:27:37.737394 sshd[2250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:27:37.742677 systemd-logind[1956]: New session 5 of user core. Apr 30 03:27:37.749194 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 30 03:27:37.908221 sudo[2253]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 30 03:27:37.908534 sudo[2253]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:27:37.925457 sudo[2253]: pam_unix(sudo:session): session closed for user root Apr 30 03:27:37.963716 sshd[2250]: pam_unix(sshd:session): session closed for user core Apr 30 03:27:37.967720 systemd[1]: sshd@4-172.31.16.5:22-147.75.109.163:51858.service: Deactivated successfully. Apr 30 03:27:37.969755 systemd[1]: session-5.scope: Deactivated successfully. Apr 30 03:27:37.971494 systemd-logind[1956]: Session 5 logged out. Waiting for processes to exit. Apr 30 03:27:37.972982 systemd-logind[1956]: Removed session 5. Apr 30 03:27:38.009022 systemd[1]: Started sshd@5-172.31.16.5:22-147.75.109.163:51868.service - OpenSSH per-connection server daemon (147.75.109.163:51868). Apr 30 03:27:38.268193 sshd[2258]: Accepted publickey for core from 147.75.109.163 port 51868 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:27:38.269511 sshd[2258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:27:38.274072 systemd-logind[1956]: New session 6 of user core. Apr 30 03:27:38.282199 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 30 03:27:38.423978 sudo[2262]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 30 03:27:38.424371 sudo[2262]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:27:38.428305 sudo[2262]: pam_unix(sudo:session): session closed for user root Apr 30 03:27:38.433805 sudo[2261]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 30 03:27:38.434208 sudo[2261]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:27:38.453284 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 30 03:27:38.454742 auditctl[2265]: No rules Apr 30 03:27:38.455197 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 03:27:38.455385 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 30 03:27:38.458018 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 30 03:27:38.488502 augenrules[2283]: No rules Apr 30 03:27:38.489982 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 30 03:27:38.491297 sudo[2261]: pam_unix(sudo:session): session closed for user root Apr 30 03:27:38.529087 sshd[2258]: pam_unix(sshd:session): session closed for user core Apr 30 03:27:38.533027 systemd[1]: sshd@5-172.31.16.5:22-147.75.109.163:51868.service: Deactivated successfully. Apr 30 03:27:38.534710 systemd[1]: session-6.scope: Deactivated successfully. Apr 30 03:27:38.535493 systemd-logind[1956]: Session 6 logged out. Waiting for processes to exit. Apr 30 03:27:38.536288 systemd-logind[1956]: Removed session 6. Apr 30 03:27:38.573827 systemd[1]: Started sshd@6-172.31.16.5:22-147.75.109.163:51882.service - OpenSSH per-connection server daemon (147.75.109.163:51882). Apr 30 03:27:38.833341 sshd[2291]: Accepted publickey for core from 147.75.109.163 port 51882 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:27:38.834919 sshd[2291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:27:38.840028 systemd-logind[1956]: New session 7 of user core. Apr 30 03:27:38.849162 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 30 03:27:38.988490 sudo[2294]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 30 03:27:38.988778 sudo[2294]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:27:39.379649 (dockerd)[2311]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 30 03:27:39.380314 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 30 03:27:39.794719 dockerd[2311]: time="2025-04-30T03:27:39.794413400Z" level=info msg="Starting up" Apr 30 03:27:40.087602 dockerd[2311]: time="2025-04-30T03:27:40.087265082Z" level=info msg="Loading containers: start." Apr 30 03:27:40.806951 systemd-resolved[1892]: Clock change detected. Flushing caches. Apr 30 03:27:40.812042 kernel: Initializing XFRM netlink socket Apr 30 03:27:40.841734 (udev-worker)[2333]: Network interface NamePolicy= disabled on kernel command line. Apr 30 03:27:40.898501 systemd-networkd[1891]: docker0: Link UP Apr 30 03:27:40.923690 dockerd[2311]: time="2025-04-30T03:27:40.923641285Z" level=info msg="Loading containers: done." 
Apr 30 03:27:40.945521 dockerd[2311]: time="2025-04-30T03:27:40.945207460Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 30 03:27:40.945521 dockerd[2311]: time="2025-04-30T03:27:40.945321216Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 30 03:27:40.945521 dockerd[2311]: time="2025-04-30T03:27:40.945427359Z" level=info msg="Daemon has completed initialization" Apr 30 03:27:40.995425 dockerd[2311]: time="2025-04-30T03:27:40.995289846Z" level=info msg="API listen on /run/docker.sock" Apr 30 03:27:40.995660 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 30 03:27:42.226118 containerd[1987]: time="2025-04-30T03:27:42.225965925Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" Apr 30 03:27:42.870339 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3208877203.mount: Deactivated successfully. Apr 30 03:27:45.236394 containerd[1987]: time="2025-04-30T03:27:45.236328469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:27:45.237974 containerd[1987]: time="2025-04-30T03:27:45.237907950Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674873" Apr 30 03:27:45.241468 containerd[1987]: time="2025-04-30T03:27:45.241423621Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:27:45.245391 containerd[1987]: time="2025-04-30T03:27:45.245321924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:27:45.247089 containerd[1987]: time="2025-04-30T03:27:45.246466798Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 3.020358045s" Apr 30 03:27:45.247089 containerd[1987]: time="2025-04-30T03:27:45.246520302Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" Apr 30 03:27:45.271761 containerd[1987]: time="2025-04-30T03:27:45.271716860Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" Apr 30 03:27:47.638993 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 30 03:27:47.645322 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:27:48.004232 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 30 03:27:48.019531 (kubelet)[2526]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:27:48.102635 kubelet[2526]: E0430 03:27:48.102117 2526 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:27:48.107411 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:27:48.107774 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:27:48.223507 containerd[1987]: time="2025-04-30T03:27:48.223458055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:27:48.226965 containerd[1987]: time="2025-04-30T03:27:48.226912232Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617534" Apr 30 03:27:48.230298 containerd[1987]: time="2025-04-30T03:27:48.229371538Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:27:48.232568 containerd[1987]: time="2025-04-30T03:27:48.232536426Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:27:48.234501 containerd[1987]: time="2025-04-30T03:27:48.233621849Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 2.961869589s" Apr 30 03:27:48.234501 containerd[1987]: time="2025-04-30T03:27:48.233656097Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\"" Apr 30 03:27:48.259629 containerd[1987]: time="2025-04-30T03:27:48.259517511Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" Apr 30 03:27:50.032690 containerd[1987]: time="2025-04-30T03:27:50.032629530Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:27:50.034065 containerd[1987]: time="2025-04-30T03:27:50.034005059Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903682" Apr 30 03:27:50.036535 containerd[1987]: time="2025-04-30T03:27:50.036478998Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:27:50.041599 containerd[1987]: time="2025-04-30T03:27:50.041521006Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Apr 30 03:27:50.042742 containerd[1987]: time="2025-04-30T03:27:50.042388743Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 1.782615743s" Apr 30 03:27:50.042742 containerd[1987]: time="2025-04-30T03:27:50.042427546Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\"" Apr 30 03:27:50.066858 containerd[1987]: time="2025-04-30T03:27:50.066819675Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" Apr 30 03:27:51.193797 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1277728387.mount: Deactivated successfully. Apr 30 03:27:51.712332 containerd[1987]: time="2025-04-30T03:27:51.712275472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:27:51.714498 containerd[1987]: time="2025-04-30T03:27:51.714441724Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185817" Apr 30 03:27:51.716487 containerd[1987]: time="2025-04-30T03:27:51.716425511Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:27:51.719392 containerd[1987]: time="2025-04-30T03:27:51.719330969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:27:51.720051 containerd[1987]: time="2025-04-30T03:27:51.719917405Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 1.65305305s" Apr 30 03:27:51.720051 containerd[1987]: time="2025-04-30T03:27:51.719951269Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" Apr 30 03:27:51.741922 containerd[1987]: time="2025-04-30T03:27:51.741884605Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Apr 30 03:27:52.319869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1526230050.mount: Deactivated successfully. 
Apr 30 03:27:53.387563 containerd[1987]: time="2025-04-30T03:27:53.387489979Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:27:53.388566 containerd[1987]: time="2025-04-30T03:27:53.388517877Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Apr 30 03:27:53.389868 containerd[1987]: time="2025-04-30T03:27:53.389822782Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:27:53.398589 containerd[1987]: time="2025-04-30T03:27:53.398533094Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.656604115s" Apr 30 03:27:53.398914 containerd[1987]: time="2025-04-30T03:27:53.398769647Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Apr 30 03:27:53.400785 containerd[1987]: time="2025-04-30T03:27:53.399768910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:27:53.424619 containerd[1987]: time="2025-04-30T03:27:53.424582340Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Apr 30 03:27:53.899292 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount297084507.mount: Deactivated successfully. 
Apr 30 03:27:53.906928 containerd[1987]: time="2025-04-30T03:27:53.906873622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:27:53.908193 containerd[1987]: time="2025-04-30T03:27:53.907996909Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Apr 30 03:27:53.911149 containerd[1987]: time="2025-04-30T03:27:53.909935239Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:27:53.913326 containerd[1987]: time="2025-04-30T03:27:53.912325383Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:27:53.913326 containerd[1987]: time="2025-04-30T03:27:53.913197701Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 488.579051ms" Apr 30 03:27:53.913326 containerd[1987]: time="2025-04-30T03:27:53.913225715Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Apr 30 03:27:53.937968 containerd[1987]: time="2025-04-30T03:27:53.937696313Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Apr 30 03:27:54.502776 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3733244447.mount: Deactivated successfully. Apr 30 03:27:57.654389 containerd[1987]: time="2025-04-30T03:27:57.654330837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:27:57.668286 containerd[1987]: time="2025-04-30T03:27:57.668210551Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Apr 30 03:27:57.688497 containerd[1987]: time="2025-04-30T03:27:57.688399223Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:27:57.706856 containerd[1987]: time="2025-04-30T03:27:57.706801458Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:27:57.708587 containerd[1987]: time="2025-04-30T03:27:57.708537911Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.770750122s" Apr 30 03:27:57.708587 containerd[1987]: time="2025-04-30T03:27:57.708577981Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Apr 30 03:27:58.261227 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
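The "Pulled image ... size \"...\" in ..." entries above record both the byte count and the wall-clock pull time, so per-image registry throughput can be computed directly from the journal. A minimal Python sketch; the regex and helper are mine, not part of containerd or Flatcar tooling:

    import re

    # Matches containerd's completion message as it appears in this journal, e.g.:
    #   Pulled image \"registry.k8s.io/etcd:3.5.12-0\" ... size \"57236178\" in 3.770750122s
    PULLED = re.compile(
        r'Pulled image \\?"(?P<ref>[^"\\]+)\\?".*'
        r'size \\?"(?P<size>\d+)\\?" in (?P<dur>[\d.]+)(?P<unit>ms|s)'
    )

    def pull_throughput(line: str):
        """Return (image ref, MiB/s) for a 'Pulled image' journal line, else None."""
        m = PULLED.search(line)
        if m is None:
            return None
        seconds = float(m.group("dur")) / (1000.0 if m.group("unit") == "ms" else 1.0)
        return m.group("ref"), int(m.group("size")) / seconds / (1024 * 1024)

    # The etcd pull above: 57236178 bytes in 3.770750122s, roughly 14.5 MiB/s.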
Apr 30 03:27:58.267407 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:27:58.553296 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:27:58.560672 (kubelet)[2707]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:27:58.638072 kubelet[2707]: E0430 03:27:58.638009 2707 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:27:58.642992 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:27:58.643877 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:28:01.142309 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:28:01.148403 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:28:01.173095 systemd[1]: Reloading requested from client PID 2745 ('systemctl') (unit session-7.scope)... Apr 30 03:28:01.173116 systemd[1]: Reloading... Apr 30 03:28:01.284900 zram_generator::config[2785]: No configuration found. Apr 30 03:28:01.450032 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:28:01.589828 systemd[1]: Reloading finished in 416 ms. Apr 30 03:28:01.703139 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 30 03:28:01.703361 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 30 03:28:01.707106 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:28:01.728509 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:28:02.211961 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:28:02.244688 (kubelet)[2849]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 03:28:02.382859 kubelet[2849]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:28:02.383540 kubelet[2849]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 03:28:02.383540 kubelet[2849]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
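The kubelet exit above is the normal pre-init state: /var/lib/kubelet/config.yaml does not exist until kubeadm writes it, so the unit fails and systemd schedules a restart (counter at 2 here). A sketch of a gate that waits for the file to appear; the path comes from the error message, the helper itself is hypothetical:

    import os
    import time

    KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"  # path from the kubelet error above

    def wait_for_kubelet_config(timeout_s: float = 300.0, poll_s: float = 2.0) -> bool:
        """Poll until kubeadm has written the kubelet config file, or time out."""
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            if os.path.exists(KUBELET_CONFIG):
                return True
            time.sleep(poll_s)
        return False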
Apr 30 03:28:02.383652 kubelet[2849]: I0430 03:28:02.383614 2849 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 03:28:02.812613 kubelet[2849]: I0430 03:28:02.812568 2849 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 03:28:02.812613 kubelet[2849]: I0430 03:28:02.812600 2849 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 03:28:02.813065 kubelet[2849]: I0430 03:28:02.813011 2849 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 03:28:02.851378 kubelet[2849]: I0430 03:28:02.851305 2849 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 03:28:02.869133 kubelet[2849]: E0430 03:28:02.868767 2849 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.16.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.16.5:6443: connect: connection refused Apr 30 03:28:02.882352 kubelet[2849]: I0430 03:28:02.882322 2849 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 30 03:28:02.882607 kubelet[2849]: I0430 03:28:02.882567 2849 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 03:28:02.885963 kubelet[2849]: I0430 03:28:02.882604 2849 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 03:28:02.886187 kubelet[2849]: I0430 03:28:02.885980 2849 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 03:28:02.886187 kubelet[2849]: I0430 03:28:02.886001 2849 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 03:28:02.886187 kubelet[2849]: I0430 03:28:02.886180 2849 state_mem.go:36] "Initialized new in-memory state store" 
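The nodeConfig dump above includes the default hard-eviction thresholds: memory.available < 100Mi as an absolute quantity, and the filesystem signals as percentages of capacity (nodefs.available 10%, nodefs.inodesFree 5%, imagefs.available 15%, imagefs.inodesFree 5%). A small sketch of how such mixed quantity/percentage thresholds evaluate; the dictionary shape and function are mine, the numbers are from the log:

    # Hard-eviction thresholds from the nodeConfig above: a quantity is an
    # absolute floor, a percentage is a fraction of the resource's capacity.
    THRESHOLDS = {
        "memory.available":   {"quantity": 100 * 1024 * 1024},  # 100Mi
        "nodefs.available":   {"percentage": 0.10},
        "nodefs.inodesFree":  {"percentage": 0.05},
        "imagefs.available":  {"percentage": 0.15},
        "imagefs.inodesFree": {"percentage": 0.05},
    }

    def eviction_triggered(signal: str, available: float, capacity: float) -> bool:
        """True when the observed value falls below the signal's hard threshold."""
        t = THRESHOLDS[signal]
        floor = t["quantity"] if "quantity" in t else t["percentage"] * capacity
        return available < floor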
Apr 30 03:28:02.887646 kubelet[2849]: I0430 03:28:02.887615 2849 kubelet.go:400] "Attempting to sync node with API server" Apr 30 03:28:02.887646 kubelet[2849]: I0430 03:28:02.887647 2849 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 03:28:02.887799 kubelet[2849]: I0430 03:28:02.887681 2849 kubelet.go:312] "Adding apiserver pod source" Apr 30 03:28:02.887799 kubelet[2849]: I0430 03:28:02.887713 2849 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 03:28:02.893635 kubelet[2849]: W0430 03:28:02.893288 2849 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.16.5:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.16.5:6443: connect: connection refused Apr 30 03:28:02.893635 kubelet[2849]: E0430 03:28:02.893381 2849 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.16.5:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.16.5:6443: connect: connection refused Apr 30 03:28:02.893635 kubelet[2849]: W0430 03:28:02.893459 2849 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.16.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-5&limit=500&resourceVersion=0": dial tcp 172.31.16.5:6443: connect: connection refused Apr 30 03:28:02.893635 kubelet[2849]: E0430 03:28:02.893500 2849 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.16.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-5&limit=500&resourceVersion=0": dial tcp 172.31.16.5:6443: connect: connection refused Apr 30 03:28:02.893995 kubelet[2849]: I0430 03:28:02.893966 2849 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 03:28:02.896315 kubelet[2849]: I0430 03:28:02.896278 2849 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 03:28:02.896442 kubelet[2849]: W0430 03:28:02.896362 2849 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Apr 30 03:28:02.897900 kubelet[2849]: I0430 03:28:02.897514 2849 server.go:1264] "Started kubelet" Apr 30 03:28:02.899942 kubelet[2849]: I0430 03:28:02.899273 2849 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 03:28:02.905845 kubelet[2849]: I0430 03:28:02.905427 2849 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 03:28:02.913039 kubelet[2849]: I0430 03:28:02.912462 2849 server.go:455] "Adding debug handlers to kubelet server" Apr 30 03:28:02.913326 kubelet[2849]: I0430 03:28:02.913304 2849 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 03:28:02.913882 kubelet[2849]: I0430 03:28:02.913819 2849 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 03:28:02.914089 kubelet[2849]: I0430 03:28:02.914070 2849 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 03:28:02.921462 kubelet[2849]: I0430 03:28:02.921158 2849 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 03:28:02.921462 kubelet[2849]: I0430 03:28:02.921280 2849 reconciler.go:26] "Reconciler: start to sync state" Apr 30 03:28:02.924699 kubelet[2849]: E0430 03:28:02.924652 2849 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-5?timeout=10s\": dial tcp 172.31.16.5:6443: connect: connection refused" interval="200ms" Apr 30 03:28:02.925990 kubelet[2849]: I0430 03:28:02.925955 2849 factory.go:221] Registration of the systemd container factory successfully Apr 30 03:28:02.926189 kubelet[2849]: I0430 03:28:02.926075 2849 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 03:28:02.930740 kubelet[2849]: E0430 03:28:02.929581 2849 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.16.5:6443/api/v1/namespaces/default/events\": dial tcp 172.31.16.5:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-16-5.183afae8b2bb8ba4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-5,UID:ip-172-31-16-5,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-5,},FirstTimestamp:2025-04-30 03:28:02.897480612 +0000 UTC m=+0.638759421,LastTimestamp:2025-04-30 03:28:02.897480612 +0000 UTC m=+0.638759421,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-5,}" Apr 30 03:28:02.932589 kubelet[2849]: W0430 03:28:02.931616 2849 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.16.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.5:6443: connect: connection refused Apr 30 03:28:02.932589 kubelet[2849]: E0430 03:28:02.931693 2849 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.16.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.5:6443: connect: connection refused Apr 30 03:28:02.934249 kubelet[2849]: I0430 03:28:02.933128 2849 
factory.go:221] Registration of the containerd container factory successfully Apr 30 03:28:02.940138 kubelet[2849]: I0430 03:28:02.939178 2849 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 03:28:02.950923 kubelet[2849]: E0430 03:28:02.950884 2849 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 03:28:02.951075 kubelet[2849]: I0430 03:28:02.950986 2849 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 30 03:28:02.951075 kubelet[2849]: I0430 03:28:02.951025 2849 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 03:28:02.951075 kubelet[2849]: I0430 03:28:02.951064 2849 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 03:28:02.951217 kubelet[2849]: E0430 03:28:02.951112 2849 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 03:28:02.955586 kubelet[2849]: W0430 03:28:02.955522 2849 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.16.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.5:6443: connect: connection refused Apr 30 03:28:02.955586 kubelet[2849]: E0430 03:28:02.955583 2849 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.16.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.5:6443: connect: connection refused Apr 30 03:28:02.980137 kubelet[2849]: I0430 03:28:02.980107 2849 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 03:28:02.980137 kubelet[2849]: I0430 03:28:02.980126 2849 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 03:28:02.980137 kubelet[2849]: I0430 03:28:02.980148 2849 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:28:02.983285 kubelet[2849]: I0430 03:28:02.983257 2849 policy_none.go:49] "None policy: Start" Apr 30 03:28:02.984248 kubelet[2849]: I0430 03:28:02.984197 2849 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 03:28:02.984248 kubelet[2849]: I0430 03:28:02.984231 2849 state_mem.go:35] "Initializing new in-memory state store" Apr 30 03:28:02.991033 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 30 03:28:03.006354 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 30 03:28:03.010264 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 30 03:28:03.014975 kubelet[2849]: I0430 03:28:03.014945 2849 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-5" Apr 30 03:28:03.015288 kubelet[2849]: E0430 03:28:03.015264 2849 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.16.5:6443/api/v1/nodes\": dial tcp 172.31.16.5:6443: connect: connection refused" node="ip-172-31-16-5" Apr 30 03:28:03.018007 kubelet[2849]: I0430 03:28:03.017974 2849 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 03:28:03.018237 kubelet[2849]: I0430 03:28:03.018197 2849 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 03:28:03.018336 kubelet[2849]: I0430 03:28:03.018314 2849 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 03:28:03.020094 kubelet[2849]: E0430 03:28:03.020070 2849 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-16-5\" not found" Apr 30 03:28:03.052299 kubelet[2849]: I0430 03:28:03.052225 2849 topology_manager.go:215] "Topology Admit Handler" podUID="d16d52574f6b62d5a9923a433c8dab83" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-16-5" Apr 30 03:28:03.054267 kubelet[2849]: I0430 03:28:03.054232 2849 topology_manager.go:215] "Topology Admit Handler" podUID="2f17f24a7944cdbb59437140e26e9d93" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-16-5" Apr 30 03:28:03.056343 kubelet[2849]: I0430 03:28:03.056089 2849 topology_manager.go:215] "Topology Admit Handler" podUID="4a1f527a2567eb8014f0f79b193697b5" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-16-5" Apr 30 03:28:03.063931 systemd[1]: Created slice kubepods-burstable-podd16d52574f6b62d5a9923a433c8dab83.slice - libcontainer container kubepods-burstable-podd16d52574f6b62d5a9923a433c8dab83.slice. Apr 30 03:28:03.083555 systemd[1]: Created slice kubepods-burstable-pod2f17f24a7944cdbb59437140e26e9d93.slice - libcontainer container kubepods-burstable-pod2f17f24a7944cdbb59437140e26e9d93.slice. Apr 30 03:28:03.089220 systemd[1]: Created slice kubepods-burstable-pod4a1f527a2567eb8014f0f79b193697b5.slice - libcontainer container kubepods-burstable-pod4a1f527a2567eb8014f0f79b193697b5.slice. 
Apr 30 03:28:03.125771 kubelet[2849]: E0430 03:28:03.125699 2849 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-5?timeout=10s\": dial tcp 172.31.16.5:6443: connect: connection refused" interval="400ms" Apr 30 03:28:03.217749 kubelet[2849]: I0430 03:28:03.217649 2849 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-5" Apr 30 03:28:03.218163 kubelet[2849]: E0430 03:28:03.218128 2849 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.16.5:6443/api/v1/nodes\": dial tcp 172.31.16.5:6443: connect: connection refused" node="ip-172-31-16-5" Apr 30 03:28:03.221629 kubelet[2849]: I0430 03:28:03.221579 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4a1f527a2567eb8014f0f79b193697b5-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-5\" (UID: \"4a1f527a2567eb8014f0f79b193697b5\") " pod="kube-system/kube-apiserver-ip-172-31-16-5" Apr 30 03:28:03.221629 kubelet[2849]: I0430 03:28:03.221634 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d16d52574f6b62d5a9923a433c8dab83-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-5\" (UID: \"d16d52574f6b62d5a9923a433c8dab83\") " pod="kube-system/kube-controller-manager-ip-172-31-16-5" Apr 30 03:28:03.221947 kubelet[2849]: I0430 03:28:03.221664 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d16d52574f6b62d5a9923a433c8dab83-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-5\" (UID: \"d16d52574f6b62d5a9923a433c8dab83\") " pod="kube-system/kube-controller-manager-ip-172-31-16-5" Apr 30 03:28:03.221947 kubelet[2849]: I0430 03:28:03.221690 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d16d52574f6b62d5a9923a433c8dab83-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-5\" (UID: \"d16d52574f6b62d5a9923a433c8dab83\") " pod="kube-system/kube-controller-manager-ip-172-31-16-5" Apr 30 03:28:03.221947 kubelet[2849]: I0430 03:28:03.221716 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d16d52574f6b62d5a9923a433c8dab83-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-5\" (UID: \"d16d52574f6b62d5a9923a433c8dab83\") " pod="kube-system/kube-controller-manager-ip-172-31-16-5" Apr 30 03:28:03.221947 kubelet[2849]: I0430 03:28:03.221739 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2f17f24a7944cdbb59437140e26e9d93-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-5\" (UID: \"2f17f24a7944cdbb59437140e26e9d93\") " pod="kube-system/kube-scheduler-ip-172-31-16-5" Apr 30 03:28:03.221947 kubelet[2849]: I0430 03:28:03.221759 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d16d52574f6b62d5a9923a433c8dab83-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-5\" (UID: \"d16d52574f6b62d5a9923a433c8dab83\") " 
pod="kube-system/kube-controller-manager-ip-172-31-16-5" Apr 30 03:28:03.222100 kubelet[2849]: I0430 03:28:03.221785 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4a1f527a2567eb8014f0f79b193697b5-ca-certs\") pod \"kube-apiserver-ip-172-31-16-5\" (UID: \"4a1f527a2567eb8014f0f79b193697b5\") " pod="kube-system/kube-apiserver-ip-172-31-16-5" Apr 30 03:28:03.222100 kubelet[2849]: I0430 03:28:03.221808 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4a1f527a2567eb8014f0f79b193697b5-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-5\" (UID: \"4a1f527a2567eb8014f0f79b193697b5\") " pod="kube-system/kube-apiserver-ip-172-31-16-5" Apr 30 03:28:03.381628 containerd[1987]: time="2025-04-30T03:28:03.381501793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-5,Uid:d16d52574f6b62d5a9923a433c8dab83,Namespace:kube-system,Attempt:0,}" Apr 30 03:28:03.398634 containerd[1987]: time="2025-04-30T03:28:03.398586098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-5,Uid:2f17f24a7944cdbb59437140e26e9d93,Namespace:kube-system,Attempt:0,}" Apr 30 03:28:03.399068 containerd[1987]: time="2025-04-30T03:28:03.398586395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-5,Uid:4a1f527a2567eb8014f0f79b193697b5,Namespace:kube-system,Attempt:0,}" Apr 30 03:28:03.526513 kubelet[2849]: E0430 03:28:03.526473 2849 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-5?timeout=10s\": dial tcp 172.31.16.5:6443: connect: connection refused" interval="800ms" Apr 30 03:28:03.620294 kubelet[2849]: I0430 03:28:03.620260 2849 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-5" Apr 30 03:28:03.620646 kubelet[2849]: E0430 03:28:03.620613 2849 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.16.5:6443/api/v1/nodes\": dial tcp 172.31.16.5:6443: connect: connection refused" node="ip-172-31-16-5" Apr 30 03:28:03.871491 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4080020211.mount: Deactivated successfully. 
Apr 30 03:28:03.883621 containerd[1987]: time="2025-04-30T03:28:03.883563228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:28:03.885058 containerd[1987]: time="2025-04-30T03:28:03.884994345Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:28:03.886568 containerd[1987]: time="2025-04-30T03:28:03.886168208Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Apr 30 03:28:03.887693 containerd[1987]: time="2025-04-30T03:28:03.887606783Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 03:28:03.890168 containerd[1987]: time="2025-04-30T03:28:03.890125568Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:28:03.891034 containerd[1987]: time="2025-04-30T03:28:03.890974936Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 03:28:03.892450 containerd[1987]: time="2025-04-30T03:28:03.892400770Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:28:03.895695 containerd[1987]: time="2025-04-30T03:28:03.895661427Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:28:03.897198 containerd[1987]: time="2025-04-30T03:28:03.896504294Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 514.913587ms" Apr 30 03:28:03.899068 containerd[1987]: time="2025-04-30T03:28:03.898925530Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 499.972347ms" Apr 30 03:28:03.902543 containerd[1987]: time="2025-04-30T03:28:03.902356717Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 503.509899ms" Apr 30 03:28:03.939506 kubelet[2849]: W0430 03:28:03.939450 2849 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.16.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.5:6443: connect: connection refused Apr 30 03:28:03.939506 
kubelet[2849]: E0430 03:28:03.939511 2849 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.16.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.5:6443: connect: connection refused Apr 30 03:28:04.096084 containerd[1987]: time="2025-04-30T03:28:04.095872417Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:28:04.096084 containerd[1987]: time="2025-04-30T03:28:04.095935821Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:28:04.098129 containerd[1987]: time="2025-04-30T03:28:04.096178406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:28:04.098898 containerd[1987]: time="2025-04-30T03:28:04.098807921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:28:04.103263 containerd[1987]: time="2025-04-30T03:28:04.103096375Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:28:04.104883 containerd[1987]: time="2025-04-30T03:28:04.104614740Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:28:04.105912 containerd[1987]: time="2025-04-30T03:28:04.105680184Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:28:04.105912 containerd[1987]: time="2025-04-30T03:28:04.105711488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:28:04.105912 containerd[1987]: time="2025-04-30T03:28:04.105818658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:28:04.106513 containerd[1987]: time="2025-04-30T03:28:04.106307122Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:28:04.106513 containerd[1987]: time="2025-04-30T03:28:04.106336207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:28:04.106513 containerd[1987]: time="2025-04-30T03:28:04.106435710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:28:04.134003 systemd[1]: Started cri-containerd-ec16c8ab9d1760a6ba4c2535ccd368708eb8b546114a1bc717864c38079b41f6.scope - libcontainer container ec16c8ab9d1760a6ba4c2535ccd368708eb8b546114a1bc717864c38079b41f6. Apr 30 03:28:04.153737 systemd[1]: Started cri-containerd-51eaa2f39c19fdf004fd7df112b4b6d61ea03978b06e8569112a47d98f5f9606.scope - libcontainer container 51eaa2f39c19fdf004fd7df112b4b6d61ea03978b06e8569112a47d98f5f9606. Apr 30 03:28:04.159620 systemd[1]: Started cri-containerd-4b333d445da86d0298823b637ed97402c27fcb819c2ade3522351354097317f6.scope - libcontainer container 4b333d445da86d0298823b637ed97402c27fcb819c2ade3522351354097317f6. 
Apr 30 03:28:04.231670 containerd[1987]: time="2025-04-30T03:28:04.231628153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-5,Uid:4a1f527a2567eb8014f0f79b193697b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec16c8ab9d1760a6ba4c2535ccd368708eb8b546114a1bc717864c38079b41f6\"" Apr 30 03:28:04.242135 containerd[1987]: time="2025-04-30T03:28:04.241964149Z" level=info msg="CreateContainer within sandbox \"ec16c8ab9d1760a6ba4c2535ccd368708eb8b546114a1bc717864c38079b41f6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 03:28:04.254908 containerd[1987]: time="2025-04-30T03:28:04.254860360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-5,Uid:2f17f24a7944cdbb59437140e26e9d93,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b333d445da86d0298823b637ed97402c27fcb819c2ade3522351354097317f6\"" Apr 30 03:28:04.259945 containerd[1987]: time="2025-04-30T03:28:04.259902875Z" level=info msg="CreateContainer within sandbox \"4b333d445da86d0298823b637ed97402c27fcb819c2ade3522351354097317f6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 30 03:28:04.276047 kubelet[2849]: W0430 03:28:04.275938 2849 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.16.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-5&limit=500&resourceVersion=0": dial tcp 172.31.16.5:6443: connect: connection refused Apr 30 03:28:04.276047 kubelet[2849]: E0430 03:28:04.276047 2849 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.16.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-5&limit=500&resourceVersion=0": dial tcp 172.31.16.5:6443: connect: connection refused Apr 30 03:28:04.290002 containerd[1987]: time="2025-04-30T03:28:04.289936707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-5,Uid:d16d52574f6b62d5a9923a433c8dab83,Namespace:kube-system,Attempt:0,} returns sandbox id \"51eaa2f39c19fdf004fd7df112b4b6d61ea03978b06e8569112a47d98f5f9606\"" Apr 30 03:28:04.297119 containerd[1987]: time="2025-04-30T03:28:04.296440061Z" level=info msg="CreateContainer within sandbox \"ec16c8ab9d1760a6ba4c2535ccd368708eb8b546114a1bc717864c38079b41f6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"db8cbe24f70c0111fb065b428f421272be5d521fe66727d78eb9c46aa139f87d\"" Apr 30 03:28:04.297497 containerd[1987]: time="2025-04-30T03:28:04.297258073Z" level=info msg="CreateContainer within sandbox \"51eaa2f39c19fdf004fd7df112b4b6d61ea03978b06e8569112a47d98f5f9606\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 03:28:04.297497 containerd[1987]: time="2025-04-30T03:28:04.297325541Z" level=info msg="CreateContainer within sandbox \"4b333d445da86d0298823b637ed97402c27fcb819c2ade3522351354097317f6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"671482f102454f582483da29a6e10d7965a0ace21469d1f526523ec5ed729da9\"" Apr 30 03:28:04.297497 containerd[1987]: time="2025-04-30T03:28:04.297469206Z" level=info msg="StartContainer for \"db8cbe24f70c0111fb065b428f421272be5d521fe66727d78eb9c46aa139f87d\"" Apr 30 03:28:04.298182 containerd[1987]: time="2025-04-30T03:28:04.298154099Z" level=info msg="StartContainer for \"671482f102454f582483da29a6e10d7965a0ace21469d1f526523ec5ed729da9\"" Apr 30 03:28:04.327832 kubelet[2849]: E0430 
03:28:04.327758 2849 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-5?timeout=10s\": dial tcp 172.31.16.5:6443: connect: connection refused" interval="1.6s" Apr 30 03:28:04.334520 containerd[1987]: time="2025-04-30T03:28:04.334394910Z" level=info msg="CreateContainer within sandbox \"51eaa2f39c19fdf004fd7df112b4b6d61ea03978b06e8569112a47d98f5f9606\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c25f7009aa134439961e8fd026d40bf9f586263592823407b8bbe05643c2d59c\"" Apr 30 03:28:04.336352 containerd[1987]: time="2025-04-30T03:28:04.336309795Z" level=info msg="StartContainer for \"c25f7009aa134439961e8fd026d40bf9f586263592823407b8bbe05643c2d59c\"" Apr 30 03:28:04.348293 systemd[1]: Started cri-containerd-db8cbe24f70c0111fb065b428f421272be5d521fe66727d78eb9c46aa139f87d.scope - libcontainer container db8cbe24f70c0111fb065b428f421272be5d521fe66727d78eb9c46aa139f87d. Apr 30 03:28:04.358240 systemd[1]: Started cri-containerd-671482f102454f582483da29a6e10d7965a0ace21469d1f526523ec5ed729da9.scope - libcontainer container 671482f102454f582483da29a6e10d7965a0ace21469d1f526523ec5ed729da9. Apr 30 03:28:04.396261 systemd[1]: Started cri-containerd-c25f7009aa134439961e8fd026d40bf9f586263592823407b8bbe05643c2d59c.scope - libcontainer container c25f7009aa134439961e8fd026d40bf9f586263592823407b8bbe05643c2d59c. Apr 30 03:28:04.410868 kubelet[2849]: W0430 03:28:04.410798 2849 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.16.5:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.16.5:6443: connect: connection refused Apr 30 03:28:04.410868 kubelet[2849]: E0430 03:28:04.410878 2849 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.16.5:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.16.5:6443: connect: connection refused Apr 30 03:28:04.424046 kubelet[2849]: I0430 03:28:04.424000 2849 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-5" Apr 30 03:28:04.426013 kubelet[2849]: E0430 03:28:04.425972 2849 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.16.5:6443/api/v1/nodes\": dial tcp 172.31.16.5:6443: connect: connection refused" node="ip-172-31-16-5" Apr 30 03:28:04.446615 containerd[1987]: time="2025-04-30T03:28:04.446452404Z" level=info msg="StartContainer for \"db8cbe24f70c0111fb065b428f421272be5d521fe66727d78eb9c46aa139f87d\" returns successfully" Apr 30 03:28:04.449421 kubelet[2849]: W0430 03:28:04.449385 2849 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.16.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.5:6443: connect: connection refused Apr 30 03:28:04.449559 kubelet[2849]: E0430 03:28:04.449432 2849 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.16.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.5:6443: connect: connection refused Apr 30 03:28:04.489158 containerd[1987]: time="2025-04-30T03:28:04.489107312Z" level=info msg="StartContainer for \"671482f102454f582483da29a6e10d7965a0ace21469d1f526523ec5ed729da9\" returns successfully" Apr 30 
03:28:04.504043 containerd[1987]: time="2025-04-30T03:28:04.503983470Z" level=info msg="StartContainer for \"c25f7009aa134439961e8fd026d40bf9f586263592823407b8bbe05643c2d59c\" returns successfully" Apr 30 03:28:04.655978 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Apr 30 03:28:04.912549 kubelet[2849]: E0430 03:28:04.912315 2849 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.16.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.16.5:6443: connect: connection refused Apr 30 03:28:06.029791 kubelet[2849]: I0430 03:28:06.029272 2849 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-5" Apr 30 03:28:07.126699 kubelet[2849]: E0430 03:28:07.126662 2849 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-16-5\" not found" node="ip-172-31-16-5" Apr 30 03:28:07.198726 kubelet[2849]: I0430 03:28:07.197701 2849 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-16-5" Apr 30 03:28:07.211908 kubelet[2849]: E0430 03:28:07.211801 2849 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-16-5\" not found" Apr 30 03:28:07.312440 kubelet[2849]: E0430 03:28:07.312373 2849 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-16-5\" not found" Apr 30 03:28:07.857868 kubelet[2849]: E0430 03:28:07.857812 2849 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-16-5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-16-5" Apr 30 03:28:07.896241 kubelet[2849]: I0430 03:28:07.896188 2849 apiserver.go:52] "Watching apiserver" Apr 30 03:28:07.921418 kubelet[2849]: I0430 03:28:07.921378 2849 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 03:28:09.220282 systemd[1]: Reloading requested from client PID 3128 ('systemctl') (unit session-7.scope)... Apr 30 03:28:09.220302 systemd[1]: Reloading... Apr 30 03:28:09.320047 zram_generator::config[3164]: No configuration found. Apr 30 03:28:09.461287 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:28:09.563740 systemd[1]: Reloading finished in 342 ms. Apr 30 03:28:09.607173 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:28:09.618278 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 03:28:09.618583 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:28:09.625423 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:28:09.898125 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:28:09.908501 (kubelet)[3229]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 03:28:09.985483 kubelet[3229]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
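All three static pods above went RunPodSandbox -> CreateContainer -> StartContainer, and the journal records each returned ID. A throwaway parser pairing pod names with sandbox IDs; the regex is mine, tuned to the message format in this journal:

    import re

    # Matches the 'RunPodSandbox ... returns sandbox id' completion messages above.
    SANDBOX = re.compile(
        r'RunPodSandbox for &PodSandboxMetadata\{Name:(?P<pod>[^,]+)'
        r'[^"\\]* returns sandbox id \\?"(?P<sid>[0-9a-f]{64})\\?"'
    )

    def sandbox_ids(journal_text: str) -> dict:
        """Map pod name -> sandbox id from containerd completion lines."""
        return {m.group("pod"): m.group("sid") for m in SANDBOX.finditer(journal_text)}

    # Here: kube-apiserver-ip-172-31-16-5 -> ec16c8ab9d17..., kube-scheduler-ip-172-31-16-5
    # -> 4b333d445da8..., kube-controller-manager-ip-172-31-16-5 -> 51eaa2f39c19...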
Apr 30 03:28:09.985483 kubelet[3229]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 03:28:09.985483 kubelet[3229]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:28:09.985964 kubelet[3229]: I0430 03:28:09.985546 3229 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 03:28:09.993716 kubelet[3229]: I0430 03:28:09.993680 3229 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 03:28:09.993716 kubelet[3229]: I0430 03:28:09.993706 3229 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 03:28:09.994128 kubelet[3229]: I0430 03:28:09.994095 3229 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 03:28:09.998128 kubelet[3229]: I0430 03:28:09.998085 3229 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Apr 30 03:28:10.000964 kubelet[3229]: I0430 03:28:10.000920 3229 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 03:28:10.009374 kubelet[3229]: I0430 03:28:10.009346 3229 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 30 03:28:10.009652 kubelet[3229]: I0430 03:28:10.009610 3229 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 03:28:10.009892 kubelet[3229]: I0430 03:28:10.009644 3229 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 03:28:10.010051 kubelet[3229]: I0430 03:28:10.009902 3229 topology_manager.go:138] "Creating topology manager with none policy" Apr 
30 03:28:10.010051 kubelet[3229]: I0430 03:28:10.009917 3229 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 03:28:10.012855 kubelet[3229]: I0430 03:28:10.012763 3229 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:28:10.014101 kubelet[3229]: I0430 03:28:10.014083 3229 kubelet.go:400] "Attempting to sync node with API server" Apr 30 03:28:10.014170 kubelet[3229]: I0430 03:28:10.014109 3229 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 03:28:10.014170 kubelet[3229]: I0430 03:28:10.014137 3229 kubelet.go:312] "Adding apiserver pod source" Apr 30 03:28:10.014170 kubelet[3229]: I0430 03:28:10.014156 3229 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 03:28:10.025052 kubelet[3229]: I0430 03:28:10.024703 3229 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 03:28:10.025052 kubelet[3229]: I0430 03:28:10.024958 3229 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 03:28:10.026167 kubelet[3229]: I0430 03:28:10.026151 3229 server.go:1264] "Started kubelet" Apr 30 03:28:10.028230 kubelet[3229]: I0430 03:28:10.027061 3229 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 03:28:10.028230 kubelet[3229]: I0430 03:28:10.027334 3229 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 03:28:10.028230 kubelet[3229]: I0430 03:28:10.027364 3229 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 03:28:10.028532 kubelet[3229]: I0430 03:28:10.028519 3229 server.go:455] "Adding debug handlers to kubelet server" Apr 30 03:28:10.028940 kubelet[3229]: I0430 03:28:10.028922 3229 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 03:28:10.034812 kubelet[3229]: E0430 03:28:10.034791 3229 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 03:28:10.037569 kubelet[3229]: I0430 03:28:10.037551 3229 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 03:28:10.038605 kubelet[3229]: I0430 03:28:10.038570 3229 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 03:28:10.038938 kubelet[3229]: I0430 03:28:10.038926 3229 reconciler.go:26] "Reconciler: start to sync state" Apr 30 03:28:10.042611 kubelet[3229]: I0430 03:28:10.042583 3229 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 03:28:10.042727 kubelet[3229]: I0430 03:28:10.042705 3229 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 03:28:10.044636 kubelet[3229]: I0430 03:28:10.044617 3229 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 03:28:10.044738 kubelet[3229]: I0430 03:28:10.044731 3229 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 03:28:10.044800 kubelet[3229]: I0430 03:28:10.044794 3229 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 03:28:10.044891 kubelet[3229]: E0430 03:28:10.044878 3229 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 03:28:10.048376 kubelet[3229]: I0430 03:28:10.048339 3229 factory.go:221] Registration of the containerd container factory successfully Apr 30 03:28:10.048515 kubelet[3229]: I0430 03:28:10.048395 3229 factory.go:221] Registration of the systemd container factory successfully Apr 30 03:28:10.095384 kubelet[3229]: I0430 03:28:10.095357 3229 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 03:28:10.095565 kubelet[3229]: I0430 03:28:10.095552 3229 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 03:28:10.095643 kubelet[3229]: I0430 03:28:10.095636 3229 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:28:10.095828 kubelet[3229]: I0430 03:28:10.095818 3229 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 30 03:28:10.095971 kubelet[3229]: I0430 03:28:10.095875 3229 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 30 03:28:10.095971 kubelet[3229]: I0430 03:28:10.095896 3229 policy_none.go:49] "None policy: Start" Apr 30 03:28:10.097053 kubelet[3229]: I0430 03:28:10.096669 3229 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 03:28:10.097053 kubelet[3229]: I0430 03:28:10.096695 3229 state_mem.go:35] "Initializing new in-memory state store" Apr 30 03:28:10.097053 kubelet[3229]: I0430 03:28:10.096965 3229 state_mem.go:75] "Updated machine memory state" Apr 30 03:28:10.108038 kubelet[3229]: I0430 03:28:10.107997 3229 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 03:28:10.108257 kubelet[3229]: I0430 03:28:10.108217 3229 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 03:28:10.108353 kubelet[3229]: I0430 03:28:10.108337 3229 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 03:28:10.141327 kubelet[3229]: I0430 03:28:10.141295 3229 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-5" Apr 30 03:28:10.146699 kubelet[3229]: I0430 03:28:10.145494 3229 topology_manager.go:215] "Topology Admit Handler" podUID="4a1f527a2567eb8014f0f79b193697b5" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-16-5" Apr 30 03:28:10.146699 kubelet[3229]: I0430 03:28:10.145596 3229 topology_manager.go:215] "Topology Admit Handler" podUID="d16d52574f6b62d5a9923a433c8dab83" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-16-5" Apr 30 03:28:10.146699 kubelet[3229]: I0430 03:28:10.145716 3229 topology_manager.go:215] "Topology Admit Handler" podUID="2f17f24a7944cdbb59437140e26e9d93" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-16-5" Apr 30 03:28:10.160091 kubelet[3229]: I0430 03:28:10.159729 3229 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-16-5" Apr 30 03:28:10.160091 kubelet[3229]: I0430 03:28:10.159824 3229 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-16-5" Apr 30 03:28:10.241518 kubelet[3229]: I0430 03:28:10.241109 3229 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d16d52574f6b62d5a9923a433c8dab83-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-5\" (UID: \"d16d52574f6b62d5a9923a433c8dab83\") " pod="kube-system/kube-controller-manager-ip-172-31-16-5" Apr 30 03:28:10.241518 kubelet[3229]: I0430 03:28:10.241172 3229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d16d52574f6b62d5a9923a433c8dab83-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-5\" (UID: \"d16d52574f6b62d5a9923a433c8dab83\") " pod="kube-system/kube-controller-manager-ip-172-31-16-5" Apr 30 03:28:10.241518 kubelet[3229]: I0430 03:28:10.241211 3229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d16d52574f6b62d5a9923a433c8dab83-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-5\" (UID: \"d16d52574f6b62d5a9923a433c8dab83\") " pod="kube-system/kube-controller-manager-ip-172-31-16-5" Apr 30 03:28:10.241518 kubelet[3229]: I0430 03:28:10.241253 3229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2f17f24a7944cdbb59437140e26e9d93-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-5\" (UID: \"2f17f24a7944cdbb59437140e26e9d93\") " pod="kube-system/kube-scheduler-ip-172-31-16-5" Apr 30 03:28:10.241518 kubelet[3229]: I0430 03:28:10.241305 3229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d16d52574f6b62d5a9923a433c8dab83-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-5\" (UID: \"d16d52574f6b62d5a9923a433c8dab83\") " pod="kube-system/kube-controller-manager-ip-172-31-16-5" Apr 30 03:28:10.241862 kubelet[3229]: I0430 03:28:10.241348 3229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4a1f527a2567eb8014f0f79b193697b5-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-5\" (UID: \"4a1f527a2567eb8014f0f79b193697b5\") " pod="kube-system/kube-apiserver-ip-172-31-16-5" Apr 30 03:28:10.241862 kubelet[3229]: I0430 03:28:10.241372 3229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4a1f527a2567eb8014f0f79b193697b5-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-5\" (UID: \"4a1f527a2567eb8014f0f79b193697b5\") " pod="kube-system/kube-apiserver-ip-172-31-16-5" Apr 30 03:28:10.241862 kubelet[3229]: I0430 03:28:10.241395 3229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d16d52574f6b62d5a9923a433c8dab83-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-5\" (UID: \"d16d52574f6b62d5a9923a433c8dab83\") " pod="kube-system/kube-controller-manager-ip-172-31-16-5" Apr 30 03:28:10.241862 kubelet[3229]: I0430 03:28:10.241453 3229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4a1f527a2567eb8014f0f79b193697b5-ca-certs\") pod \"kube-apiserver-ip-172-31-16-5\" (UID: \"4a1f527a2567eb8014f0f79b193697b5\") " 
pod="kube-system/kube-apiserver-ip-172-31-16-5" Apr 30 03:28:11.023119 kubelet[3229]: I0430 03:28:11.023079 3229 apiserver.go:52] "Watching apiserver" Apr 30 03:28:11.039741 kubelet[3229]: I0430 03:28:11.039675 3229 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 03:28:11.131308 kubelet[3229]: I0430 03:28:11.131048 3229 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-16-5" podStartSLOduration=1.131011788 podStartE2EDuration="1.131011788s" podCreationTimestamp="2025-04-30 03:28:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:28:11.101545367 +0000 UTC m=+1.183974075" watchObservedRunningTime="2025-04-30 03:28:11.131011788 +0000 UTC m=+1.213440497" Apr 30 03:28:11.144955 kubelet[3229]: I0430 03:28:11.144532 3229 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-16-5" podStartSLOduration=1.144511127 podStartE2EDuration="1.144511127s" podCreationTimestamp="2025-04-30 03:28:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:28:11.14430073 +0000 UTC m=+1.226729441" watchObservedRunningTime="2025-04-30 03:28:11.144511127 +0000 UTC m=+1.226939839" Apr 30 03:28:11.144955 kubelet[3229]: I0430 03:28:11.144704 3229 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-16-5" podStartSLOduration=1.144691289 podStartE2EDuration="1.144691289s" podCreationTimestamp="2025-04-30 03:28:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:28:11.131845114 +0000 UTC m=+1.214273826" watchObservedRunningTime="2025-04-30 03:28:11.144691289 +0000 UTC m=+1.227120002" Apr 30 03:28:15.838668 sudo[2294]: pam_unix(sudo:session): session closed for user root Apr 30 03:28:15.876744 sshd[2291]: pam_unix(sshd:session): session closed for user core Apr 30 03:28:15.879368 systemd[1]: sshd@6-172.31.16.5:22-147.75.109.163:51882.service: Deactivated successfully. Apr 30 03:28:15.881604 systemd[1]: session-7.scope: Deactivated successfully. Apr 30 03:28:15.881757 systemd[1]: session-7.scope: Consumed 5.497s CPU time, 187.9M memory peak, 0B memory swap peak. Apr 30 03:28:15.882772 systemd-logind[1956]: Session 7 logged out. Waiting for processes to exit. Apr 30 03:28:15.884825 systemd-logind[1956]: Removed session 7. Apr 30 03:28:18.675527 update_engine[1957]: I20250430 03:28:18.675426 1957 update_attempter.cc:509] Updating boot flags... Apr 30 03:28:18.741141 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3320) Apr 30 03:28:18.911119 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3321) Apr 30 03:28:19.096167 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3321) Apr 30 03:28:24.954835 kubelet[3229]: I0430 03:28:24.954810 3229 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 30 03:28:24.957778 containerd[1987]: time="2025-04-30T03:28:24.957446862Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 30 03:28:24.958932 kubelet[3229]: I0430 03:28:24.958155 3229 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 30 03:28:25.544596 kubelet[3229]: I0430 03:28:25.544549 3229 topology_manager.go:215] "Topology Admit Handler" podUID="db50d795-1bfa-4354-b6d1-09f7c08343be" podNamespace="kube-system" podName="kube-proxy-cpnzc" Apr 30 03:28:25.563669 systemd[1]: Created slice kubepods-besteffort-poddb50d795_1bfa_4354_b6d1_09f7c08343be.slice - libcontainer container kubepods-besteffort-poddb50d795_1bfa_4354_b6d1_09f7c08343be.slice. Apr 30 03:28:25.657499 kubelet[3229]: I0430 03:28:25.657431 3229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/db50d795-1bfa-4354-b6d1-09f7c08343be-xtables-lock\") pod \"kube-proxy-cpnzc\" (UID: \"db50d795-1bfa-4354-b6d1-09f7c08343be\") " pod="kube-system/kube-proxy-cpnzc" Apr 30 03:28:25.657499 kubelet[3229]: I0430 03:28:25.657473 3229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/db50d795-1bfa-4354-b6d1-09f7c08343be-lib-modules\") pod \"kube-proxy-cpnzc\" (UID: \"db50d795-1bfa-4354-b6d1-09f7c08343be\") " pod="kube-system/kube-proxy-cpnzc" Apr 30 03:28:25.657499 kubelet[3229]: I0430 03:28:25.657498 3229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzzz2\" (UniqueName: \"kubernetes.io/projected/db50d795-1bfa-4354-b6d1-09f7c08343be-kube-api-access-fzzz2\") pod \"kube-proxy-cpnzc\" (UID: \"db50d795-1bfa-4354-b6d1-09f7c08343be\") " pod="kube-system/kube-proxy-cpnzc" Apr 30 03:28:25.657721 kubelet[3229]: I0430 03:28:25.657519 3229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/db50d795-1bfa-4354-b6d1-09f7c08343be-kube-proxy\") pod \"kube-proxy-cpnzc\" (UID: \"db50d795-1bfa-4354-b6d1-09f7c08343be\") " pod="kube-system/kube-proxy-cpnzc" Apr 30 03:28:25.770841 kubelet[3229]: E0430 03:28:25.769356 3229 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Apr 30 03:28:25.770841 kubelet[3229]: E0430 03:28:25.769405 3229 projected.go:200] Error preparing data for projected volume kube-api-access-fzzz2 for pod kube-system/kube-proxy-cpnzc: configmap "kube-root-ca.crt" not found Apr 30 03:28:25.770841 kubelet[3229]: E0430 03:28:25.769488 3229 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/db50d795-1bfa-4354-b6d1-09f7c08343be-kube-api-access-fzzz2 podName:db50d795-1bfa-4354-b6d1-09f7c08343be nodeName:}" failed. No retries permitted until 2025-04-30 03:28:26.269461667 +0000 UTC m=+16.351890380 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fzzz2" (UniqueName: "kubernetes.io/projected/db50d795-1bfa-4354-b6d1-09f7c08343be-kube-api-access-fzzz2") pod "kube-proxy-cpnzc" (UID: "db50d795-1bfa-4354-b6d1-09f7c08343be") : configmap "kube-root-ca.crt" not found Apr 30 03:28:25.970744 kubelet[3229]: I0430 03:28:25.970619 3229 topology_manager.go:215] "Topology Admit Handler" podUID="294c5976-486a-46a3-a011-603850cbb1ff" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-nnsk8" Apr 30 03:28:25.979276 systemd[1]: Created slice kubepods-besteffort-pod294c5976_486a_46a3_a011_603850cbb1ff.slice - libcontainer container kubepods-besteffort-pod294c5976_486a_46a3_a011_603850cbb1ff.slice. Apr 30 03:28:26.060846 kubelet[3229]: I0430 03:28:26.060701 3229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6pmg\" (UniqueName: \"kubernetes.io/projected/294c5976-486a-46a3-a011-603850cbb1ff-kube-api-access-w6pmg\") pod \"tigera-operator-797db67f8-nnsk8\" (UID: \"294c5976-486a-46a3-a011-603850cbb1ff\") " pod="tigera-operator/tigera-operator-797db67f8-nnsk8" Apr 30 03:28:26.060846 kubelet[3229]: I0430 03:28:26.060820 3229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/294c5976-486a-46a3-a011-603850cbb1ff-var-lib-calico\") pod \"tigera-operator-797db67f8-nnsk8\" (UID: \"294c5976-486a-46a3-a011-603850cbb1ff\") " pod="tigera-operator/tigera-operator-797db67f8-nnsk8" Apr 30 03:28:26.282450 containerd[1987]: time="2025-04-30T03:28:26.282409780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-nnsk8,Uid:294c5976-486a-46a3-a011-603850cbb1ff,Namespace:tigera-operator,Attempt:0,}" Apr 30 03:28:26.313551 containerd[1987]: time="2025-04-30T03:28:26.312863364Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:28:26.313551 containerd[1987]: time="2025-04-30T03:28:26.313311058Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:28:26.313551 containerd[1987]: time="2025-04-30T03:28:26.313326938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:28:26.313551 containerd[1987]: time="2025-04-30T03:28:26.313417191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:28:26.338235 systemd[1]: Started cri-containerd-b919c4b8238dc7f2d66f8d9b5e2ddf05c2c7d95e3aceb764a683c0db21d11a73.scope - libcontainer container b919c4b8238dc7f2d66f8d9b5e2ddf05c2c7d95e3aceb764a683c0db21d11a73. 
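[editor's note] The MountVolume.SetUp failure above is benign ordering: kube-proxy's projected token volume needs the kube-root-ca.crt ConfigMap, which the controller manager publishes into each namespace shortly after the control plane comes up, so the kubelet refuses retries for 500ms (durationBeforeRetry) and tries again. A sketch of that kind of capped exponential backoff, with illustrative names rather than the kubelet's real ones:

// backoff.go: retry an operation whose dependency is not published yet,
// doubling the wait each time up to a cap, as the entry above suggests.
package main

import (
	"errors"
	"fmt"
	"time"
)

var errNotFound = errors.New(`configmap "kube-root-ca.crt" not found`)

func mountProjectedVolume(attempt int) error {
	if attempt < 3 { // pretend the ConfigMap appears on the third try
		return errNotFound
	}
	return nil
}

func main() {
	delay := 500 * time.Millisecond // initial durationBeforeRetry from the log
	for attempt := 0; ; attempt++ {
		if err := mountProjectedVolume(attempt); err != nil {
			fmt.Printf("attempt %d failed: %v; no retries permitted for %v\n",
				attempt, err, delay)
			time.Sleep(delay)
			if delay < 2*time.Minute { // cap the backoff (assumed limit)
				delay *= 2
			}
			continue
		}
		fmt.Println("volume mounted")
		return
	}
}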
Apr 30 03:28:26.390618 containerd[1987]: time="2025-04-30T03:28:26.390572241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-nnsk8,Uid:294c5976-486a-46a3-a011-603850cbb1ff,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b919c4b8238dc7f2d66f8d9b5e2ddf05c2c7d95e3aceb764a683c0db21d11a73\"" Apr 30 03:28:26.398909 containerd[1987]: time="2025-04-30T03:28:26.398872356Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" Apr 30 03:28:26.483155 containerd[1987]: time="2025-04-30T03:28:26.482778763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cpnzc,Uid:db50d795-1bfa-4354-b6d1-09f7c08343be,Namespace:kube-system,Attempt:0,}" Apr 30 03:28:26.508413 containerd[1987]: time="2025-04-30T03:28:26.508277957Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:28:26.509222 containerd[1987]: time="2025-04-30T03:28:26.508548296Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:28:26.509565 containerd[1987]: time="2025-04-30T03:28:26.509204155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:28:26.509565 containerd[1987]: time="2025-04-30T03:28:26.509406652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:28:26.526200 systemd[1]: Started cri-containerd-304e7cf4219e820ef8c37bbfa3e718b79b1266af3bafe482e94f22f8ac97376f.scope - libcontainer container 304e7cf4219e820ef8c37bbfa3e718b79b1266af3bafe482e94f22f8ac97376f. Apr 30 03:28:26.548603 containerd[1987]: time="2025-04-30T03:28:26.548488834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cpnzc,Uid:db50d795-1bfa-4354-b6d1-09f7c08343be,Namespace:kube-system,Attempt:0,} returns sandbox id \"304e7cf4219e820ef8c37bbfa3e718b79b1266af3bafe482e94f22f8ac97376f\"" Apr 30 03:28:26.553238 containerd[1987]: time="2025-04-30T03:28:26.553202667Z" level=info msg="CreateContainer within sandbox \"304e7cf4219e820ef8c37bbfa3e718b79b1266af3bafe482e94f22f8ac97376f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 30 03:28:26.572789 containerd[1987]: time="2025-04-30T03:28:26.572682190Z" level=info msg="CreateContainer within sandbox \"304e7cf4219e820ef8c37bbfa3e718b79b1266af3bafe482e94f22f8ac97376f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"450b883f86376ffc4079d8544cc8159d05f019f46ef5660ce3412d03be3898fe\"" Apr 30 03:28:26.573498 containerd[1987]: time="2025-04-30T03:28:26.573459475Z" level=info msg="StartContainer for \"450b883f86376ffc4079d8544cc8159d05f019f46ef5660ce3412d03be3898fe\"" Apr 30 03:28:26.602226 systemd[1]: Started cri-containerd-450b883f86376ffc4079d8544cc8159d05f019f46ef5660ce3412d03be3898fe.scope - libcontainer container 450b883f86376ffc4079d8544cc8159d05f019f46ef5660ce3412d03be3898fe. 
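[editor's note] The sandbox and container entries above trace the standard CRI call order for kube-proxy: RunPodSandbox returns a sandbox id (304e7c...), then CreateContainer and StartContainer run the kube-proxy container inside it. A skeletal sketch of the same sequence against the CRI v1 gRPC API, assuming containerd's default socket path and with the pod and container configs reduced to bare metadata:

// crirun.go: RunPodSandbox -> CreateContainer -> StartContainer over CRI v1.
package main

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtime "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtime.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	sandboxCfg := &runtime.PodSandboxConfig{
		Metadata: &runtime.PodSandboxMetadata{
			Name: "kube-proxy-cpnzc", Namespace: "kube-system",
			Uid: "db50d795-1bfa-4354-b6d1-09f7c08343be",
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtime.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		panic(err)
	}
	ctr, err := rt.CreateContainer(ctx, &runtime.CreateContainerRequest{
		PodSandboxId:  sb.PodSandboxId,
		Config:        &runtime.ContainerConfig{Metadata: &runtime.ContainerMetadata{Name: "kube-proxy"}},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		panic(err)
	}
	rt.StartContainer(ctx, &runtime.StartContainerRequest{ContainerId: ctr.ContainerId})
}

The sandbox (pause container plus network namespace) must exist first so that every container in the pod can join the same namespaces, which is why the log always shows the sandbox scope started before StartContainer.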
Apr 30 03:28:26.633027 containerd[1987]: time="2025-04-30T03:28:26.632964144Z" level=info msg="StartContainer for \"450b883f86376ffc4079d8544cc8159d05f019f46ef5660ce3412d03be3898fe\" returns successfully" Apr 30 03:28:27.124251 kubelet[3229]: I0430 03:28:27.121744 3229 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cpnzc" podStartSLOduration=2.1217255919999998 podStartE2EDuration="2.121725592s" podCreationTimestamp="2025-04-30 03:28:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:28:27.121624975 +0000 UTC m=+17.204053687" watchObservedRunningTime="2025-04-30 03:28:27.121725592 +0000 UTC m=+17.204154301" Apr 30 03:28:30.178522 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1664266540.mount: Deactivated successfully. Apr 30 03:28:30.832360 containerd[1987]: time="2025-04-30T03:28:30.832309625Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:30.834037 containerd[1987]: time="2025-04-30T03:28:30.833926322Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662" Apr 30 03:28:30.835855 containerd[1987]: time="2025-04-30T03:28:30.835806323Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:30.839387 containerd[1987]: time="2025-04-30T03:28:30.838773421Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:30.839387 containerd[1987]: time="2025-04-30T03:28:30.839277850Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 4.440365446s" Apr 30 03:28:30.839387 containerd[1987]: time="2025-04-30T03:28:30.839306415Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" Apr 30 03:28:30.841511 containerd[1987]: time="2025-04-30T03:28:30.841477576Z" level=info msg="CreateContainer within sandbox \"b919c4b8238dc7f2d66f8d9b5e2ddf05c2c7d95e3aceb764a683c0db21d11a73\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 30 03:28:30.864165 containerd[1987]: time="2025-04-30T03:28:30.864113317Z" level=info msg="CreateContainer within sandbox \"b919c4b8238dc7f2d66f8d9b5e2ddf05c2c7d95e3aceb764a683c0db21d11a73\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b45513be812a5548f6cd480f04fd71be3312524aa313563d7730654cf8a0d31e\"" Apr 30 03:28:30.865043 containerd[1987]: time="2025-04-30T03:28:30.864986892Z" level=info msg="StartContainer for \"b45513be812a5548f6cd480f04fd71be3312524aa313563d7730654cf8a0d31e\"" Apr 30 03:28:30.900303 systemd[1]: Started cri-containerd-b45513be812a5548f6cd480f04fd71be3312524aa313563d7730654cf8a0d31e.scope - libcontainer container b45513be812a5548f6cd480f04fd71be3312524aa313563d7730654cf8a0d31e. 
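[editor's note] The PullImage entries in this stretch (the tigera-operator pull completes above, "in 4.440365446s") go through the CRI image service on the same socket. A hedged sketch of that leg of the API, again a minimal illustration rather than kubelet code:

// pull.go: pull an image over the CRI v1 image service and report the
// resolved reference, as in the "returns image reference" entries above.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtime "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	img := runtime.NewImageServiceClient(conn)

	start := time.Now()
	resp, err := img.PullImage(context.Background(), &runtime.PullImageRequest{
		Image: &runtime.ImageSpec{Image: "quay.io/tigera/operator:v1.36.7"},
	})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pulled %s in %s\n", resp.ImageRef, time.Since(start))
}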
Apr 30 03:28:30.930622 containerd[1987]: time="2025-04-30T03:28:30.930552599Z" level=info msg="StartContainer for \"b45513be812a5548f6cd480f04fd71be3312524aa313563d7730654cf8a0d31e\" returns successfully" Apr 30 03:28:34.055754 kubelet[3229]: I0430 03:28:34.054986 3229 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-nnsk8" podStartSLOduration=4.613127979 podStartE2EDuration="9.054961054s" podCreationTimestamp="2025-04-30 03:28:25 +0000 UTC" firstStartedPulling="2025-04-30 03:28:26.398412954 +0000 UTC m=+16.480841643" lastFinishedPulling="2025-04-30 03:28:30.840246027 +0000 UTC m=+20.922674718" observedRunningTime="2025-04-30 03:28:31.162205435 +0000 UTC m=+21.244634146" watchObservedRunningTime="2025-04-30 03:28:34.054961054 +0000 UTC m=+24.137389762" Apr 30 03:28:34.084543 kubelet[3229]: I0430 03:28:34.083818 3229 topology_manager.go:215] "Topology Admit Handler" podUID="a1e42e16-140a-4044-801d-30e2d6bbb343" podNamespace="calico-system" podName="calico-typha-766c846665-8m6ks" Apr 30 03:28:34.096871 systemd[1]: Created slice kubepods-besteffort-poda1e42e16_140a_4044_801d_30e2d6bbb343.slice - libcontainer container kubepods-besteffort-poda1e42e16_140a_4044_801d_30e2d6bbb343.slice. Apr 30 03:28:34.127686 kubelet[3229]: I0430 03:28:34.127527 3229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a1e42e16-140a-4044-801d-30e2d6bbb343-typha-certs\") pod \"calico-typha-766c846665-8m6ks\" (UID: \"a1e42e16-140a-4044-801d-30e2d6bbb343\") " pod="calico-system/calico-typha-766c846665-8m6ks" Apr 30 03:28:34.127686 kubelet[3229]: I0430 03:28:34.127583 3229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1e42e16-140a-4044-801d-30e2d6bbb343-tigera-ca-bundle\") pod \"calico-typha-766c846665-8m6ks\" (UID: \"a1e42e16-140a-4044-801d-30e2d6bbb343\") " pod="calico-system/calico-typha-766c846665-8m6ks" Apr 30 03:28:34.127686 kubelet[3229]: I0430 03:28:34.127624 3229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmjzz\" (UniqueName: \"kubernetes.io/projected/a1e42e16-140a-4044-801d-30e2d6bbb343-kube-api-access-nmjzz\") pod \"calico-typha-766c846665-8m6ks\" (UID: \"a1e42e16-140a-4044-801d-30e2d6bbb343\") " pod="calico-system/calico-typha-766c846665-8m6ks" Apr 30 03:28:34.220365 kubelet[3229]: I0430 03:28:34.220309 3229 topology_manager.go:215] "Topology Admit Handler" podUID="d76778f1-83ce-4e9e-8d18-b99aeac99167" podNamespace="calico-system" podName="calico-node-mvv2q" Apr 30 03:28:34.230456 systemd[1]: Created slice kubepods-besteffort-podd76778f1_83ce_4e9e_8d18_b99aeac99167.slice - libcontainer container kubepods-besteffort-podd76778f1_83ce_4e9e_8d18_b99aeac99167.slice. 
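[editor's note] The tigera-operator latency entry above decomposes cleanly: podStartE2EDuration = watchObservedRunningTime - podCreationTimestamp = 03:28:34.054961054 - 03:28:25 = 9.054961054s, and podStartSLOduration subtracts the image-pull window (03:28:30.840246027 - 03:28:26.398412954 = 4.441833073s), giving 4.613127981s, which matches the logged 4.613127979 up to float rounding. That reading (SLO duration excludes pull time) is my inference from the numbers, reproduced here:

// sloduration.go: recompute the tigera-operator startup figures from the
// timestamps logged above.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-04-30 03:28:25 +0000 UTC")
	running := mustParse("2025-04-30 03:28:34.054961054 +0000 UTC")
	pullStart := mustParse("2025-04-30 03:28:26.398412954 +0000 UTC")
	pullEnd := mustParse("2025-04-30 03:28:30.840246027 +0000 UTC")

	e2e := running.Sub(created)         // 9.054961054s = podStartE2EDuration
	slo := e2e - pullEnd.Sub(pullStart) // ~4.613127981s = podStartSLOduration
	fmt.Println(e2e, slo)
}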
Apr 30 03:28:34.332130 kubelet[3229]: I0430 03:28:34.329821 3229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d76778f1-83ce-4e9e-8d18-b99aeac99167-policysync\") pod \"calico-node-mvv2q\" (UID: \"d76778f1-83ce-4e9e-8d18-b99aeac99167\") " pod="calico-system/calico-node-mvv2q" Apr 30 03:28:34.332130 kubelet[3229]: I0430 03:28:34.329873 3229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d76778f1-83ce-4e9e-8d18-b99aeac99167-cni-bin-dir\") pod \"calico-node-mvv2q\" (UID: \"d76778f1-83ce-4e9e-8d18-b99aeac99167\") " pod="calico-system/calico-node-mvv2q" Apr 30 03:28:34.332130 kubelet[3229]: I0430 03:28:34.329899 3229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d76778f1-83ce-4e9e-8d18-b99aeac99167-var-run-calico\") pod \"calico-node-mvv2q\" (UID: \"d76778f1-83ce-4e9e-8d18-b99aeac99167\") " pod="calico-system/calico-node-mvv2q" Apr 30 03:28:34.332130 kubelet[3229]: I0430 03:28:34.329923 3229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d76778f1-83ce-4e9e-8d18-b99aeac99167-cni-log-dir\") pod \"calico-node-mvv2q\" (UID: \"d76778f1-83ce-4e9e-8d18-b99aeac99167\") " pod="calico-system/calico-node-mvv2q" Apr 30 03:28:34.332130 kubelet[3229]: I0430 03:28:34.329949 3229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d76778f1-83ce-4e9e-8d18-b99aeac99167-tigera-ca-bundle\") pod \"calico-node-mvv2q\" (UID: \"d76778f1-83ce-4e9e-8d18-b99aeac99167\") " pod="calico-system/calico-node-mvv2q" Apr 30 03:28:34.332432 kubelet[3229]: I0430 03:28:34.329972 3229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d76778f1-83ce-4e9e-8d18-b99aeac99167-var-lib-calico\") pod \"calico-node-mvv2q\" (UID: \"d76778f1-83ce-4e9e-8d18-b99aeac99167\") " pod="calico-system/calico-node-mvv2q" Apr 30 03:28:34.332432 kubelet[3229]: I0430 03:28:34.329994 3229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d76778f1-83ce-4e9e-8d18-b99aeac99167-flexvol-driver-host\") pod \"calico-node-mvv2q\" (UID: \"d76778f1-83ce-4e9e-8d18-b99aeac99167\") " pod="calico-system/calico-node-mvv2q" Apr 30 03:28:34.332432 kubelet[3229]: I0430 03:28:34.330032 3229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d76778f1-83ce-4e9e-8d18-b99aeac99167-node-certs\") pod \"calico-node-mvv2q\" (UID: \"d76778f1-83ce-4e9e-8d18-b99aeac99167\") " pod="calico-system/calico-node-mvv2q" Apr 30 03:28:34.332432 kubelet[3229]: I0430 03:28:34.330060 3229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpnh7\" (UniqueName: \"kubernetes.io/projected/d76778f1-83ce-4e9e-8d18-b99aeac99167-kube-api-access-wpnh7\") pod \"calico-node-mvv2q\" (UID: \"d76778f1-83ce-4e9e-8d18-b99aeac99167\") " pod="calico-system/calico-node-mvv2q" Apr 30 03:28:34.332432 kubelet[3229]: I0430 03:28:34.330086 3229 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d76778f1-83ce-4e9e-8d18-b99aeac99167-xtables-lock\") pod \"calico-node-mvv2q\" (UID: \"d76778f1-83ce-4e9e-8d18-b99aeac99167\") " pod="calico-system/calico-node-mvv2q" Apr 30 03:28:34.332669 kubelet[3229]: I0430 03:28:34.330106 3229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d76778f1-83ce-4e9e-8d18-b99aeac99167-cni-net-dir\") pod \"calico-node-mvv2q\" (UID: \"d76778f1-83ce-4e9e-8d18-b99aeac99167\") " pod="calico-system/calico-node-mvv2q" Apr 30 03:28:34.332669 kubelet[3229]: I0430 03:28:34.330129 3229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d76778f1-83ce-4e9e-8d18-b99aeac99167-lib-modules\") pod \"calico-node-mvv2q\" (UID: \"d76778f1-83ce-4e9e-8d18-b99aeac99167\") " pod="calico-system/calico-node-mvv2q" Apr 30 03:28:34.389701 kubelet[3229]: I0430 03:28:34.389312 3229 topology_manager.go:215] "Topology Admit Handler" podUID="c125f289-79ba-4045-ac98-3376fc26a663" podNamespace="calico-system" podName="csi-node-driver-jbvr9" Apr 30 03:28:34.390906 kubelet[3229]: E0430 03:28:34.390687 3229 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jbvr9" podUID="c125f289-79ba-4045-ac98-3376fc26a663" Apr 30 03:28:34.417119 containerd[1987]: time="2025-04-30T03:28:34.416170341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-766c846665-8m6ks,Uid:a1e42e16-140a-4044-801d-30e2d6bbb343,Namespace:calico-system,Attempt:0,}" Apr 30 03:28:34.432072 kubelet[3229]: I0430 03:28:34.431386 3229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c125f289-79ba-4045-ac98-3376fc26a663-kubelet-dir\") pod \"csi-node-driver-jbvr9\" (UID: \"c125f289-79ba-4045-ac98-3376fc26a663\") " pod="calico-system/csi-node-driver-jbvr9" Apr 30 03:28:34.432072 kubelet[3229]: I0430 03:28:34.431464 3229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c125f289-79ba-4045-ac98-3376fc26a663-registration-dir\") pod \"csi-node-driver-jbvr9\" (UID: \"c125f289-79ba-4045-ac98-3376fc26a663\") " pod="calico-system/csi-node-driver-jbvr9" Apr 30 03:28:34.432072 kubelet[3229]: I0430 03:28:34.431494 3229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c125f289-79ba-4045-ac98-3376fc26a663-socket-dir\") pod \"csi-node-driver-jbvr9\" (UID: \"c125f289-79ba-4045-ac98-3376fc26a663\") " pod="calico-system/csi-node-driver-jbvr9" Apr 30 03:28:34.432072 kubelet[3229]: I0430 03:28:34.431518 3229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zntlt\" (UniqueName: \"kubernetes.io/projected/c125f289-79ba-4045-ac98-3376fc26a663-kube-api-access-zntlt\") pod \"csi-node-driver-jbvr9\" (UID: \"c125f289-79ba-4045-ac98-3376fc26a663\") " pod="calico-system/csi-node-driver-jbvr9" Apr 30 03:28:34.432072 kubelet[3229]: 
I0430 03:28:34.431553 3229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c125f289-79ba-4045-ac98-3376fc26a663-varrun\") pod \"csi-node-driver-jbvr9\" (UID: \"c125f289-79ba-4045-ac98-3376fc26a663\") " pod="calico-system/csi-node-driver-jbvr9" Apr 30 03:28:34.443059 kubelet[3229]: E0430 03:28:34.441226 3229 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:34.443059 kubelet[3229]: W0430 03:28:34.441278 3229 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:34.443059 kubelet[3229]: E0430 03:28:34.441318 3229 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:34.443665 kubelet[3229]: E0430 03:28:34.443484 3229 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:34.443665 kubelet[3229]: W0430 03:28:34.443505 3229 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:34.449061 kubelet[3229]: E0430 03:28:34.445152 3229 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:34.451212 kubelet[3229]: E0430 03:28:34.450360 3229 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:34.451212 kubelet[3229]: W0430 03:28:34.450393 3229 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:34.451212 kubelet[3229]: E0430 03:28:34.450424 3229 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:34.473141 kubelet[3229]: E0430 03:28:34.473027 3229 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:34.473141 kubelet[3229]: W0430 03:28:34.473062 3229 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:34.473141 kubelet[3229]: E0430 03:28:34.473094 3229 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:34.495051 containerd[1987]: time="2025-04-30T03:28:34.493481288Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:28:34.495051 containerd[1987]: time="2025-04-30T03:28:34.493564899Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:28:34.495051 containerd[1987]: time="2025-04-30T03:28:34.493585575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:28:34.495051 containerd[1987]: time="2025-04-30T03:28:34.493704104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:28:34.532812 kubelet[3229]: E0430 03:28:34.532782 3229 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:34.533217 kubelet[3229]: W0430 03:28:34.532998 3229 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:34.533217 kubelet[3229]: E0430 03:28:34.533053 3229 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:34.534487 kubelet[3229]: E0430 03:28:34.534291 3229 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:34.534487 kubelet[3229]: W0430 03:28:34.534310 3229 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:34.534487 kubelet[3229]: E0430 03:28:34.534344 3229 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:34.535126 kubelet[3229]: E0430 03:28:34.534930 3229 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:34.535126 kubelet[3229]: W0430 03:28:34.534949 3229 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:34.535126 kubelet[3229]: E0430 03:28:34.534992 3229 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:34.535761 kubelet[3229]: E0430 03:28:34.535624 3229 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:34.535761 kubelet[3229]: W0430 03:28:34.535640 3229 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:34.535761 kubelet[3229]: E0430 03:28:34.535655 3229 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:28:34.536683 kubelet[3229]: E0430 03:28:34.536660 3229 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:34.536924 kubelet[3229]: W0430 03:28:34.536738 3229 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:34.536924 kubelet[3229]: E0430 03:28:34.536879 3229 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:34.537397 kubelet[3229]: E0430 03:28:34.537384 3229 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:34.537565 kubelet[3229]: W0430 03:28:34.537453 3229 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:34.537749 kubelet[3229]: E0430 03:28:34.537616 3229 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:34.538053 kubelet[3229]: E0430 03:28:34.538041 3229 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:34.538233 kubelet[3229]: W0430 03:28:34.538143 3229 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:34.539425 kubelet[3229]: E0430 03:28:34.538769 3229 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:34.539425 kubelet[3229]: E0430 03:28:34.538992 3229 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:34.539425 kubelet[3229]: W0430 03:28:34.539000 3229 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:34.539425 kubelet[3229]: E0430 03:28:34.539210 3229 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:34.539425 kubelet[3229]: E0430 03:28:34.539262 3229 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:34.539425 kubelet[3229]: W0430 03:28:34.539272 3229 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:34.539425 kubelet[3229]: E0430 03:28:34.539351 3229 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:28:34.540062 kubelet[3229]: E0430 03:28:34.539879 3229 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:34.540062 kubelet[3229]: W0430 03:28:34.539892 3229 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:34.540062 kubelet[3229]: E0430 03:28:34.540039 3229 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:34.540443 kubelet[3229]: E0430 03:28:34.540347 3229 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:34.540443 kubelet[3229]: W0430 03:28:34.540359 3229 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:34.540716 kubelet[3229]: E0430 03:28:34.540553 3229 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:34.540861 kubelet[3229]: E0430 03:28:34.540849 3229 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:34.541006 kubelet[3229]: W0430 03:28:34.540924 3229 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:34.541103 kubelet[3229]: E0430 03:28:34.541090 3229 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:34.541297 containerd[1987]: time="2025-04-30T03:28:34.541221792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mvv2q,Uid:d76778f1-83ce-4e9e-8d18-b99aeac99167,Namespace:calico-system,Attempt:0,}" Apr 30 03:28:34.541814 kubelet[3229]: E0430 03:28:34.541794 3229 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:34.545833 kubelet[3229]: W0430 03:28:34.544222 3229 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:34.547357 kubelet[3229]: E0430 03:28:34.547334 3229 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:34.547564 kubelet[3229]: W0430 03:28:34.547528 3229 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:34.549437 kubelet[3229]: E0430 03:28:34.549406 3229 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:28:34.550353 kubelet[3229]: E0430 03:28:34.549660 3229 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:34.552082 kubelet[3229]: E0430 03:28:34.552061 3229 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:34.552204 kubelet[3229]: W0430 03:28:34.552186 3229 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:34.557168 kubelet[3229]: E0430 03:28:34.557138 3229 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:34.558994 kubelet[3229]: E0430 03:28:34.558630 3229 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:34.559239 kubelet[3229]: W0430 03:28:34.559219 3229 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:34.566390 kubelet[3229]: E0430 03:28:34.566077 3229 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:34.568930 kubelet[3229]: E0430 03:28:34.568889 3229 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:34.577858 kubelet[3229]: W0430 03:28:34.577714 3229 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:34.584272 kubelet[3229]: E0430 03:28:34.583182 3229 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:34.584272 kubelet[3229]: W0430 03:28:34.583225 3229 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:34.586921 kubelet[3229]: E0430 03:28:34.584469 3229 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:34.586921 kubelet[3229]: W0430 03:28:34.584490 3229 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:34.586921 kubelet[3229]: E0430 03:28:34.585925 3229 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:34.586921 kubelet[3229]: W0430 03:28:34.585941 3229 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:34.586921 kubelet[3229]: E0430 03:28:34.585975 3229 plugins.go:730] "Error dynamically probing plugins" err="error creating 
Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:34.588401 kubelet[3229]: E0430 03:28:34.587791 3229 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:34.588401 kubelet[3229]: W0430 03:28:34.587812 3229 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:34.588401 kubelet[3229]: E0430 03:28:34.587846 3229 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:34.587910 systemd[1]: Started cri-containerd-3843b4d78cc59b211d01df91a522597019b8d865f5180524faf2f2b798f3b37b.scope - libcontainer container 3843b4d78cc59b211d01df91a522597019b8d865f5180524faf2f2b798f3b37b. Apr 30 03:28:34.590946 kubelet[3229]: E0430 03:28:34.589417 3229 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:34.590946 kubelet[3229]: W0430 03:28:34.589449 3229 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:34.590946 kubelet[3229]: E0430 03:28:34.589471 3229 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:34.590946 kubelet[3229]: E0430 03:28:34.590667 3229 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:34.592309 kubelet[3229]: E0430 03:28:34.591992 3229 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:34.592309 kubelet[3229]: W0430 03:28:34.592009 3229 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:34.592309 kubelet[3229]: E0430 03:28:34.592054 3229 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:34.592309 kubelet[3229]: E0430 03:28:34.592234 3229 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:34.592690 kubelet[3229]: E0430 03:28:34.592659 3229 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:28:34.594852 kubelet[3229]: E0430 03:28:34.593661 3229 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:34.594852 kubelet[3229]: W0430 03:28:34.593677 3229 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:34.594852 kubelet[3229]: E0430 03:28:34.593696 3229 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:34.606430 kubelet[3229]: E0430 03:28:34.599577 3229 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:34.609331 kubelet[3229]: W0430 03:28:34.606474 3229 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:34.609331 kubelet[3229]: E0430 03:28:34.606507 3229 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:34.622719 kubelet[3229]: E0430 03:28:34.622647 3229 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:34.622719 kubelet[3229]: W0430 03:28:34.622685 3229 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:34.622719 kubelet[3229]: E0430 03:28:34.622710 3229 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:34.680345 containerd[1987]: time="2025-04-30T03:28:34.680079083Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:28:34.681560 containerd[1987]: time="2025-04-30T03:28:34.681292195Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:28:34.681560 containerd[1987]: time="2025-04-30T03:28:34.681339026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:28:34.681560 containerd[1987]: time="2025-04-30T03:28:34.681452382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:28:34.713817 systemd[1]: Started cri-containerd-5879147ccfe811570ed1341506e5018fa7535398b1869bcc735322d50da9253a.scope - libcontainer container 5879147ccfe811570ed1341506e5018fa7535398b1869bcc735322d50da9253a. 
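[editor's note] The repeating driver-call failures above come in pairs with a single root cause: the FlexVolume binary expected at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds does not exist ("executable file not found in $PATH"), so each probe produces empty output, which then fails JSON decoding ("unexpected end of JSON input"). A FlexVolume driver's init call is expected to print a JSON status object on stdout; a hypothetical stand-in that would satisfy the probe (the field names follow the FlexVolume driver contract as I understand it):

// uds.go: hypothetical minimal FlexVolume driver. Drivers are invoked with a
// subcommand ("init", "mount", ...) and must emit a JSON status object; the
// kubelet's unmarshal error above is exactly what an empty output produces.
package main

import (
	"encoding/json"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		json.NewEncoder(os.Stdout).Encode(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		return
	}
	// Unimplemented calls report "Not supported" per the FlexVolume contract.
	json.NewEncoder(os.Stdout).Encode(driverStatus{Status: "Not supported"})
	os.Exit(1)
}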
Apr 30 03:28:34.780696 containerd[1987]: time="2025-04-30T03:28:34.780605865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-766c846665-8m6ks,Uid:a1e42e16-140a-4044-801d-30e2d6bbb343,Namespace:calico-system,Attempt:0,} returns sandbox id \"3843b4d78cc59b211d01df91a522597019b8d865f5180524faf2f2b798f3b37b\"" Apr 30 03:28:34.788752 containerd[1987]: time="2025-04-30T03:28:34.788714872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" Apr 30 03:28:34.792294 containerd[1987]: time="2025-04-30T03:28:34.792239180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mvv2q,Uid:d76778f1-83ce-4e9e-8d18-b99aeac99167,Namespace:calico-system,Attempt:0,} returns sandbox id \"5879147ccfe811570ed1341506e5018fa7535398b1869bcc735322d50da9253a\"" Apr 30 03:28:36.046591 kubelet[3229]: E0430 03:28:36.046543 3229 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jbvr9" podUID="c125f289-79ba-4045-ac98-3376fc26a663" Apr 30 03:28:37.163981 containerd[1987]: time="2025-04-30T03:28:37.163926393Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:37.165697 containerd[1987]: time="2025-04-30T03:28:37.165380874Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" Apr 30 03:28:37.166919 containerd[1987]: time="2025-04-30T03:28:37.166886091Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:37.170469 containerd[1987]: time="2025-04-30T03:28:37.169707103Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:37.170469 containerd[1987]: time="2025-04-30T03:28:37.170301412Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 2.379769692s" Apr 30 03:28:37.170469 containerd[1987]: time="2025-04-30T03:28:37.170338303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" Apr 30 03:28:37.173277 containerd[1987]: time="2025-04-30T03:28:37.173234916Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" Apr 30 03:28:37.190099 containerd[1987]: time="2025-04-30T03:28:37.190011293Z" level=info msg="CreateContainer within sandbox \"3843b4d78cc59b211d01df91a522597019b8d865f5180524faf2f2b798f3b37b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 30 03:28:37.219260 containerd[1987]: time="2025-04-30T03:28:37.219190343Z" level=info msg="CreateContainer within sandbox \"3843b4d78cc59b211d01df91a522597019b8d865f5180524faf2f2b798f3b37b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id 
\"9e2c6d9e6c692cb54ed2e27a4e262a17543cebbf91434ca50ea2328570e40dee\"" Apr 30 03:28:37.220142 containerd[1987]: time="2025-04-30T03:28:37.219952984Z" level=info msg="StartContainer for \"9e2c6d9e6c692cb54ed2e27a4e262a17543cebbf91434ca50ea2328570e40dee\"" Apr 30 03:28:37.278409 systemd[1]: Started cri-containerd-9e2c6d9e6c692cb54ed2e27a4e262a17543cebbf91434ca50ea2328570e40dee.scope - libcontainer container 9e2c6d9e6c692cb54ed2e27a4e262a17543cebbf91434ca50ea2328570e40dee. Apr 30 03:28:37.337249 containerd[1987]: time="2025-04-30T03:28:37.337197166Z" level=info msg="StartContainer for \"9e2c6d9e6c692cb54ed2e27a4e262a17543cebbf91434ca50ea2328570e40dee\" returns successfully" Apr 30 03:28:38.046195 kubelet[3229]: E0430 03:28:38.045236 3229 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jbvr9" podUID="c125f289-79ba-4045-ac98-3376fc26a663" Apr 30 03:28:38.180235 systemd[1]: run-containerd-runc-k8s.io-9e2c6d9e6c692cb54ed2e27a4e262a17543cebbf91434ca50ea2328570e40dee-runc.skoidG.mount: Deactivated successfully. Apr 30 03:28:38.184132 kubelet[3229]: I0430 03:28:38.184075 3229 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-766c846665-8m6ks" podStartSLOduration=1.796627955 podStartE2EDuration="4.184056759s" podCreationTimestamp="2025-04-30 03:28:34 +0000 UTC" firstStartedPulling="2025-04-30 03:28:34.784642102 +0000 UTC m=+24.867070792" lastFinishedPulling="2025-04-30 03:28:37.172070904 +0000 UTC m=+27.254499596" observedRunningTime="2025-04-30 03:28:38.182909614 +0000 UTC m=+28.265338324" watchObservedRunningTime="2025-04-30 03:28:38.184056759 +0000 UTC m=+28.266485466" Apr 30 03:28:38.244073 kubelet[3229]: E0430 03:28:38.243977 3229 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:38.244073 kubelet[3229]: W0430 03:28:38.244037 3229 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:38.244073 kubelet[3229]: E0430 03:28:38.244067 3229 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:38.244514 kubelet[3229]: E0430 03:28:38.244350 3229 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:38.244514 kubelet[3229]: W0430 03:28:38.244363 3229 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:38.244514 kubelet[3229]: E0430 03:28:38.244380 3229 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:28:38.300066 kubelet[3229]: E0430 03:28:38.299690 3229 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:28:38.300066 kubelet[3229]: W0430 03:28:38.299709 3229 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:28:38.300066 kubelet[3229]: E0430 03:28:38.299729 3229 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:28:38.629074 containerd[1987]: time="2025-04-30T03:28:38.628627477Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:38.630931 containerd[1987]: time="2025-04-30T03:28:38.630693778Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" Apr 30 03:28:38.632173 containerd[1987]: time="2025-04-30T03:28:38.632128376Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:38.636619 containerd[1987]: time="2025-04-30T03:28:38.636539178Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:38.639061 containerd[1987]: time="2025-04-30T03:28:38.638965341Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 1.465496407s" Apr 30 03:28:38.639737 containerd[1987]: time="2025-04-30T03:28:38.639689394Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" Apr 30 03:28:38.644357 containerd[1987]: time="2025-04-30T03:28:38.644306182Z" level=info msg="CreateContainer within sandbox \"5879147ccfe811570ed1341506e5018fa7535398b1869bcc735322d50da9253a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 30 03:28:38.676207 containerd[1987]: time="2025-04-30T03:28:38.676009986Z" level=info msg="CreateContainer within sandbox \"5879147ccfe811570ed1341506e5018fa7535398b1869bcc735322d50da9253a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5c8471e5697ad99bafa14d45de8a2211fe724878f85640e9fd353cbb30710225\"" Apr 30 03:28:38.677262 containerd[1987]: time="2025-04-30T03:28:38.676952822Z" level=info msg="StartContainer for \"5c8471e5697ad99bafa14d45de8a2211fe724878f85640e9fd353cbb30710225\"" Apr 30 03:28:38.744079 systemd[1]: run-containerd-runc-k8s.io-5c8471e5697ad99bafa14d45de8a2211fe724878f85640e9fd353cbb30710225-runc.LxKpGz.mount: Deactivated successfully. 
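[Editor's note: the burst of driver-call.go / plugins.go errors above has a single cause: the kubelet probes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds before Calico's flexvol-driver container (whose image pull completes just above) has installed the binary, so the exec fails with "executable file not found in $PATH", stdout is empty, and unmarshalling "" yields "unexpected end of JSON input". Below is a minimal sketch of the stdout contract a FlexVolume driver must honor; it is an illustration of the handshake, not Calico's actual uds driver.]

```go
// Minimal sketch of the FlexVolume driver contract that the
// kubelet's driver-call.go expects. NOT Calico's real uds driver;
// it only illustrates why an absent binary (empty stdout) produces
// "unexpected end of JSON input" in the log above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// DriverStatus mirrors the JSON object a FlexVolume driver must
// print to stdout on every invocation.
type DriverStatus struct {
	Status       string          `json:"status"` // "Success", "Failure", "Not supported"
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func reply(s DriverStatus) {
	out, _ := json.Marshal(s)
	fmt.Println(string(out))
}

func main() {
	if len(os.Args) < 2 {
		reply(DriverStatus{Status: "Failure", Message: "no command given"})
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// The probe seen in the log: the kubelet runs `<driver> init`
		// and parses this JSON before registering the plugin.
		reply(DriverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
	default:
		reply(DriverStatus{Status: "Not supported", Message: os.Args[1]})
	}
}
```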
Apr 30 03:28:38.754280 systemd[1]: Started cri-containerd-5c8471e5697ad99bafa14d45de8a2211fe724878f85640e9fd353cbb30710225.scope - libcontainer container 5c8471e5697ad99bafa14d45de8a2211fe724878f85640e9fd353cbb30710225. Apr 30 03:28:38.822585 containerd[1987]: time="2025-04-30T03:28:38.822324783Z" level=info msg="StartContainer for \"5c8471e5697ad99bafa14d45de8a2211fe724878f85640e9fd353cbb30710225\" returns successfully" Apr 30 03:28:38.846846 systemd[1]: cri-containerd-5c8471e5697ad99bafa14d45de8a2211fe724878f85640e9fd353cbb30710225.scope: Deactivated successfully. Apr 30 03:28:39.024907 containerd[1987]: time="2025-04-30T03:28:39.003899116Z" level=info msg="shim disconnected" id=5c8471e5697ad99bafa14d45de8a2211fe724878f85640e9fd353cbb30710225 namespace=k8s.io Apr 30 03:28:39.024907 containerd[1987]: time="2025-04-30T03:28:39.024902347Z" level=warning msg="cleaning up after shim disconnected" id=5c8471e5697ad99bafa14d45de8a2211fe724878f85640e9fd353cbb30710225 namespace=k8s.io Apr 30 03:28:39.024907 containerd[1987]: time="2025-04-30T03:28:39.024918963Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:28:39.039418 containerd[1987]: time="2025-04-30T03:28:39.039356411Z" level=warning msg="cleanup warnings time=\"2025-04-30T03:28:39Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 30 03:28:39.173084 kubelet[3229]: I0430 03:28:39.173055 3229 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:28:39.174887 containerd[1987]: time="2025-04-30T03:28:39.174819290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" Apr 30 03:28:39.182130 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5c8471e5697ad99bafa14d45de8a2211fe724878f85640e9fd353cbb30710225-rootfs.mount: Deactivated successfully. 
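[Editor's note: the pod_startup_latency_tracker entry for calico-typha-766c846665-8m6ks (logged at 03:28:38.184 above) is internally consistent: podStartSLOduration is the end-to-end startup duration minus the time spent pulling images, and both can be recomputed from the m=+ monotonic offsets in that entry. A minimal check using only the numbers from the log:]

```go
// Reproducing the calico-typha pod_startup_latency_tracker numbers
// (logged at 03:28:38.184 above) from the m=+ monotonic offsets:
// podStartSLOduration = podStartE2EDuration - image pull time.
package main

import "fmt"

func main() {
	const (
		firstStartedPulling = 24.867070792 // m=+ offset, seconds
		lastFinishedPulling = 27.254499596 // m=+ offset, seconds
		podStartE2E         = 4.184056759  // watchObservedRunningTime - podCreationTimestamp
	)
	pullTime := lastFinishedPulling - firstStartedPulling // 2.387428804s spent pulling
	fmt.Printf("podStartSLOduration=%.9f\n", podStartE2E-pullTime)
	// Output: podStartSLOduration=1.796627955  (matches the log entry)
}
```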
Apr 30 03:28:40.046715 kubelet[3229]: E0430 03:28:40.045701 3229 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jbvr9" podUID="c125f289-79ba-4045-ac98-3376fc26a663" Apr 30 03:28:42.045596 kubelet[3229]: E0430 03:28:42.045547 3229 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jbvr9" podUID="c125f289-79ba-4045-ac98-3376fc26a663" Apr 30 03:28:44.045374 kubelet[3229]: E0430 03:28:44.045325 3229 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jbvr9" podUID="c125f289-79ba-4045-ac98-3376fc26a663" Apr 30 03:28:44.379922 containerd[1987]: time="2025-04-30T03:28:44.379803724Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:44.381250 containerd[1987]: time="2025-04-30T03:28:44.381042361Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" Apr 30 03:28:44.382349 containerd[1987]: time="2025-04-30T03:28:44.382321445Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:44.395216 containerd[1987]: time="2025-04-30T03:28:44.395144101Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:44.396155 containerd[1987]: time="2025-04-30T03:28:44.396043644Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 5.221165123s" Apr 30 03:28:44.396155 containerd[1987]: time="2025-04-30T03:28:44.396076205Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" Apr 30 03:28:44.404000 containerd[1987]: time="2025-04-30T03:28:44.403921973Z" level=info msg="CreateContainer within sandbox \"5879147ccfe811570ed1341506e5018fa7535398b1869bcc735322d50da9253a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 30 03:28:44.423275 containerd[1987]: time="2025-04-30T03:28:44.423230773Z" level=info msg="CreateContainer within sandbox \"5879147ccfe811570ed1341506e5018fa7535398b1869bcc735322d50da9253a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"885c5135d32e361bc33cdee6ebb4cc70cde58f1c0b3155c490acb84d873b70b1\"" Apr 30 03:28:44.425571 containerd[1987]: time="2025-04-30T03:28:44.424104780Z" level=info msg="StartContainer for \"885c5135d32e361bc33cdee6ebb4cc70cde58f1c0b3155c490acb84d873b70b1\"" Apr 
30 03:28:44.491263 systemd[1]: Started cri-containerd-885c5135d32e361bc33cdee6ebb4cc70cde58f1c0b3155c490acb84d873b70b1.scope - libcontainer container 885c5135d32e361bc33cdee6ebb4cc70cde58f1c0b3155c490acb84d873b70b1. Apr 30 03:28:44.527575 containerd[1987]: time="2025-04-30T03:28:44.527306191Z" level=info msg="StartContainer for \"885c5135d32e361bc33cdee6ebb4cc70cde58f1c0b3155c490acb84d873b70b1\" returns successfully" Apr 30 03:28:45.610954 systemd[1]: cri-containerd-885c5135d32e361bc33cdee6ebb4cc70cde58f1c0b3155c490acb84d873b70b1.scope: Deactivated successfully. Apr 30 03:28:45.648243 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-885c5135d32e361bc33cdee6ebb4cc70cde58f1c0b3155c490acb84d873b70b1-rootfs.mount: Deactivated successfully. Apr 30 03:28:45.656902 containerd[1987]: time="2025-04-30T03:28:45.656576432Z" level=info msg="shim disconnected" id=885c5135d32e361bc33cdee6ebb4cc70cde58f1c0b3155c490acb84d873b70b1 namespace=k8s.io Apr 30 03:28:45.656902 containerd[1987]: time="2025-04-30T03:28:45.656742012Z" level=warning msg="cleaning up after shim disconnected" id=885c5135d32e361bc33cdee6ebb4cc70cde58f1c0b3155c490acb84d873b70b1 namespace=k8s.io Apr 30 03:28:45.656902 containerd[1987]: time="2025-04-30T03:28:45.656755086Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:28:45.681165 kubelet[3229]: I0430 03:28:45.680284 3229 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Apr 30 03:28:45.737310 kubelet[3229]: I0430 03:28:45.737253 3229 topology_manager.go:215] "Topology Admit Handler" podUID="55fd6c70-afa8-486e-b82a-44a54b2e3758" podNamespace="kube-system" podName="coredns-7db6d8ff4d-w297d" Apr 30 03:28:45.747231 kubelet[3229]: W0430 03:28:45.747198 3229 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-16-5" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-16-5' and this object Apr 30 03:28:45.747359 kubelet[3229]: E0430 03:28:45.747242 3229 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-16-5" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-16-5' and this object Apr 30 03:28:45.749338 systemd[1]: Created slice kubepods-burstable-pod55fd6c70_afa8_486e_b82a_44a54b2e3758.slice - libcontainer container kubepods-burstable-pod55fd6c70_afa8_486e_b82a_44a54b2e3758.slice. Apr 30 03:28:45.754499 kubelet[3229]: I0430 03:28:45.754462 3229 topology_manager.go:215] "Topology Admit Handler" podUID="aa805bd1-beeb-4584-9d9d-3007469a5975" podNamespace="kube-system" podName="coredns-7db6d8ff4d-t7qqn" Apr 30 03:28:45.760595 kubelet[3229]: I0430 03:28:45.760336 3229 topology_manager.go:215] "Topology Admit Handler" podUID="2ca2a0d4-a8bb-4ffb-bf87-3cfe5b949605" podNamespace="calico-apiserver" podName="calico-apiserver-6c489dd647-zhfwb" Apr 30 03:28:45.762915 kubelet[3229]: I0430 03:28:45.762868 3229 topology_manager.go:215] "Topology Admit Handler" podUID="09ed9b89-1a51-4296-b830-803e57059495" podNamespace="calico-apiserver" podName="calico-apiserver-6c489dd647-9ccl9" Apr 30 03:28:45.763556 systemd[1]: Created slice kubepods-burstable-podaa805bd1_beeb_4584_9d9d_3007469a5975.slice - libcontainer container kubepods-burstable-podaa805bd1_beeb_4584_9d9d_3007469a5975.slice. 
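[Editor's note: the "Created slice" unit names above and below encode each pod's QoS class and UID. Because '-' is the hierarchy separator in systemd slice names, the kubelet's systemd cgroup driver escapes the dashes in the UID to underscores. A hypothetical helper (not kubelet source) that reproduces the names seen in this log:]

```go
// Hypothetical helper showing how the kubepods-*.slice names in
// this log follow from QoS class + pod UID, with '-' escaped to
// '_' ('-' separates parent slices in systemd unit names).
package main

import (
	"fmt"
	"strings"
)

func podSlice(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice",
		qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	fmt.Println(podSlice("burstable", "55fd6c70-afa8-486e-b82a-44a54b2e3758"))
	// kubepods-burstable-pod55fd6c70_afa8_486e_b82a_44a54b2e3758.slice
	fmt.Println(podSlice("besteffort", "6a4e6e55-d984-4974-8642-752d6712e827"))
	// kubepods-besteffort-pod6a4e6e55_d984_4974_8642_752d6712e827.slice
}
```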
Apr 30 03:28:45.769978 kubelet[3229]: I0430 03:28:45.769005 3229 topology_manager.go:215] "Topology Admit Handler" podUID="6a4e6e55-d984-4974-8642-752d6712e827" podNamespace="calico-system" podName="calico-kube-controllers-589dd46bc6-rd5jw" Apr 30 03:28:45.778104 systemd[1]: Created slice kubepods-besteffort-pod2ca2a0d4_a8bb_4ffb_bf87_3cfe5b949605.slice - libcontainer container kubepods-besteffort-pod2ca2a0d4_a8bb_4ffb_bf87_3cfe5b949605.slice. Apr 30 03:28:45.787620 systemd[1]: Created slice kubepods-besteffort-pod09ed9b89_1a51_4296_b830_803e57059495.slice - libcontainer container kubepods-besteffort-pod09ed9b89_1a51_4296_b830_803e57059495.slice. Apr 30 03:28:45.791924 systemd[1]: Created slice kubepods-besteffort-pod6a4e6e55_d984_4974_8642_752d6712e827.slice - libcontainer container kubepods-besteffort-pod6a4e6e55_d984_4974_8642_752d6712e827.slice. Apr 30 03:28:45.840877 kubelet[3229]: I0430 03:28:45.840613 3229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/09ed9b89-1a51-4296-b830-803e57059495-calico-apiserver-certs\") pod \"calico-apiserver-6c489dd647-9ccl9\" (UID: \"09ed9b89-1a51-4296-b830-803e57059495\") " pod="calico-apiserver/calico-apiserver-6c489dd647-9ccl9" Apr 30 03:28:45.840877 kubelet[3229]: I0430 03:28:45.840756 3229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n2th\" (UniqueName: \"kubernetes.io/projected/09ed9b89-1a51-4296-b830-803e57059495-kube-api-access-9n2th\") pod \"calico-apiserver-6c489dd647-9ccl9\" (UID: \"09ed9b89-1a51-4296-b830-803e57059495\") " pod="calico-apiserver/calico-apiserver-6c489dd647-9ccl9" Apr 30 03:28:45.840877 kubelet[3229]: I0430 03:28:45.840793 3229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa805bd1-beeb-4584-9d9d-3007469a5975-config-volume\") pod \"coredns-7db6d8ff4d-t7qqn\" (UID: \"aa805bd1-beeb-4584-9d9d-3007469a5975\") " pod="kube-system/coredns-7db6d8ff4d-t7qqn" Apr 30 03:28:45.840877 kubelet[3229]: I0430 03:28:45.840820 3229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjl9q\" (UniqueName: \"kubernetes.io/projected/2ca2a0d4-a8bb-4ffb-bf87-3cfe5b949605-kube-api-access-vjl9q\") pod \"calico-apiserver-6c489dd647-zhfwb\" (UID: \"2ca2a0d4-a8bb-4ffb-bf87-3cfe5b949605\") " pod="calico-apiserver/calico-apiserver-6c489dd647-zhfwb" Apr 30 03:28:45.840877 kubelet[3229]: I0430 03:28:45.840847 3229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a4e6e55-d984-4974-8642-752d6712e827-tigera-ca-bundle\") pod \"calico-kube-controllers-589dd46bc6-rd5jw\" (UID: \"6a4e6e55-d984-4974-8642-752d6712e827\") " pod="calico-system/calico-kube-controllers-589dd46bc6-rd5jw" Apr 30 03:28:45.841265 kubelet[3229]: I0430 03:28:45.840892 3229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k6j2\" (UniqueName: \"kubernetes.io/projected/6a4e6e55-d984-4974-8642-752d6712e827-kube-api-access-5k6j2\") pod \"calico-kube-controllers-589dd46bc6-rd5jw\" (UID: \"6a4e6e55-d984-4974-8642-752d6712e827\") " pod="calico-system/calico-kube-controllers-589dd46bc6-rd5jw" Apr 30 03:28:45.841265 kubelet[3229]: I0430 03:28:45.840952 3229 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/55fd6c70-afa8-486e-b82a-44a54b2e3758-config-volume\") pod \"coredns-7db6d8ff4d-w297d\" (UID: \"55fd6c70-afa8-486e-b82a-44a54b2e3758\") " pod="kube-system/coredns-7db6d8ff4d-w297d" Apr 30 03:28:45.841265 kubelet[3229]: I0430 03:28:45.841006 3229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsdk7\" (UniqueName: \"kubernetes.io/projected/55fd6c70-afa8-486e-b82a-44a54b2e3758-kube-api-access-wsdk7\") pod \"coredns-7db6d8ff4d-w297d\" (UID: \"55fd6c70-afa8-486e-b82a-44a54b2e3758\") " pod="kube-system/coredns-7db6d8ff4d-w297d" Apr 30 03:28:45.841265 kubelet[3229]: I0430 03:28:45.841058 3229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8s9v\" (UniqueName: \"kubernetes.io/projected/aa805bd1-beeb-4584-9d9d-3007469a5975-kube-api-access-l8s9v\") pod \"coredns-7db6d8ff4d-t7qqn\" (UID: \"aa805bd1-beeb-4584-9d9d-3007469a5975\") " pod="kube-system/coredns-7db6d8ff4d-t7qqn" Apr 30 03:28:45.841265 kubelet[3229]: I0430 03:28:45.841083 3229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2ca2a0d4-a8bb-4ffb-bf87-3cfe5b949605-calico-apiserver-certs\") pod \"calico-apiserver-6c489dd647-zhfwb\" (UID: \"2ca2a0d4-a8bb-4ffb-bf87-3cfe5b949605\") " pod="calico-apiserver/calico-apiserver-6c489dd647-zhfwb" Apr 30 03:28:46.053985 systemd[1]: Created slice kubepods-besteffort-podc125f289_79ba_4045_ac98_3376fc26a663.slice - libcontainer container kubepods-besteffort-podc125f289_79ba_4045_ac98_3376fc26a663.slice. Apr 30 03:28:46.056933 containerd[1987]: time="2025-04-30T03:28:46.056884575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jbvr9,Uid:c125f289-79ba-4045-ac98-3376fc26a663,Namespace:calico-system,Attempt:0,}" Apr 30 03:28:46.087089 containerd[1987]: time="2025-04-30T03:28:46.085279501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c489dd647-zhfwb,Uid:2ca2a0d4-a8bb-4ffb-bf87-3cfe5b949605,Namespace:calico-apiserver,Attempt:0,}" Apr 30 03:28:46.116766 containerd[1987]: time="2025-04-30T03:28:46.115873290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-589dd46bc6-rd5jw,Uid:6a4e6e55-d984-4974-8642-752d6712e827,Namespace:calico-system,Attempt:0,}" Apr 30 03:28:46.135626 containerd[1987]: time="2025-04-30T03:28:46.135076476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c489dd647-9ccl9,Uid:09ed9b89-1a51-4296-b830-803e57059495,Namespace:calico-apiserver,Attempt:0,}" Apr 30 03:28:46.179324 systemd[1]: Started sshd@7-172.31.16.5:22-147.75.109.163:39540.service - OpenSSH per-connection server daemon (147.75.109.163:39540). 
Apr 30 03:28:46.196313 containerd[1987]: time="2025-04-30T03:28:46.196284099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" Apr 30 03:28:46.510842 sshd[4247]: Accepted publickey for core from 147.75.109.163 port 39540 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:28:46.514818 containerd[1987]: time="2025-04-30T03:28:46.514684997Z" level=error msg="Failed to destroy network for sandbox \"4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:28:46.524151 sshd[4247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:28:46.528359 containerd[1987]: time="2025-04-30T03:28:46.528143760Z" level=error msg="Failed to destroy network for sandbox \"5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:28:46.531106 containerd[1987]: time="2025-04-30T03:28:46.530210745Z" level=error msg="encountered an error cleaning up failed sandbox \"5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:28:46.531106 containerd[1987]: time="2025-04-30T03:28:46.530301829Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jbvr9,Uid:c125f289-79ba-4045-ac98-3376fc26a663,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:28:46.531675 containerd[1987]: time="2025-04-30T03:28:46.531559212Z" level=error msg="encountered an error cleaning up failed sandbox \"4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:28:46.531675 containerd[1987]: time="2025-04-30T03:28:46.531636162Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c489dd647-9ccl9,Uid:09ed9b89-1a51-4296-b830-803e57059495,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:28:46.534483 systemd-logind[1956]: New session 8 of user core. Apr 30 03:28:46.542000 systemd[1]: Started session-8.scope - Session 8 of User core. 
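[Editor's note: every sandbox failure below carries the same root cause in its error string: the Calico CNI plugin stats /var/lib/calico/nodename, a file that the calico/node container (whose image pull starts in the PullImage entry above) writes once it is running, and it refuses every pod network setup and teardown until the file exists. A reconstruction of that gate from the error text itself; not Calico's actual source:]

```go
// Reconstruction (not Calico source) of the readiness gate behind
// the repeated sandbox errors in this log: the calico CNI plugin
// stats /var/lib/calico/nodename, written by calico/node at
// startup, and fails every CNI ADD/DEL until the file exists.
package main

import (
	"fmt"
	"os"
)

const nodenameFile = "/var/lib/calico/nodename"

func ensureCalicoNodeRunning() error {
	if _, err := os.Stat(nodenameFile); err != nil {
		return fmt.Errorf("stat %s: %w: check that the calico/node container "+
			"is running and has mounted /var/lib/calico/", nodenameFile, err)
	}
	return nil
}

func main() {
	if err := ensureCalicoNodeRunning(); err != nil {
		// The same message is threaded through every kubelet
		// RunPodSandbox/StopPodSandbox error above and below.
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```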
Apr 30 03:28:46.557042 containerd[1987]: time="2025-04-30T03:28:46.547966176Z" level=error msg="Failed to destroy network for sandbox \"b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:28:46.557042 containerd[1987]: time="2025-04-30T03:28:46.548351431Z" level=error msg="encountered an error cleaning up failed sandbox \"b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:28:46.557042 containerd[1987]: time="2025-04-30T03:28:46.548411600Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-589dd46bc6-rd5jw,Uid:6a4e6e55-d984-4974-8642-752d6712e827,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:28:46.557444 kubelet[3229]: E0430 03:28:46.553132 3229 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:28:46.557444 kubelet[3229]: E0430 03:28:46.553223 3229 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-589dd46bc6-rd5jw" Apr 30 03:28:46.557444 kubelet[3229]: E0430 03:28:46.553251 3229 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-589dd46bc6-rd5jw" Apr 30 03:28:46.557635 kubelet[3229]: E0430 03:28:46.553304 3229 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-589dd46bc6-rd5jw_calico-system(6a4e6e55-d984-4974-8642-752d6712e827)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-589dd46bc6-rd5jw_calico-system(6a4e6e55-d984-4974-8642-752d6712e827)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-589dd46bc6-rd5jw" podUID="6a4e6e55-d984-4974-8642-752d6712e827" Apr 30 03:28:46.557635 kubelet[3229]: E0430 03:28:46.553576 3229 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:28:46.557635 kubelet[3229]: E0430 03:28:46.553616 3229 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jbvr9" Apr 30 03:28:46.557852 kubelet[3229]: E0430 03:28:46.553642 3229 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jbvr9" Apr 30 03:28:46.557852 kubelet[3229]: E0430 03:28:46.553682 3229 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jbvr9_calico-system(c125f289-79ba-4045-ac98-3376fc26a663)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jbvr9_calico-system(c125f289-79ba-4045-ac98-3376fc26a663)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jbvr9" podUID="c125f289-79ba-4045-ac98-3376fc26a663" Apr 30 03:28:46.557852 kubelet[3229]: E0430 03:28:46.553733 3229 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:28:46.558437 kubelet[3229]: E0430 03:28:46.553757 3229 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c489dd647-9ccl9" Apr 30 03:28:46.558437 kubelet[3229]: E0430 03:28:46.553775 3229 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c489dd647-9ccl9" Apr 30 03:28:46.558437 kubelet[3229]: E0430 03:28:46.553807 3229 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c489dd647-9ccl9_calico-apiserver(09ed9b89-1a51-4296-b830-803e57059495)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c489dd647-9ccl9_calico-apiserver(09ed9b89-1a51-4296-b830-803e57059495)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c489dd647-9ccl9" podUID="09ed9b89-1a51-4296-b830-803e57059495" Apr 30 03:28:46.570291 containerd[1987]: time="2025-04-30T03:28:46.570245537Z" level=error msg="Failed to destroy network for sandbox \"95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:28:46.572070 containerd[1987]: time="2025-04-30T03:28:46.570652289Z" level=error msg="encountered an error cleaning up failed sandbox \"95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:28:46.572070 containerd[1987]: time="2025-04-30T03:28:46.570727416Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c489dd647-zhfwb,Uid:2ca2a0d4-a8bb-4ffb-bf87-3cfe5b949605,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:28:46.572272 kubelet[3229]: E0430 03:28:46.571010 3229 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:28:46.572272 kubelet[3229]: E0430 03:28:46.571094 3229 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c489dd647-zhfwb" Apr 30 03:28:46.572272 kubelet[3229]: E0430 03:28:46.571152 
3229 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c489dd647-zhfwb" Apr 30 03:28:46.572488 kubelet[3229]: E0430 03:28:46.571228 3229 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c489dd647-zhfwb_calico-apiserver(2ca2a0d4-a8bb-4ffb-bf87-3cfe5b949605)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c489dd647-zhfwb_calico-apiserver(2ca2a0d4-a8bb-4ffb-bf87-3cfe5b949605)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c489dd647-zhfwb" podUID="2ca2a0d4-a8bb-4ffb-bf87-3cfe5b949605" Apr 30 03:28:46.649937 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf-shm.mount: Deactivated successfully. Apr 30 03:28:46.846199 sshd[4247]: pam_unix(sshd:session): session closed for user core Apr 30 03:28:46.851182 systemd-logind[1956]: Session 8 logged out. Waiting for processes to exit. Apr 30 03:28:46.851654 systemd[1]: sshd@7-172.31.16.5:22-147.75.109.163:39540.service: Deactivated successfully. Apr 30 03:28:46.853697 systemd[1]: session-8.scope: Deactivated successfully. Apr 30 03:28:46.855068 systemd-logind[1956]: Removed session 8. Apr 30 03:28:46.944757 kubelet[3229]: E0430 03:28:46.944323 3229 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Apr 30 03:28:46.944757 kubelet[3229]: E0430 03:28:46.944340 3229 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Apr 30 03:28:46.944757 kubelet[3229]: E0430 03:28:46.944421 3229 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/55fd6c70-afa8-486e-b82a-44a54b2e3758-config-volume podName:55fd6c70-afa8-486e-b82a-44a54b2e3758 nodeName:}" failed. No retries permitted until 2025-04-30 03:28:47.444402486 +0000 UTC m=+37.526831177 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/55fd6c70-afa8-486e-b82a-44a54b2e3758-config-volume") pod "coredns-7db6d8ff4d-w297d" (UID: "55fd6c70-afa8-486e-b82a-44a54b2e3758") : failed to sync configmap cache: timed out waiting for the condition Apr 30 03:28:46.944757 kubelet[3229]: E0430 03:28:46.944436 3229 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/aa805bd1-beeb-4584-9d9d-3007469a5975-config-volume podName:aa805bd1-beeb-4584-9d9d-3007469a5975 nodeName:}" failed. No retries permitted until 2025-04-30 03:28:47.444429713 +0000 UTC m=+37.526858402 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/aa805bd1-beeb-4584-9d9d-3007469a5975-config-volume") pod "coredns-7db6d8ff4d-t7qqn" (UID: "aa805bd1-beeb-4584-9d9d-3007469a5975") : failed to sync configmap cache: timed out waiting for the condition
Apr 30 03:28:47.197913 kubelet[3229]: I0430 03:28:47.197796 3229 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a"
Apr 30 03:28:47.201130 kubelet[3229]: I0430 03:28:47.201099 3229 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e"
Apr 30 03:28:47.207412 containerd[1987]: time="2025-04-30T03:28:47.206944319Z" level=info msg="StopPodSandbox for \"b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a\""
Apr 30 03:28:47.209060 containerd[1987]: time="2025-04-30T03:28:47.208134625Z" level=info msg="StopPodSandbox for \"4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e\""
Apr 30 03:28:47.209724 containerd[1987]: time="2025-04-30T03:28:47.209382288Z" level=info msg="Ensure that sandbox 4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e in task-service has been cleanup successfully"
Apr 30 03:28:47.209976 containerd[1987]: time="2025-04-30T03:28:47.209946307Z" level=info msg="Ensure that sandbox b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a in task-service has been cleanup successfully"
Apr 30 03:28:47.214250 kubelet[3229]: I0430 03:28:47.213597 3229 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364"
Apr 30 03:28:47.214687 containerd[1987]: time="2025-04-30T03:28:47.214629096Z" level=info msg="StopPodSandbox for \"95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364\""
Apr 30 03:28:47.215136 containerd[1987]: time="2025-04-30T03:28:47.215094153Z" level=info msg="Ensure that sandbox 95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364 in task-service has been cleanup successfully"
Apr 30 03:28:47.218074 kubelet[3229]: I0430 03:28:47.218046 3229 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf"
Apr 30 03:28:47.219214 containerd[1987]: time="2025-04-30T03:28:47.219180579Z" level=info msg="StopPodSandbox for \"5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf\""
Apr 30 03:28:47.219568 containerd[1987]: time="2025-04-30T03:28:47.219540514Z" level=info msg="Ensure that sandbox 5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf in task-service has been cleanup successfully"
Apr 30 03:28:47.287616 containerd[1987]: time="2025-04-30T03:28:47.287443325Z" level=error msg="StopPodSandbox for \"5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf\" failed" error="failed to destroy network for sandbox \"5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:28:47.288465 kubelet[3229]: E0430 03:28:47.288402 3229 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf"
Apr 30 03:28:47.288584 kubelet[3229]: E0430 03:28:47.288470 3229 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf"}
Apr 30 03:28:47.288584 kubelet[3229]: E0430 03:28:47.288544 3229 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c125f289-79ba-4045-ac98-3376fc26a663\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Apr 30 03:28:47.289367 kubelet[3229]: E0430 03:28:47.288572 3229 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c125f289-79ba-4045-ac98-3376fc26a663\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jbvr9" podUID="c125f289-79ba-4045-ac98-3376fc26a663"
Apr 30 03:28:47.313496 containerd[1987]: time="2025-04-30T03:28:47.312099489Z" level=error msg="StopPodSandbox for \"4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e\" failed" error="failed to destroy network for sandbox \"4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:28:47.321044 containerd[1987]: time="2025-04-30T03:28:47.319172069Z" level=error msg="StopPodSandbox for \"b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a\" failed" error="failed to destroy network for sandbox \"b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:28:47.321202 kubelet[3229]: E0430 03:28:47.319401 3229 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a"
Apr 30 03:28:47.321202 kubelet[3229]: E0430 03:28:47.319450 3229 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a"}
Apr 30 03:28:47.321202 kubelet[3229]: E0430 03:28:47.319498 3229 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6a4e6e55-d984-4974-8642-752d6712e827\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Apr 30 03:28:47.321202 kubelet[3229]: E0430 03:28:47.319528 3229 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6a4e6e55-d984-4974-8642-752d6712e827\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-589dd46bc6-rd5jw" podUID="6a4e6e55-d984-4974-8642-752d6712e827"
Apr 30 03:28:47.321506 kubelet[3229]: E0430 03:28:47.320607 3229 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e"
Apr 30 03:28:47.321506 kubelet[3229]: E0430 03:28:47.320735 3229 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e"}
Apr 30 03:28:47.321506 kubelet[3229]: E0430 03:28:47.320980 3229 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"09ed9b89-1a51-4296-b830-803e57059495\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Apr 30 03:28:47.321506 kubelet[3229]: E0430 03:28:47.321045 3229 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"09ed9b89-1a51-4296-b830-803e57059495\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c489dd647-9ccl9" podUID="09ed9b89-1a51-4296-b830-803e57059495"
Apr 30 03:28:47.335982 containerd[1987]: time="2025-04-30T03:28:47.335906504Z" level=error msg="StopPodSandbox for \"95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364\" failed" error="failed to destroy network for sandbox \"95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:28:47.336488 kubelet[3229]: E0430 03:28:47.336428 3229 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364"
Apr 30 03:28:47.336713 kubelet[3229]: E0430 03:28:47.336682 3229 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364"}
Apr 30 03:28:47.336837 kubelet[3229]: E0430 03:28:47.336820 3229 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2ca2a0d4-a8bb-4ffb-bf87-3cfe5b949605\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Apr 30 03:28:47.336998 kubelet[3229]: E0430 03:28:47.336975 3229 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2ca2a0d4-a8bb-4ffb-bf87-3cfe5b949605\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c489dd647-zhfwb" podUID="2ca2a0d4-a8bb-4ffb-bf87-3cfe5b949605"
Apr 30 03:28:47.562054 containerd[1987]: time="2025-04-30T03:28:47.561585312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-w297d,Uid:55fd6c70-afa8-486e-b82a-44a54b2e3758,Namespace:kube-system,Attempt:0,}"
Apr 30 03:28:47.595760 containerd[1987]: time="2025-04-30T03:28:47.594646149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-t7qqn,Uid:aa805bd1-beeb-4584-9d9d-3007469a5975,Namespace:kube-system,Attempt:0,}"
Apr 30 03:28:47.792051 containerd[1987]: time="2025-04-30T03:28:47.791972300Z" level=error msg="Failed to destroy network for sandbox \"d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:28:47.795044 containerd[1987]: time="2025-04-30T03:28:47.793262287Z" level=error msg="encountered an error cleaning up failed sandbox \"d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:28:47.795551 containerd[1987]: time="2025-04-30T03:28:47.795505552Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-w297d,Uid:55fd6c70-afa8-486e-b82a-44a54b2e3758,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:28:47.797639 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0-shm.mount: Deactivated successfully.
Apr 30 03:28:47.802700 kubelet[3229]: E0430 03:28:47.798193 3229 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:28:47.802700 kubelet[3229]: E0430 03:28:47.798292 3229 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-w297d"
Apr 30 03:28:47.802700 kubelet[3229]: E0430 03:28:47.798497 3229 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-w297d"
Apr 30 03:28:47.806050 kubelet[3229]: E0430 03:28:47.803593 3229 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-w297d_kube-system(55fd6c70-afa8-486e-b82a-44a54b2e3758)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-w297d_kube-system(55fd6c70-afa8-486e-b82a-44a54b2e3758)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-w297d" podUID="55fd6c70-afa8-486e-b82a-44a54b2e3758"
Apr 30 03:28:47.808046 containerd[1987]: time="2025-04-30T03:28:47.807984333Z" level=error msg="Failed to destroy network for sandbox \"263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:28:47.810434 containerd[1987]: time="2025-04-30T03:28:47.810387639Z" level=error msg="encountered an error cleaning up failed sandbox \"263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:28:47.812698 containerd[1987]: time="2025-04-30T03:28:47.810696005Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-t7qqn,Uid:aa805bd1-beeb-4584-9d9d-3007469a5975,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:28:47.812811 kubelet[3229]: E0430 03:28:47.810919 3229 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:28:47.812811 kubelet[3229]: E0430 03:28:47.810985 3229 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-t7qqn"
Apr 30 03:28:47.812811 kubelet[3229]: E0430 03:28:47.811010 3229 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-t7qqn"
Apr 30 03:28:47.812989 kubelet[3229]: E0430 03:28:47.811083 3229 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-t7qqn_kube-system(aa805bd1-beeb-4584-9d9d-3007469a5975)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-t7qqn_kube-system(aa805bd1-beeb-4584-9d9d-3007469a5975)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-t7qqn" podUID="aa805bd1-beeb-4584-9d9d-3007469a5975"
Apr 30 03:28:47.814467 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20-shm.mount: Deactivated successfully.
Apr 30 03:28:48.221765 kubelet[3229]: I0430 03:28:48.221670 3229 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20"
Apr 30 03:28:48.226040 containerd[1987]: time="2025-04-30T03:28:48.223444256Z" level=info msg="StopPodSandbox for \"263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20\""
Apr 30 03:28:48.226040 containerd[1987]: time="2025-04-30T03:28:48.223690884Z" level=info msg="Ensure that sandbox 263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20 in task-service has been cleanup successfully"
Apr 30 03:28:48.226040 containerd[1987]: time="2025-04-30T03:28:48.224993866Z" level=info msg="StopPodSandbox for \"d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0\""
Apr 30 03:28:48.226040 containerd[1987]: time="2025-04-30T03:28:48.225649412Z" level=info msg="Ensure that sandbox d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0 in task-service has been cleanup successfully"
Apr 30 03:28:48.226494 kubelet[3229]: I0430 03:28:48.224219 3229 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0"
Apr 30 03:28:48.296407 containerd[1987]: time="2025-04-30T03:28:48.295421889Z" level=error msg="StopPodSandbox for \"d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0\" failed" error="failed to destroy network for sandbox \"d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:28:48.297035 kubelet[3229]: E0430 03:28:48.296834 3229 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0"
Apr 30 03:28:48.297035 kubelet[3229]: E0430 03:28:48.296893 3229 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0"}
Apr 30 03:28:48.297035 kubelet[3229]: E0430 03:28:48.296938 3229 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"55fd6c70-afa8-486e-b82a-44a54b2e3758\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Apr 30 03:28:48.297035 kubelet[3229]: E0430 03:28:48.296969 3229 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"55fd6c70-afa8-486e-b82a-44a54b2e3758\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-w297d" podUID="55fd6c70-afa8-486e-b82a-44a54b2e3758"
Apr 30 03:28:48.301504 containerd[1987]: time="2025-04-30T03:28:48.301453301Z" level=error msg="StopPodSandbox for \"263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20\" failed" error="failed to destroy network for sandbox \"263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:28:48.301712 kubelet[3229]: E0430 03:28:48.301677 3229 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20"
Apr 30 03:28:48.301837 kubelet[3229]: E0430 03:28:48.301726 3229 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20"}
Apr 30 03:28:48.301837 kubelet[3229]: E0430 03:28:48.301782 3229 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"aa805bd1-beeb-4584-9d9d-3007469a5975\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Apr 30 03:28:48.302146 kubelet[3229]: E0430 03:28:48.301826 3229 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"aa805bd1-beeb-4584-9d9d-3007469a5975\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-t7qqn" podUID="aa805bd1-beeb-4584-9d9d-3007469a5975"
Apr 30 03:28:51.900422 systemd[1]: Started sshd@8-172.31.16.5:22-147.75.109.163:57034.service - OpenSSH per-connection server daemon (147.75.109.163:57034).
Apr 30 03:28:52.228768 sshd[4531]: Accepted publickey for core from 147.75.109.163 port 57034 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g
Apr 30 03:28:52.232009 sshd[4531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:28:52.239623 systemd-logind[1956]: New session 9 of user core.
Apr 30 03:28:52.257337 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 30 03:28:52.634358 sshd[4531]: pam_unix(sshd:session): session closed for user core
Apr 30 03:28:52.640154 systemd[1]: sshd@8-172.31.16.5:22-147.75.109.163:57034.service: Deactivated successfully.
Apr 30 03:28:52.644793 systemd[1]: session-9.scope: Deactivated successfully.
Apr 30 03:28:52.647669 systemd-logind[1956]: Session 9 logged out. Waiting for processes to exit.
Apr 30 03:28:52.650189 systemd-logind[1956]: Removed session 9.
Apr 30 03:28:53.883557 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount168339195.mount: Deactivated successfully.
Apr 30 03:28:54.000713 containerd[1987]: time="2025-04-30T03:28:54.000526618Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:28:54.001707 containerd[1987]: time="2025-04-30T03:28:54.001655469Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748"
Apr 30 03:28:54.048986 containerd[1987]: time="2025-04-30T03:28:54.048939157Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:28:54.053462 containerd[1987]: time="2025-04-30T03:28:54.053408476Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:28:54.058714 containerd[1987]: time="2025-04-30T03:28:54.058643444Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 7.857313179s"
Apr 30 03:28:54.058714 containerd[1987]: time="2025-04-30T03:28:54.058703211Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\""
Apr 30 03:28:54.166616 containerd[1987]: time="2025-04-30T03:28:54.166486875Z" level=info msg="CreateContainer within sandbox \"5879147ccfe811570ed1341506e5018fa7535398b1869bcc735322d50da9253a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Apr 30 03:28:54.251818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1255957137.mount: Deactivated successfully.
Apr 30 03:28:54.265305 containerd[1987]: time="2025-04-30T03:28:54.265243873Z" level=info msg="CreateContainer within sandbox \"5879147ccfe811570ed1341506e5018fa7535398b1869bcc735322d50da9253a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"68f19b46beab2b56c8009c1855f68f38b8269abc356d1ffc6e49a97a2b50bf50\""
Apr 30 03:28:54.274340 containerd[1987]: time="2025-04-30T03:28:54.274294744Z" level=info msg="StartContainer for \"68f19b46beab2b56c8009c1855f68f38b8269abc356d1ffc6e49a97a2b50bf50\""
Apr 30 03:28:54.511221 systemd[1]: Started cri-containerd-68f19b46beab2b56c8009c1855f68f38b8269abc356d1ffc6e49a97a2b50bf50.scope - libcontainer container 68f19b46beab2b56c8009c1855f68f38b8269abc356d1ffc6e49a97a2b50bf50.
Apr 30 03:28:54.554220 containerd[1987]: time="2025-04-30T03:28:54.553580622Z" level=info msg="StartContainer for \"68f19b46beab2b56c8009c1855f68f38b8269abc356d1ffc6e49a97a2b50bf50\" returns successfully"
Apr 30 03:28:54.730051 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Apr 30 03:28:54.731246 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Apr 30 03:28:54.778726 systemd[1]: cri-containerd-68f19b46beab2b56c8009c1855f68f38b8269abc356d1ffc6e49a97a2b50bf50.scope: Deactivated successfully.
Apr 30 03:28:54.822083 containerd[1987]: time="2025-04-30T03:28:54.814962088Z" level=info msg="shim disconnected" id=68f19b46beab2b56c8009c1855f68f38b8269abc356d1ffc6e49a97a2b50bf50 namespace=k8s.io
Apr 30 03:28:54.822333 containerd[1987]: time="2025-04-30T03:28:54.822085148Z" level=warning msg="cleaning up after shim disconnected" id=68f19b46beab2b56c8009c1855f68f38b8269abc356d1ffc6e49a97a2b50bf50 namespace=k8s.io
Apr 30 03:28:54.822333 containerd[1987]: time="2025-04-30T03:28:54.822105844Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:28:55.296913 kubelet[3229]: I0430 03:28:55.296847 3229 scope.go:117] "RemoveContainer" containerID="68f19b46beab2b56c8009c1855f68f38b8269abc356d1ffc6e49a97a2b50bf50"
Apr 30 03:28:55.306814 containerd[1987]: time="2025-04-30T03:28:55.306590533Z" level=info msg="CreateContainer within sandbox \"5879147ccfe811570ed1341506e5018fa7535398b1869bcc735322d50da9253a\" for container &ContainerMetadata{Name:calico-node,Attempt:1,}"
Apr 30 03:28:55.329950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2208266140.mount: Deactivated successfully.
Apr 30 03:28:55.332309 containerd[1987]: time="2025-04-30T03:28:55.332270721Z" level=info msg="CreateContainer within sandbox \"5879147ccfe811570ed1341506e5018fa7535398b1869bcc735322d50da9253a\" for &ContainerMetadata{Name:calico-node,Attempt:1,} returns container id \"c461626a5809b36728fd91635e8668eb41c8faef97a5169a06a55697b9a87205\""
Apr 30 03:28:55.334814 containerd[1987]: time="2025-04-30T03:28:55.333177390Z" level=info msg="StartContainer for \"c461626a5809b36728fd91635e8668eb41c8faef97a5169a06a55697b9a87205\""
Apr 30 03:28:55.386236 systemd[1]: Started cri-containerd-c461626a5809b36728fd91635e8668eb41c8faef97a5169a06a55697b9a87205.scope - libcontainer container c461626a5809b36728fd91635e8668eb41c8faef97a5169a06a55697b9a87205.
Apr 30 03:28:55.419203 containerd[1987]: time="2025-04-30T03:28:55.419153737Z" level=info msg="StartContainer for \"c461626a5809b36728fd91635e8668eb41c8faef97a5169a06a55697b9a87205\" returns successfully"
Apr 30 03:28:55.526822 systemd[1]: cri-containerd-c461626a5809b36728fd91635e8668eb41c8faef97a5169a06a55697b9a87205.scope: Deactivated successfully.
Apr 30 03:28:55.555815 containerd[1987]: time="2025-04-30T03:28:55.555691450Z" level=info msg="shim disconnected" id=c461626a5809b36728fd91635e8668eb41c8faef97a5169a06a55697b9a87205 namespace=k8s.io
Apr 30 03:28:55.555815 containerd[1987]: time="2025-04-30T03:28:55.555742500Z" level=warning msg="cleaning up after shim disconnected" id=c461626a5809b36728fd91635e8668eb41c8faef97a5169a06a55697b9a87205 namespace=k8s.io
Apr 30 03:28:55.555815 containerd[1987]: time="2025-04-30T03:28:55.555751130Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:28:55.883292 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c461626a5809b36728fd91635e8668eb41c8faef97a5169a06a55697b9a87205-rootfs.mount: Deactivated successfully.
Apr 30 03:28:56.305894 kubelet[3229]: I0430 03:28:56.305848 3229 scope.go:117] "RemoveContainer" containerID="68f19b46beab2b56c8009c1855f68f38b8269abc356d1ffc6e49a97a2b50bf50"
Apr 30 03:28:56.306339 kubelet[3229]: I0430 03:28:56.306055 3229 scope.go:117] "RemoveContainer" containerID="c461626a5809b36728fd91635e8668eb41c8faef97a5169a06a55697b9a87205"
Apr 30 03:28:56.306744 kubelet[3229]: E0430 03:28:56.306513 3229 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=calico-node pod=calico-node-mvv2q_calico-system(d76778f1-83ce-4e9e-8d18-b99aeac99167)\"" pod="calico-system/calico-node-mvv2q" podUID="d76778f1-83ce-4e9e-8d18-b99aeac99167"
Apr 30 03:28:56.329188 containerd[1987]: time="2025-04-30T03:28:56.329007923Z" level=info msg="RemoveContainer for \"68f19b46beab2b56c8009c1855f68f38b8269abc356d1ffc6e49a97a2b50bf50\""
Apr 30 03:28:56.340997 containerd[1987]: time="2025-04-30T03:28:56.340941994Z" level=info msg="RemoveContainer for \"68f19b46beab2b56c8009c1855f68f38b8269abc356d1ffc6e49a97a2b50bf50\" returns successfully"
Apr 30 03:28:57.292190 kubelet[3229]: I0430 03:28:57.292133 3229 scope.go:117] "RemoveContainer" containerID="c461626a5809b36728fd91635e8668eb41c8faef97a5169a06a55697b9a87205"
Apr 30 03:28:57.302207 kubelet[3229]: E0430 03:28:57.302150 3229 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=calico-node pod=calico-node-mvv2q_calico-system(d76778f1-83ce-4e9e-8d18-b99aeac99167)\"" pod="calico-system/calico-node-mvv2q" podUID="d76778f1-83ce-4e9e-8d18-b99aeac99167"
Apr 30 03:28:57.683333 systemd[1]: Started sshd@9-172.31.16.5:22-147.75.109.163:45876.service - OpenSSH per-connection server daemon (147.75.109.163:45876).
Apr 30 03:28:57.970888 sshd[4683]: Accepted publickey for core from 147.75.109.163 port 45876 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g
Apr 30 03:28:57.973606 sshd[4683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:28:57.979531 systemd-logind[1956]: New session 10 of user core.
Apr 30 03:28:57.984229 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 30 03:28:58.275165 sshd[4683]: pam_unix(sshd:session): session closed for user core
Apr 30 03:28:58.279056 systemd-logind[1956]: Session 10 logged out. Waiting for processes to exit.
Apr 30 03:28:58.281184 systemd[1]: sshd@9-172.31.16.5:22-147.75.109.163:45876.service: Deactivated successfully.
Apr 30 03:28:58.284738 systemd[1]: session-10.scope: Deactivated successfully.
Apr 30 03:28:58.286261 systemd-logind[1956]: Removed session 10.
Apr 30 03:28:58.325736 systemd[1]: Started sshd@10-172.31.16.5:22-147.75.109.163:45892.service - OpenSSH per-connection server daemon (147.75.109.163:45892).
Apr 30 03:28:58.568343 sshd[4697]: Accepted publickey for core from 147.75.109.163 port 45892 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g
Apr 30 03:28:58.569887 sshd[4697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:28:58.575088 systemd-logind[1956]: New session 11 of user core.
Apr 30 03:28:58.580238 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 30 03:28:58.880494 sshd[4697]: pam_unix(sshd:session): session closed for user core
Apr 30 03:28:58.884999 systemd[1]: sshd@10-172.31.16.5:22-147.75.109.163:45892.service: Deactivated successfully.
Apr 30 03:28:58.887883 systemd[1]: session-11.scope: Deactivated successfully.
Apr 30 03:28:58.889946 systemd-logind[1956]: Session 11 logged out. Waiting for processes to exit.
Apr 30 03:28:58.891519 systemd-logind[1956]: Removed session 11.
Apr 30 03:28:58.933342 systemd[1]: Started sshd@11-172.31.16.5:22-147.75.109.163:45906.service - OpenSSH per-connection server daemon (147.75.109.163:45906).
Apr 30 03:28:59.174005 sshd[4708]: Accepted publickey for core from 147.75.109.163 port 45906 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g
Apr 30 03:28:59.175508 sshd[4708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:28:59.180479 systemd-logind[1956]: New session 12 of user core.
Apr 30 03:28:59.188286 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 30 03:28:59.448714 sshd[4708]: pam_unix(sshd:session): session closed for user core
Apr 30 03:28:59.452633 systemd[1]: sshd@11-172.31.16.5:22-147.75.109.163:45906.service: Deactivated successfully.
Apr 30 03:28:59.455095 systemd[1]: session-12.scope: Deactivated successfully.
Apr 30 03:28:59.456290 systemd-logind[1956]: Session 12 logged out. Waiting for processes to exit.
Apr 30 03:28:59.457598 systemd-logind[1956]: Removed session 12.
Apr 30 03:29:00.110646 containerd[1987]: time="2025-04-30T03:29:00.106725821Z" level=info msg="StopPodSandbox for \"d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0\""
Apr 30 03:29:00.123351 containerd[1987]: time="2025-04-30T03:29:00.117727124Z" level=info msg="StopPodSandbox for \"b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a\""
Apr 30 03:29:00.131753 containerd[1987]: time="2025-04-30T03:29:00.131359890Z" level=info msg="StopPodSandbox for \"5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf\""
Apr 30 03:29:00.164904 containerd[1987]: time="2025-04-30T03:29:00.164851379Z" level=info msg="StopPodSandbox for \"263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20\""
Apr 30 03:29:00.222722 containerd[1987]: time="2025-04-30T03:29:00.222652478Z" level=error msg="StopPodSandbox for \"5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf\" failed" error="failed to destroy network for sandbox \"5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:29:00.223377 kubelet[3229]: E0430 03:29:00.223151 3229 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf"
Apr 30 03:29:00.223377 kubelet[3229]: E0430 03:29:00.223237 3229 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf"}
Apr 30 03:29:00.223377 kubelet[3229]: E0430 03:29:00.223295 3229 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c125f289-79ba-4045-ac98-3376fc26a663\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Apr 30 03:29:00.223377 kubelet[3229]: E0430 03:29:00.223327 3229 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c125f289-79ba-4045-ac98-3376fc26a663\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jbvr9" podUID="c125f289-79ba-4045-ac98-3376fc26a663"
Apr 30 03:29:00.260382 containerd[1987]: time="2025-04-30T03:29:00.260335464Z" level=error msg="StopPodSandbox for \"d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0\" failed" error="failed to destroy network for sandbox \"d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:29:00.260971 containerd[1987]: time="2025-04-30T03:29:00.260905579Z" level=error msg="StopPodSandbox for \"b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a\" failed" error="failed to destroy network for sandbox \"b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:29:00.261246 kubelet[3229]: E0430 03:29:00.261206 3229 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a"
Apr 30 03:29:00.261495 kubelet[3229]: E0430 03:29:00.261472 3229 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a"}
Apr 30 03:29:00.261962 kubelet[3229]: E0430 03:29:00.261379 3229 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0"
Apr 30 03:29:00.261962 kubelet[3229]: E0430 03:29:00.261854 3229 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0"}
Apr 30 03:29:00.261962 kubelet[3229]: E0430 03:29:00.261893 3229 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"55fd6c70-afa8-486e-b82a-44a54b2e3758\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Apr 30 03:29:00.261962 kubelet[3229]: E0430 03:29:00.261924 3229 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"55fd6c70-afa8-486e-b82a-44a54b2e3758\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-w297d" podUID="55fd6c70-afa8-486e-b82a-44a54b2e3758"
Apr 30 03:29:00.265328 kubelet[3229]: E0430 03:29:00.261723 3229 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6a4e6e55-d984-4974-8642-752d6712e827\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Apr 30 03:29:00.265328 kubelet[3229]: E0430 03:29:00.262160 3229 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6a4e6e55-d984-4974-8642-752d6712e827\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-589dd46bc6-rd5jw" podUID="6a4e6e55-d984-4974-8642-752d6712e827"
Apr 30 03:29:00.265328 kubelet[3229]: E0430 03:29:00.264004 3229 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20"
Apr 30 03:29:00.265328 kubelet[3229]: E0430 03:29:00.264084 3229 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20"}
Apr 30 03:29:00.266069 containerd[1987]: time="2025-04-30T03:29:00.263817363Z" level=error msg="StopPodSandbox for \"263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20\" failed" error="failed to destroy network for sandbox \"263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:29:00.266139 kubelet[3229]: E0430 03:29:00.264121 3229 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"aa805bd1-beeb-4584-9d9d-3007469a5975\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Apr 30 03:29:00.266139 kubelet[3229]: E0430 03:29:00.264148 3229 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"aa805bd1-beeb-4584-9d9d-3007469a5975\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-t7qqn" podUID="aa805bd1-beeb-4584-9d9d-3007469a5975"
Apr 30 03:29:01.046956 containerd[1987]: time="2025-04-30T03:29:01.046325125Z" level=info msg="StopPodSandbox for \"4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e\""
Apr 30 03:29:01.079657 containerd[1987]: time="2025-04-30T03:29:01.079603770Z" level=error msg="StopPodSandbox for \"4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e\" failed" error="failed to destroy network for sandbox \"4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:29:01.080069 kubelet[3229]: E0430 03:29:01.080011 3229 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e"
Apr 30 03:29:01.080219 kubelet[3229]: E0430 03:29:01.080084 3229 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e"}
Apr 30 03:29:01.080219 kubelet[3229]: E0430 03:29:01.080131 3229 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"09ed9b89-1a51-4296-b830-803e57059495\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Apr 30 03:29:01.080219 kubelet[3229]: E0430 03:29:01.080164 3229 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"09ed9b89-1a51-4296-b830-803e57059495\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c489dd647-9ccl9" podUID="09ed9b89-1a51-4296-b830-803e57059495"
Apr 30 03:29:02.049904 containerd[1987]: time="2025-04-30T03:29:02.047252951Z" level=info msg="StopPodSandbox for \"95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364\""
Apr 30 03:29:02.165822 containerd[1987]: time="2025-04-30T03:29:02.165742491Z" level=error msg="StopPodSandbox for \"95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364\" failed" error="failed to destroy network for sandbox \"95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:29:02.166310 kubelet[3229]: E0430 03:29:02.165995 3229 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364"
Apr 30 03:29:02.166310 kubelet[3229]: E0430 03:29:02.166127 3229 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364"}
Apr 30 03:29:02.166310 kubelet[3229]: E0430 03:29:02.166166 3229 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2ca2a0d4-a8bb-4ffb-bf87-3cfe5b949605\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Apr 30 03:29:02.166310 kubelet[3229]: E0430 03:29:02.166191 3229 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2ca2a0d4-a8bb-4ffb-bf87-3cfe5b949605\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c489dd647-zhfwb" podUID="2ca2a0d4-a8bb-4ffb-bf87-3cfe5b949605"
Apr 30 03:29:04.507204 systemd[1]: Started sshd@12-172.31.16.5:22-147.75.109.163:45922.service - OpenSSH per-connection server daemon (147.75.109.163:45922).
Apr 30 03:29:04.760701 sshd[4838]: Accepted publickey for core from 147.75.109.163 port 45922 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g
Apr 30 03:29:04.762758 sshd[4838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:29:04.767744 systemd-logind[1956]: New session 13 of user core.
Apr 30 03:29:04.774333 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 30 03:29:05.137403 sshd[4838]: pam_unix(sshd:session): session closed for user core
Apr 30 03:29:05.144892 systemd[1]: sshd@12-172.31.16.5:22-147.75.109.163:45922.service: Deactivated successfully.
Apr 30 03:29:05.148900 systemd[1]: session-13.scope: Deactivated successfully.
Apr 30 03:29:05.151817 systemd-logind[1956]: Session 13 logged out. Waiting for processes to exit.
Apr 30 03:29:05.153898 systemd-logind[1956]: Removed session 13.
Apr 30 03:29:07.673863 kubelet[3229]: I0430 03:29:07.673829 3229 scope.go:117] "RemoveContainer" containerID="c461626a5809b36728fd91635e8668eb41c8faef97a5169a06a55697b9a87205"
Apr 30 03:29:07.684623 containerd[1987]: time="2025-04-30T03:29:07.684508309Z" level=info msg="CreateContainer within sandbox \"5879147ccfe811570ed1341506e5018fa7535398b1869bcc735322d50da9253a\" for container &ContainerMetadata{Name:calico-node,Attempt:2,}"
Apr 30 03:29:07.726382 containerd[1987]: time="2025-04-30T03:29:07.726335462Z" level=info msg="CreateContainer within sandbox \"5879147ccfe811570ed1341506e5018fa7535398b1869bcc735322d50da9253a\" for &ContainerMetadata{Name:calico-node,Attempt:2,} returns container id \"73fa3c73dd1638a5c561427573ab0d26c939750c9e6ffa8b0a9cc173e482822c\""
Apr 30 03:29:07.727650 containerd[1987]: time="2025-04-30T03:29:07.727371827Z" level=info msg="StartContainer for \"73fa3c73dd1638a5c561427573ab0d26c939750c9e6ffa8b0a9cc173e482822c\""
Apr 30 03:29:07.760270 systemd[1]: Started cri-containerd-73fa3c73dd1638a5c561427573ab0d26c939750c9e6ffa8b0a9cc173e482822c.scope - libcontainer container 73fa3c73dd1638a5c561427573ab0d26c939750c9e6ffa8b0a9cc173e482822c.
Apr 30 03:29:07.801189 containerd[1987]: time="2025-04-30T03:29:07.801132271Z" level=info msg="StartContainer for \"73fa3c73dd1638a5c561427573ab0d26c939750c9e6ffa8b0a9cc173e482822c\" returns successfully"
Apr 30 03:29:07.998255 systemd[1]: cri-containerd-73fa3c73dd1638a5c561427573ab0d26c939750c9e6ffa8b0a9cc173e482822c.scope: Deactivated successfully.
Apr 30 03:29:08.027946 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73fa3c73dd1638a5c561427573ab0d26c939750c9e6ffa8b0a9cc173e482822c-rootfs.mount: Deactivated successfully.
Apr 30 03:29:08.033498 containerd[1987]: time="2025-04-30T03:29:08.033433050Z" level=info msg="shim disconnected" id=73fa3c73dd1638a5c561427573ab0d26c939750c9e6ffa8b0a9cc173e482822c namespace=k8s.io
Apr 30 03:29:08.033498 containerd[1987]: time="2025-04-30T03:29:08.033486984Z" level=warning msg="cleaning up after shim disconnected" id=73fa3c73dd1638a5c561427573ab0d26c939750c9e6ffa8b0a9cc173e482822c namespace=k8s.io
Apr 30 03:29:08.033498 containerd[1987]: time="2025-04-30T03:29:08.033499735Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:29:08.333596 kubelet[3229]: I0430 03:29:08.333406 3229 scope.go:117] "RemoveContainer" containerID="c461626a5809b36728fd91635e8668eb41c8faef97a5169a06a55697b9a87205"
Apr 30 03:29:08.333876 kubelet[3229]: I0430 03:29:08.333842 3229 scope.go:117] "RemoveContainer" containerID="73fa3c73dd1638a5c561427573ab0d26c939750c9e6ffa8b0a9cc173e482822c"
Apr 30 03:29:08.335351 kubelet[3229]: E0430 03:29:08.335302 3229 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-mvv2q_calico-system(d76778f1-83ce-4e9e-8d18-b99aeac99167)\"" pod="calico-system/calico-node-mvv2q" podUID="d76778f1-83ce-4e9e-8d18-b99aeac99167"
Apr 30 03:29:08.335623 containerd[1987]: time="2025-04-30T03:29:08.335585180Z" level=info msg="RemoveContainer for \"c461626a5809b36728fd91635e8668eb41c8faef97a5169a06a55697b9a87205\""
Apr 30 03:29:08.340943 containerd[1987]: time="2025-04-30T03:29:08.340482747Z" level=info msg="RemoveContainer for \"c461626a5809b36728fd91635e8668eb41c8faef97a5169a06a55697b9a87205\" returns successfully"
Apr 30 03:29:10.189418 systemd[1]: Started sshd@13-172.31.16.5:22-147.75.109.163:43366.service - OpenSSH per-connection server daemon (147.75.109.163:43366).
Apr 30 03:29:10.486924 sshd[4918]: Accepted publickey for core from 147.75.109.163 port 43366 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g
Apr 30 03:29:10.488247 sshd[4918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:29:10.494492 systemd-logind[1956]: New session 14 of user core.
Apr 30 03:29:10.501239 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 30 03:29:10.807247 sshd[4918]: pam_unix(sshd:session): session closed for user core
Apr 30 03:29:10.811137 systemd-logind[1956]: Session 14 logged out. Waiting for processes to exit.
Apr 30 03:29:10.811859 systemd[1]: sshd@13-172.31.16.5:22-147.75.109.163:43366.service: Deactivated successfully.
Apr 30 03:29:10.813942 systemd[1]: session-14.scope: Deactivated successfully.
Apr 30 03:29:10.814913 systemd-logind[1956]: Removed session 14.
Apr 30 03:29:11.046649 containerd[1987]: time="2025-04-30T03:29:11.046486967Z" level=info msg="StopPodSandbox for \"d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0\""
Apr 30 03:29:11.072953 containerd[1987]: time="2025-04-30T03:29:11.072782040Z" level=error msg="StopPodSandbox for \"d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0\" failed" error="failed to destroy network for sandbox \"d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:29:11.073155 kubelet[3229]: E0430 03:29:11.073051 3229 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0"
Apr 30 03:29:11.073155 kubelet[3229]: E0430 03:29:11.073097 3229 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0"}
Apr 30 03:29:11.073155 kubelet[3229]: E0430 03:29:11.073132 3229 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"55fd6c70-afa8-486e-b82a-44a54b2e3758\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Apr 30 03:29:11.073599 kubelet[3229]: E0430 03:29:11.073153 3229 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"55fd6c70-afa8-486e-b82a-44a54b2e3758\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-w297d" podUID="55fd6c70-afa8-486e-b82a-44a54b2e3758"
Apr 30 03:29:12.051551 containerd[1987]: time="2025-04-30T03:29:12.050999684Z" level=info msg="StopPodSandbox for \"4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e\""
Apr 30 03:29:12.051551 containerd[1987]: time="2025-04-30T03:29:12.051280350Z" level=info msg="StopPodSandbox for \"263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20\""
Apr 30 03:29:12.088783 containerd[1987]: time="2025-04-30T03:29:12.088454772Z" level=error msg="StopPodSandbox for \"263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20\" failed" error="failed to destroy network for sandbox \"263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:29:12.089174 kubelet[3229]: E0430 03:29:12.088676 3229 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20"
Apr 30 03:29:12.089174 kubelet[3229]: E0430 03:29:12.088794 3229 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20"}
Apr 30 03:29:12.089174 kubelet[3229]: E0430 03:29:12.088827 3229 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"aa805bd1-beeb-4584-9d9d-3007469a5975\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Apr 30 03:29:12.089174 kubelet[3229]: E0430 03:29:12.088849 3229 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"aa805bd1-beeb-4584-9d9d-3007469a5975\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-t7qqn" podUID="aa805bd1-beeb-4584-9d9d-3007469a5975"
Apr 30 03:29:12.094765 containerd[1987]: time="2025-04-30T03:29:12.094707688Z" level=error msg="StopPodSandbox for \"4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e\" failed" error="failed to destroy network for sandbox \"4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:29:12.095082 kubelet[3229]: E0430 03:29:12.095011 3229 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e"
Apr 30 03:29:12.095213 kubelet[3229]: E0430 03:29:12.095088 3229 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e"}
Apr 30 03:29:12.095213 kubelet[3229]: E0430 03:29:12.095122 3229 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"09ed9b89-1a51-4296-b830-803e57059495\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Apr 30 03:29:12.095213 kubelet[3229]: E0430 03:29:12.095144 3229 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"09ed9b89-1a51-4296-b830-803e57059495\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c489dd647-9ccl9" podUID="09ed9b89-1a51-4296-b830-803e57059495"
Apr 30 03:29:13.046415 containerd[1987]: time="2025-04-30T03:29:13.046304068Z" level=info msg="StopPodSandbox for \"b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a\""
Apr 30 03:29:13.075660 containerd[1987]: time="2025-04-30T03:29:13.075581477Z" level=error msg="StopPodSandbox for \"b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a\" failed" error="failed to destroy network for sandbox \"b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:29:13.076235 kubelet[3229]: E0430 03:29:13.076191 3229 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a"
Apr 30 03:29:13.076360 kubelet[3229]: E0430 03:29:13.076247 3229 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a"}
Apr 30 03:29:13.076360 kubelet[3229]: E0430 03:29:13.076287 3229 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6a4e6e55-d984-4974-8642-752d6712e827\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Apr 30 03:29:13.076360 kubelet[3229]: E0430 03:29:13.076322 3229 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6a4e6e55-d984-4974-8642-752d6712e827\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-589dd46bc6-rd5jw"
podUID="6a4e6e55-d984-4974-8642-752d6712e827" Apr 30 03:29:14.047360 containerd[1987]: time="2025-04-30T03:29:14.047287274Z" level=info msg="StopPodSandbox for \"5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf\"" Apr 30 03:29:14.078083 containerd[1987]: time="2025-04-30T03:29:14.078035341Z" level=error msg="StopPodSandbox for \"5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf\" failed" error="failed to destroy network for sandbox \"5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:14.078713 kubelet[3229]: E0430 03:29:14.078289 3229 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf" Apr 30 03:29:14.078713 kubelet[3229]: E0430 03:29:14.078334 3229 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf"} Apr 30 03:29:14.078713 kubelet[3229]: E0430 03:29:14.078363 3229 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c125f289-79ba-4045-ac98-3376fc26a663\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:29:14.078713 kubelet[3229]: E0430 03:29:14.078384 3229 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c125f289-79ba-4045-ac98-3376fc26a663\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jbvr9" podUID="c125f289-79ba-4045-ac98-3376fc26a663" Apr 30 03:29:15.046977 containerd[1987]: time="2025-04-30T03:29:15.046139697Z" level=info msg="StopPodSandbox for \"95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364\"" Apr 30 03:29:15.075688 containerd[1987]: time="2025-04-30T03:29:15.075637356Z" level=error msg="StopPodSandbox for \"95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364\" failed" error="failed to destroy network for sandbox \"95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:15.075969 kubelet[3229]: E0430 03:29:15.075909 3229 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network 
for sandbox \"95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364" Apr 30 03:29:15.076082 kubelet[3229]: E0430 03:29:15.075969 3229 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364"} Apr 30 03:29:15.076082 kubelet[3229]: E0430 03:29:15.076046 3229 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2ca2a0d4-a8bb-4ffb-bf87-3cfe5b949605\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:29:15.076211 kubelet[3229]: E0430 03:29:15.076078 3229 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2ca2a0d4-a8bb-4ffb-bf87-3cfe5b949605\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c489dd647-zhfwb" podUID="2ca2a0d4-a8bb-4ffb-bf87-3cfe5b949605" Apr 30 03:29:15.855334 systemd[1]: Started sshd@14-172.31.16.5:22-147.75.109.163:43372.service - OpenSSH per-connection server daemon (147.75.109.163:43372). Apr 30 03:29:16.108218 sshd[5045]: Accepted publickey for core from 147.75.109.163 port 43372 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:29:16.109685 sshd[5045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:29:16.114843 systemd-logind[1956]: New session 15 of user core. Apr 30 03:29:16.119252 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 30 03:29:16.365518 sshd[5045]: pam_unix(sshd:session): session closed for user core Apr 30 03:29:16.368361 systemd[1]: sshd@14-172.31.16.5:22-147.75.109.163:43372.service: Deactivated successfully. Apr 30 03:29:16.370458 systemd[1]: session-15.scope: Deactivated successfully. Apr 30 03:29:16.371770 systemd-logind[1956]: Session 15 logged out. Waiting for processes to exit. Apr 30 03:29:16.373600 systemd-logind[1956]: Removed session 15. Apr 30 03:29:21.413215 systemd[1]: Started sshd@15-172.31.16.5:22-147.75.109.163:55370.service - OpenSSH per-connection server daemon (147.75.109.163:55370). Apr 30 03:29:21.665663 sshd[5058]: Accepted publickey for core from 147.75.109.163 port 55370 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:29:21.667076 sshd[5058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:29:21.672162 systemd-logind[1956]: New session 16 of user core. Apr 30 03:29:21.678225 systemd[1]: Started session-16.scope - Session 16 of User core. 
Apr 30 03:29:21.925496 sshd[5058]: pam_unix(sshd:session): session closed for user core Apr 30 03:29:21.929923 systemd-logind[1956]: Session 16 logged out. Waiting for processes to exit. Apr 30 03:29:21.930705 systemd[1]: sshd@15-172.31.16.5:22-147.75.109.163:55370.service: Deactivated successfully. Apr 30 03:29:21.932662 systemd[1]: session-16.scope: Deactivated successfully. Apr 30 03:29:21.933653 systemd-logind[1956]: Removed session 16. Apr 30 03:29:22.048257 kubelet[3229]: I0430 03:29:22.046238 3229 scope.go:117] "RemoveContainer" containerID="73fa3c73dd1638a5c561427573ab0d26c939750c9e6ffa8b0a9cc173e482822c" Apr 30 03:29:22.048257 kubelet[3229]: E0430 03:29:22.046639 3229 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-mvv2q_calico-system(d76778f1-83ce-4e9e-8d18-b99aeac99167)\"" pod="calico-system/calico-node-mvv2q" podUID="d76778f1-83ce-4e9e-8d18-b99aeac99167" Apr 30 03:29:23.529009 kubelet[3229]: I0430 03:29:23.528968 3229 scope.go:117] "RemoveContainer" containerID="73fa3c73dd1638a5c561427573ab0d26c939750c9e6ffa8b0a9cc173e482822c" Apr 30 03:29:23.529431 kubelet[3229]: E0430 03:29:23.529409 3229 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-mvv2q_calico-system(d76778f1-83ce-4e9e-8d18-b99aeac99167)\"" pod="calico-system/calico-node-mvv2q" podUID="d76778f1-83ce-4e9e-8d18-b99aeac99167" Apr 30 03:29:26.047168 containerd[1987]: time="2025-04-30T03:29:26.046920868Z" level=info msg="StopPodSandbox for \"d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0\"" Apr 30 03:29:26.081128 containerd[1987]: time="2025-04-30T03:29:26.081070945Z" level=error msg="StopPodSandbox for \"d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0\" failed" error="failed to destroy network for sandbox \"d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:26.081403 kubelet[3229]: E0430 03:29:26.081356 3229 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0" Apr 30 03:29:26.081811 kubelet[3229]: E0430 03:29:26.081430 3229 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0"} Apr 30 03:29:26.081811 kubelet[3229]: E0430 03:29:26.081473 3229 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"55fd6c70-afa8-486e-b82a-44a54b2e3758\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:29:26.081811 kubelet[3229]: E0430 03:29:26.081522 3229 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"55fd6c70-afa8-486e-b82a-44a54b2e3758\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-w297d" podUID="55fd6c70-afa8-486e-b82a-44a54b2e3758" Apr 30 03:29:26.978372 systemd[1]: Started sshd@16-172.31.16.5:22-147.75.109.163:54378.service - OpenSSH per-connection server daemon (147.75.109.163:54378). Apr 30 03:29:27.047224 containerd[1987]: time="2025-04-30T03:29:27.046839972Z" level=info msg="StopPodSandbox for \"95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364\"" Apr 30 03:29:27.047840 containerd[1987]: time="2025-04-30T03:29:27.047796277Z" level=info msg="StopPodSandbox for \"4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e\"" Apr 30 03:29:27.050589 containerd[1987]: time="2025-04-30T03:29:27.049731489Z" level=info msg="StopPodSandbox for \"263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20\"" Apr 30 03:29:27.118691 containerd[1987]: time="2025-04-30T03:29:27.118635364Z" level=error msg="StopPodSandbox for \"263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20\" failed" error="failed to destroy network for sandbox \"263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:27.119357 kubelet[3229]: E0430 03:29:27.119153 3229 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20" Apr 30 03:29:27.119357 kubelet[3229]: E0430 03:29:27.119217 3229 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20"} Apr 30 03:29:27.119357 kubelet[3229]: E0430 03:29:27.119263 3229 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"aa805bd1-beeb-4584-9d9d-3007469a5975\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:29:27.119357 kubelet[3229]: E0430 03:29:27.119297 3229 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"aa805bd1-beeb-4584-9d9d-3007469a5975\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-t7qqn" podUID="aa805bd1-beeb-4584-9d9d-3007469a5975" Apr 30 03:29:27.122849 containerd[1987]: time="2025-04-30T03:29:27.122393838Z" level=error msg="StopPodSandbox for \"95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364\" failed" error="failed to destroy network for sandbox \"95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:27.123085 kubelet[3229]: E0430 03:29:27.122620 3229 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364" Apr 30 03:29:27.123085 kubelet[3229]: E0430 03:29:27.122681 3229 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364"} Apr 30 03:29:27.123085 kubelet[3229]: E0430 03:29:27.122722 3229 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2ca2a0d4-a8bb-4ffb-bf87-3cfe5b949605\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:29:27.123085 kubelet[3229]: E0430 03:29:27.122755 3229 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2ca2a0d4-a8bb-4ffb-bf87-3cfe5b949605\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c489dd647-zhfwb" podUID="2ca2a0d4-a8bb-4ffb-bf87-3cfe5b949605" Apr 30 03:29:27.130593 containerd[1987]: time="2025-04-30T03:29:27.130537324Z" level=error msg="StopPodSandbox for \"4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e\" failed" error="failed to destroy network for sandbox \"4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:27.130835 kubelet[3229]: E0430 03:29:27.130783 3229 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e" Apr 30 03:29:27.130924 kubelet[3229]: E0430 03:29:27.130837 3229 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e"} Apr 30 03:29:27.130924 kubelet[3229]: E0430 03:29:27.130883 3229 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"09ed9b89-1a51-4296-b830-803e57059495\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:29:27.131056 kubelet[3229]: E0430 03:29:27.130913 3229 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"09ed9b89-1a51-4296-b830-803e57059495\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c489dd647-9ccl9" podUID="09ed9b89-1a51-4296-b830-803e57059495" Apr 30 03:29:27.226732 sshd[5090]: Accepted publickey for core from 147.75.109.163 port 54378 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:29:27.228393 sshd[5090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:29:27.233999 systemd-logind[1956]: New session 17 of user core. Apr 30 03:29:27.244299 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 30 03:29:27.480317 sshd[5090]: pam_unix(sshd:session): session closed for user core Apr 30 03:29:27.483591 systemd[1]: sshd@16-172.31.16.5:22-147.75.109.163:54378.service: Deactivated successfully. Apr 30 03:29:27.485695 systemd[1]: session-17.scope: Deactivated successfully. Apr 30 03:29:27.487883 systemd-logind[1956]: Session 17 logged out. Waiting for processes to exit. Apr 30 03:29:27.489554 systemd-logind[1956]: Removed session 17. 
Apr 30 03:29:28.047764 containerd[1987]: time="2025-04-30T03:29:28.047426341Z" level=info msg="StopPodSandbox for \"b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a\"" Apr 30 03:29:28.047764 containerd[1987]: time="2025-04-30T03:29:28.047520612Z" level=info msg="StopPodSandbox for \"5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf\"" Apr 30 03:29:28.110867 containerd[1987]: time="2025-04-30T03:29:28.110260918Z" level=error msg="StopPodSandbox for \"b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a\" failed" error="failed to destroy network for sandbox \"b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:28.111073 kubelet[3229]: E0430 03:29:28.110604 3229 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a" Apr 30 03:29:28.111073 kubelet[3229]: E0430 03:29:28.110661 3229 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a"} Apr 30 03:29:28.111073 kubelet[3229]: E0430 03:29:28.110714 3229 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6a4e6e55-d984-4974-8642-752d6712e827\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:29:28.111073 kubelet[3229]: E0430 03:29:28.110747 3229 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6a4e6e55-d984-4974-8642-752d6712e827\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-589dd46bc6-rd5jw" podUID="6a4e6e55-d984-4974-8642-752d6712e827" Apr 30 03:29:28.113375 containerd[1987]: time="2025-04-30T03:29:28.113325225Z" level=error msg="StopPodSandbox for \"5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf\" failed" error="failed to destroy network for sandbox \"5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:28.113607 kubelet[3229]: E0430 03:29:28.113542 3229 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf" Apr 30 03:29:28.113607 kubelet[3229]: E0430 03:29:28.113598 3229 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf"} Apr 30 03:29:28.113786 kubelet[3229]: E0430 03:29:28.113639 3229 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c125f289-79ba-4045-ac98-3376fc26a663\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:29:28.113786 kubelet[3229]: E0430 03:29:28.113678 3229 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c125f289-79ba-4045-ac98-3376fc26a663\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jbvr9" podUID="c125f289-79ba-4045-ac98-3376fc26a663" Apr 30 03:29:32.532280 systemd[1]: Started sshd@17-172.31.16.5:22-147.75.109.163:54386.service - OpenSSH per-connection server daemon (147.75.109.163:54386). Apr 30 03:29:32.781508 sshd[5197]: Accepted publickey for core from 147.75.109.163 port 54386 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:29:32.782948 sshd[5197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:29:32.789279 systemd-logind[1956]: New session 18 of user core. Apr 30 03:29:32.793965 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 30 03:29:33.046710 sshd[5197]: pam_unix(sshd:session): session closed for user core Apr 30 03:29:33.050154 systemd[1]: sshd@17-172.31.16.5:22-147.75.109.163:54386.service: Deactivated successfully. Apr 30 03:29:33.052347 systemd[1]: session-18.scope: Deactivated successfully. Apr 30 03:29:33.054665 systemd-logind[1956]: Session 18 logged out. Waiting for processes to exit. Apr 30 03:29:33.055803 systemd-logind[1956]: Removed session 18. 
Apr 30 03:29:34.748168 containerd[1987]: time="2025-04-30T03:29:34.748126051Z" level=info msg="StopPodSandbox for \"5879147ccfe811570ed1341506e5018fa7535398b1869bcc735322d50da9253a\"" Apr 30 03:29:34.748620 containerd[1987]: time="2025-04-30T03:29:34.748199240Z" level=info msg="Container to stop \"5c8471e5697ad99bafa14d45de8a2211fe724878f85640e9fd353cbb30710225\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 03:29:34.748620 containerd[1987]: time="2025-04-30T03:29:34.748212797Z" level=info msg="Container to stop \"885c5135d32e361bc33cdee6ebb4cc70cde58f1c0b3155c490acb84d873b70b1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 03:29:34.748620 containerd[1987]: time="2025-04-30T03:29:34.748222693Z" level=info msg="Container to stop \"73fa3c73dd1638a5c561427573ab0d26c939750c9e6ffa8b0a9cc173e482822c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 03:29:34.753987 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5879147ccfe811570ed1341506e5018fa7535398b1869bcc735322d50da9253a-shm.mount: Deactivated successfully. Apr 30 03:29:34.772225 systemd[1]: cri-containerd-5879147ccfe811570ed1341506e5018fa7535398b1869bcc735322d50da9253a.scope: Deactivated successfully. Apr 30 03:29:34.804861 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5879147ccfe811570ed1341506e5018fa7535398b1869bcc735322d50da9253a-rootfs.mount: Deactivated successfully. Apr 30 03:29:34.844622 containerd[1987]: time="2025-04-30T03:29:34.844554788Z" level=info msg="shim disconnected" id=5879147ccfe811570ed1341506e5018fa7535398b1869bcc735322d50da9253a namespace=k8s.io Apr 30 03:29:34.844622 containerd[1987]: time="2025-04-30T03:29:34.844613160Z" level=warning msg="cleaning up after shim disconnected" id=5879147ccfe811570ed1341506e5018fa7535398b1869bcc735322d50da9253a namespace=k8s.io Apr 30 03:29:34.844622 containerd[1987]: time="2025-04-30T03:29:34.844621598Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:29:34.875416 containerd[1987]: time="2025-04-30T03:29:34.875156091Z" level=info msg="TearDown network for sandbox \"5879147ccfe811570ed1341506e5018fa7535398b1869bcc735322d50da9253a\" successfully" Apr 30 03:29:34.875416 containerd[1987]: time="2025-04-30T03:29:34.875188716Z" level=info msg="StopPodSandbox for \"5879147ccfe811570ed1341506e5018fa7535398b1869bcc735322d50da9253a\" returns successfully" Apr 30 03:29:34.967934 kubelet[3229]: I0430 03:29:34.964618 3229 topology_manager.go:215] "Topology Admit Handler" podUID="865ac12a-6a99-43ed-939d-62a263eb4a02" podNamespace="calico-system" podName="calico-node-zg2qc" Apr 30 03:29:34.967934 kubelet[3229]: E0430 03:29:34.967765 3229 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d76778f1-83ce-4e9e-8d18-b99aeac99167" containerName="flexvol-driver" Apr 30 03:29:34.967934 kubelet[3229]: E0430 03:29:34.967786 3229 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d76778f1-83ce-4e9e-8d18-b99aeac99167" containerName="install-cni" Apr 30 03:29:34.967934 kubelet[3229]: E0430 03:29:34.967795 3229 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d76778f1-83ce-4e9e-8d18-b99aeac99167" containerName="calico-node" Apr 30 03:29:34.967934 kubelet[3229]: E0430 03:29:34.967805 3229 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d76778f1-83ce-4e9e-8d18-b99aeac99167" containerName="calico-node" Apr 30 03:29:34.974263 kubelet[3229]: I0430 03:29:34.973606 3229 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="d76778f1-83ce-4e9e-8d18-b99aeac99167" containerName="calico-node" Apr 30 03:29:34.974263 kubelet[3229]: I0430 03:29:34.973654 3229 memory_manager.go:354] "RemoveStaleState removing state" podUID="d76778f1-83ce-4e9e-8d18-b99aeac99167" containerName="calico-node" Apr 30 03:29:34.974263 kubelet[3229]: E0430 03:29:34.973756 3229 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d76778f1-83ce-4e9e-8d18-b99aeac99167" containerName="calico-node" Apr 30 03:29:34.974263 kubelet[3229]: I0430 03:29:34.973797 3229 memory_manager.go:354] "RemoveStaleState removing state" podUID="d76778f1-83ce-4e9e-8d18-b99aeac99167" containerName="calico-node" Apr 30 03:29:34.982408 systemd[1]: Created slice kubepods-besteffort-pod865ac12a_6a99_43ed_939d_62a263eb4a02.slice - libcontainer container kubepods-besteffort-pod865ac12a_6a99_43ed_939d_62a263eb4a02.slice. Apr 30 03:29:35.022321 kubelet[3229]: I0430 03:29:35.021845 3229 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d76778f1-83ce-4e9e-8d18-b99aeac99167-flexvol-driver-host\") pod \"d76778f1-83ce-4e9e-8d18-b99aeac99167\" (UID: \"d76778f1-83ce-4e9e-8d18-b99aeac99167\") " Apr 30 03:29:35.022321 kubelet[3229]: I0430 03:29:35.021895 3229 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpnh7\" (UniqueName: \"kubernetes.io/projected/d76778f1-83ce-4e9e-8d18-b99aeac99167-kube-api-access-wpnh7\") pod \"d76778f1-83ce-4e9e-8d18-b99aeac99167\" (UID: \"d76778f1-83ce-4e9e-8d18-b99aeac99167\") " Apr 30 03:29:35.022321 kubelet[3229]: I0430 03:29:35.021913 3229 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d76778f1-83ce-4e9e-8d18-b99aeac99167-lib-modules\") pod \"d76778f1-83ce-4e9e-8d18-b99aeac99167\" (UID: \"d76778f1-83ce-4e9e-8d18-b99aeac99167\") " Apr 30 03:29:35.022321 kubelet[3229]: I0430 03:29:35.021931 3229 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d76778f1-83ce-4e9e-8d18-b99aeac99167-policysync\") pod \"d76778f1-83ce-4e9e-8d18-b99aeac99167\" (UID: \"d76778f1-83ce-4e9e-8d18-b99aeac99167\") " Apr 30 03:29:35.022321 kubelet[3229]: I0430 03:29:35.021950 3229 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d76778f1-83ce-4e9e-8d18-b99aeac99167-tigera-ca-bundle\") pod \"d76778f1-83ce-4e9e-8d18-b99aeac99167\" (UID: \"d76778f1-83ce-4e9e-8d18-b99aeac99167\") " Apr 30 03:29:35.022321 kubelet[3229]: I0430 03:29:35.021964 3229 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d76778f1-83ce-4e9e-8d18-b99aeac99167-xtables-lock\") pod \"d76778f1-83ce-4e9e-8d18-b99aeac99167\" (UID: \"d76778f1-83ce-4e9e-8d18-b99aeac99167\") " Apr 30 03:29:35.022572 kubelet[3229]: I0430 03:29:35.021977 3229 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d76778f1-83ce-4e9e-8d18-b99aeac99167-var-run-calico\") pod \"d76778f1-83ce-4e9e-8d18-b99aeac99167\" (UID: \"d76778f1-83ce-4e9e-8d18-b99aeac99167\") " Apr 30 03:29:35.022572 kubelet[3229]: I0430 03:29:35.022000 3229 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: 
\"kubernetes.io/secret/d76778f1-83ce-4e9e-8d18-b99aeac99167-node-certs\") pod \"d76778f1-83ce-4e9e-8d18-b99aeac99167\" (UID: \"d76778f1-83ce-4e9e-8d18-b99aeac99167\") " Apr 30 03:29:35.022572 kubelet[3229]: I0430 03:29:35.022028 3229 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d76778f1-83ce-4e9e-8d18-b99aeac99167-var-lib-calico\") pod \"d76778f1-83ce-4e9e-8d18-b99aeac99167\" (UID: \"d76778f1-83ce-4e9e-8d18-b99aeac99167\") " Apr 30 03:29:35.022572 kubelet[3229]: I0430 03:29:35.022045 3229 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d76778f1-83ce-4e9e-8d18-b99aeac99167-cni-net-dir\") pod \"d76778f1-83ce-4e9e-8d18-b99aeac99167\" (UID: \"d76778f1-83ce-4e9e-8d18-b99aeac99167\") " Apr 30 03:29:35.022572 kubelet[3229]: I0430 03:29:35.022066 3229 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d76778f1-83ce-4e9e-8d18-b99aeac99167-cni-log-dir\") pod \"d76778f1-83ce-4e9e-8d18-b99aeac99167\" (UID: \"d76778f1-83ce-4e9e-8d18-b99aeac99167\") " Apr 30 03:29:35.022572 kubelet[3229]: I0430 03:29:35.022082 3229 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d76778f1-83ce-4e9e-8d18-b99aeac99167-cni-bin-dir\") pod \"d76778f1-83ce-4e9e-8d18-b99aeac99167\" (UID: \"d76778f1-83ce-4e9e-8d18-b99aeac99167\") " Apr 30 03:29:35.029657 kubelet[3229]: I0430 03:29:35.025844 3229 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d76778f1-83ce-4e9e-8d18-b99aeac99167-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d76778f1-83ce-4e9e-8d18-b99aeac99167" (UID: "d76778f1-83ce-4e9e-8d18-b99aeac99167"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:29:35.029657 kubelet[3229]: I0430 03:29:35.029063 3229 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d76778f1-83ce-4e9e-8d18-b99aeac99167-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "d76778f1-83ce-4e9e-8d18-b99aeac99167" (UID: "d76778f1-83ce-4e9e-8d18-b99aeac99167"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:29:35.029657 kubelet[3229]: I0430 03:29:35.026700 3229 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d76778f1-83ce-4e9e-8d18-b99aeac99167-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "d76778f1-83ce-4e9e-8d18-b99aeac99167" (UID: "d76778f1-83ce-4e9e-8d18-b99aeac99167"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:29:35.029911 kubelet[3229]: I0430 03:29:35.029887 3229 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d76778f1-83ce-4e9e-8d18-b99aeac99167-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d76778f1-83ce-4e9e-8d18-b99aeac99167" (UID: "d76778f1-83ce-4e9e-8d18-b99aeac99167"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:29:35.030064 kubelet[3229]: I0430 03:29:35.030051 3229 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d76778f1-83ce-4e9e-8d18-b99aeac99167-policysync" (OuterVolumeSpecName: "policysync") pod "d76778f1-83ce-4e9e-8d18-b99aeac99167" (UID: "d76778f1-83ce-4e9e-8d18-b99aeac99167"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:29:35.033096 kubelet[3229]: I0430 03:29:35.033061 3229 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d76778f1-83ce-4e9e-8d18-b99aeac99167-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "d76778f1-83ce-4e9e-8d18-b99aeac99167" (UID: "d76778f1-83ce-4e9e-8d18-b99aeac99167"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:29:35.037189 kubelet[3229]: I0430 03:29:35.033246 3229 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d76778f1-83ce-4e9e-8d18-b99aeac99167-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "d76778f1-83ce-4e9e-8d18-b99aeac99167" (UID: "d76778f1-83ce-4e9e-8d18-b99aeac99167"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:29:35.037189 kubelet[3229]: I0430 03:29:35.033270 3229 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d76778f1-83ce-4e9e-8d18-b99aeac99167-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "d76778f1-83ce-4e9e-8d18-b99aeac99167" (UID: "d76778f1-83ce-4e9e-8d18-b99aeac99167"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:29:35.037189 kubelet[3229]: I0430 03:29:35.033276 3229 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d76778f1-83ce-4e9e-8d18-b99aeac99167-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "d76778f1-83ce-4e9e-8d18-b99aeac99167" (UID: "d76778f1-83ce-4e9e-8d18-b99aeac99167"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:29:35.037189 kubelet[3229]: I0430 03:29:35.037129 3229 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d76778f1-83ce-4e9e-8d18-b99aeac99167-node-certs" (OuterVolumeSpecName: "node-certs") pod "d76778f1-83ce-4e9e-8d18-b99aeac99167" (UID: "d76778f1-83ce-4e9e-8d18-b99aeac99167"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 30 03:29:35.040647 kubelet[3229]: I0430 03:29:35.040405 3229 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d76778f1-83ce-4e9e-8d18-b99aeac99167-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "d76778f1-83ce-4e9e-8d18-b99aeac99167" (UID: "d76778f1-83ce-4e9e-8d18-b99aeac99167"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 30 03:29:35.041786 systemd[1]: var-lib-kubelet-pods-d76778f1\x2d83ce\x2d4e9e\x2d8d18\x2db99aeac99167-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. Apr 30 03:29:35.041911 systemd[1]: var-lib-kubelet-pods-d76778f1\x2d83ce\x2d4e9e\x2d8d18\x2db99aeac99167-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwpnh7.mount: Deactivated successfully. 
Apr 30 03:29:35.041973 systemd[1]: var-lib-kubelet-pods-d76778f1\x2d83ce\x2d4e9e\x2d8d18\x2db99aeac99167-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Apr 30 03:29:35.047940 kubelet[3229]: I0430 03:29:35.047717 3229 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d76778f1-83ce-4e9e-8d18-b99aeac99167-kube-api-access-wpnh7" (OuterVolumeSpecName: "kube-api-access-wpnh7") pod "d76778f1-83ce-4e9e-8d18-b99aeac99167" (UID: "d76778f1-83ce-4e9e-8d18-b99aeac99167"). InnerVolumeSpecName "kube-api-access-wpnh7". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 03:29:35.122413 kubelet[3229]: I0430 03:29:35.122361 3229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/865ac12a-6a99-43ed-939d-62a263eb4a02-xtables-lock\") pod \"calico-node-zg2qc\" (UID: \"865ac12a-6a99-43ed-939d-62a263eb4a02\") " pod="calico-system/calico-node-zg2qc" Apr 30 03:29:35.122413 kubelet[3229]: I0430 03:29:35.122405 3229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/865ac12a-6a99-43ed-939d-62a263eb4a02-var-run-calico\") pod \"calico-node-zg2qc\" (UID: \"865ac12a-6a99-43ed-939d-62a263eb4a02\") " pod="calico-system/calico-node-zg2qc" Apr 30 03:29:35.122413 kubelet[3229]: I0430 03:29:35.122425 3229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/865ac12a-6a99-43ed-939d-62a263eb4a02-lib-modules\") pod \"calico-node-zg2qc\" (UID: \"865ac12a-6a99-43ed-939d-62a263eb4a02\") " pod="calico-system/calico-node-zg2qc" Apr 30 03:29:35.122625 kubelet[3229]: I0430 03:29:35.122443 3229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/865ac12a-6a99-43ed-939d-62a263eb4a02-cni-net-dir\") pod \"calico-node-zg2qc\" (UID: \"865ac12a-6a99-43ed-939d-62a263eb4a02\") " pod="calico-system/calico-node-zg2qc" Apr 30 03:29:35.122625 kubelet[3229]: I0430 03:29:35.122464 3229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/865ac12a-6a99-43ed-939d-62a263eb4a02-tigera-ca-bundle\") pod \"calico-node-zg2qc\" (UID: \"865ac12a-6a99-43ed-939d-62a263eb4a02\") " pod="calico-system/calico-node-zg2qc" Apr 30 03:29:35.122625 kubelet[3229]: I0430 03:29:35.122481 3229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/865ac12a-6a99-43ed-939d-62a263eb4a02-node-certs\") pod \"calico-node-zg2qc\" (UID: \"865ac12a-6a99-43ed-939d-62a263eb4a02\") " pod="calico-system/calico-node-zg2qc" Apr 30 03:29:35.122625 kubelet[3229]: I0430 03:29:35.122499 3229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/865ac12a-6a99-43ed-939d-62a263eb4a02-flexvol-driver-host\") pod \"calico-node-zg2qc\" (UID: \"865ac12a-6a99-43ed-939d-62a263eb4a02\") " pod="calico-system/calico-node-zg2qc" Apr 30 03:29:35.122625 kubelet[3229]: I0430 03:29:35.122520 3229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: 
\"kubernetes.io/host-path/865ac12a-6a99-43ed-939d-62a263eb4a02-var-lib-calico\") pod \"calico-node-zg2qc\" (UID: \"865ac12a-6a99-43ed-939d-62a263eb4a02\") " pod="calico-system/calico-node-zg2qc" Apr 30 03:29:35.122784 kubelet[3229]: I0430 03:29:35.122536 3229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/865ac12a-6a99-43ed-939d-62a263eb4a02-cni-bin-dir\") pod \"calico-node-zg2qc\" (UID: \"865ac12a-6a99-43ed-939d-62a263eb4a02\") " pod="calico-system/calico-node-zg2qc" Apr 30 03:29:35.122784 kubelet[3229]: I0430 03:29:35.122552 3229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/865ac12a-6a99-43ed-939d-62a263eb4a02-cni-log-dir\") pod \"calico-node-zg2qc\" (UID: \"865ac12a-6a99-43ed-939d-62a263eb4a02\") " pod="calico-system/calico-node-zg2qc" Apr 30 03:29:35.122784 kubelet[3229]: I0430 03:29:35.122568 3229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzw7r\" (UniqueName: \"kubernetes.io/projected/865ac12a-6a99-43ed-939d-62a263eb4a02-kube-api-access-dzw7r\") pod \"calico-node-zg2qc\" (UID: \"865ac12a-6a99-43ed-939d-62a263eb4a02\") " pod="calico-system/calico-node-zg2qc" Apr 30 03:29:35.122784 kubelet[3229]: I0430 03:29:35.122591 3229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/865ac12a-6a99-43ed-939d-62a263eb4a02-policysync\") pod \"calico-node-zg2qc\" (UID: \"865ac12a-6a99-43ed-939d-62a263eb4a02\") " pod="calico-system/calico-node-zg2qc" Apr 30 03:29:35.122784 kubelet[3229]: I0430 03:29:35.122611 3229 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d76778f1-83ce-4e9e-8d18-b99aeac99167-lib-modules\") on node \"ip-172-31-16-5\" DevicePath \"\"" Apr 30 03:29:35.122784 kubelet[3229]: I0430 03:29:35.122622 3229 reconciler_common.go:289] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d76778f1-83ce-4e9e-8d18-b99aeac99167-policysync\") on node \"ip-172-31-16-5\" DevicePath \"\"" Apr 30 03:29:35.122944 kubelet[3229]: I0430 03:29:35.122631 3229 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-wpnh7\" (UniqueName: \"kubernetes.io/projected/d76778f1-83ce-4e9e-8d18-b99aeac99167-kube-api-access-wpnh7\") on node \"ip-172-31-16-5\" DevicePath \"\"" Apr 30 03:29:35.122944 kubelet[3229]: I0430 03:29:35.122640 3229 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d76778f1-83ce-4e9e-8d18-b99aeac99167-tigera-ca-bundle\") on node \"ip-172-31-16-5\" DevicePath \"\"" Apr 30 03:29:35.122944 kubelet[3229]: I0430 03:29:35.122648 3229 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d76778f1-83ce-4e9e-8d18-b99aeac99167-xtables-lock\") on node \"ip-172-31-16-5\" DevicePath \"\"" Apr 30 03:29:35.122944 kubelet[3229]: I0430 03:29:35.122655 3229 reconciler_common.go:289] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d76778f1-83ce-4e9e-8d18-b99aeac99167-var-run-calico\") on node \"ip-172-31-16-5\" DevicePath \"\"" Apr 30 03:29:35.122944 kubelet[3229]: I0430 03:29:35.122663 3229 reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: 
\"kubernetes.io/host-path/d76778f1-83ce-4e9e-8d18-b99aeac99167-var-lib-calico\") on node \"ip-172-31-16-5\" DevicePath \"\"" Apr 30 03:29:35.122944 kubelet[3229]: I0430 03:29:35.122670 3229 reconciler_common.go:289] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d76778f1-83ce-4e9e-8d18-b99aeac99167-node-certs\") on node \"ip-172-31-16-5\" DevicePath \"\"" Apr 30 03:29:35.122944 kubelet[3229]: I0430 03:29:35.122678 3229 reconciler_common.go:289] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d76778f1-83ce-4e9e-8d18-b99aeac99167-cni-log-dir\") on node \"ip-172-31-16-5\" DevicePath \"\"" Apr 30 03:29:35.122944 kubelet[3229]: I0430 03:29:35.122685 3229 reconciler_common.go:289] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d76778f1-83ce-4e9e-8d18-b99aeac99167-cni-net-dir\") on node \"ip-172-31-16-5\" DevicePath \"\"" Apr 30 03:29:35.123165 kubelet[3229]: I0430 03:29:35.122692 3229 reconciler_common.go:289] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d76778f1-83ce-4e9e-8d18-b99aeac99167-cni-bin-dir\") on node \"ip-172-31-16-5\" DevicePath \"\"" Apr 30 03:29:35.123165 kubelet[3229]: I0430 03:29:35.122700 3229 reconciler_common.go:289] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d76778f1-83ce-4e9e-8d18-b99aeac99167-flexvol-driver-host\") on node \"ip-172-31-16-5\" DevicePath \"\"" Apr 30 03:29:35.289459 containerd[1987]: time="2025-04-30T03:29:35.289004801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zg2qc,Uid:865ac12a-6a99-43ed-939d-62a263eb4a02,Namespace:calico-system,Attempt:0,}" Apr 30 03:29:35.317141 containerd[1987]: time="2025-04-30T03:29:35.316653830Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:35.317141 containerd[1987]: time="2025-04-30T03:29:35.316730753Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:35.317141 containerd[1987]: time="2025-04-30T03:29:35.316752264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:35.317141 containerd[1987]: time="2025-04-30T03:29:35.316865723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:35.339291 systemd[1]: Started cri-containerd-8ec4397a2176b13f5b932200b53e3bbe33764fd14173825925afdbefc549620a.scope - libcontainer container 8ec4397a2176b13f5b932200b53e3bbe33764fd14173825925afdbefc549620a. 
Apr 30 03:29:35.386258 containerd[1987]: time="2025-04-30T03:29:35.386094500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zg2qc,Uid:865ac12a-6a99-43ed-939d-62a263eb4a02,Namespace:calico-system,Attempt:0,} returns sandbox id \"8ec4397a2176b13f5b932200b53e3bbe33764fd14173825925afdbefc549620a\"" Apr 30 03:29:35.394606 containerd[1987]: time="2025-04-30T03:29:35.394244518Z" level=info msg="CreateContainer within sandbox \"8ec4397a2176b13f5b932200b53e3bbe33764fd14173825925afdbefc549620a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 30 03:29:35.399255 kubelet[3229]: I0430 03:29:35.398190 3229 scope.go:117] "RemoveContainer" containerID="73fa3c73dd1638a5c561427573ab0d26c939750c9e6ffa8b0a9cc173e482822c" Apr 30 03:29:35.400354 containerd[1987]: time="2025-04-30T03:29:35.400320944Z" level=info msg="RemoveContainer for \"73fa3c73dd1638a5c561427573ab0d26c939750c9e6ffa8b0a9cc173e482822c\"" Apr 30 03:29:35.409369 containerd[1987]: time="2025-04-30T03:29:35.409311168Z" level=info msg="RemoveContainer for \"73fa3c73dd1638a5c561427573ab0d26c939750c9e6ffa8b0a9cc173e482822c\" returns successfully" Apr 30 03:29:35.410576 kubelet[3229]: I0430 03:29:35.410505 3229 scope.go:117] "RemoveContainer" containerID="885c5135d32e361bc33cdee6ebb4cc70cde58f1c0b3155c490acb84d873b70b1" Apr 30 03:29:35.414303 containerd[1987]: time="2025-04-30T03:29:35.413439695Z" level=info msg="RemoveContainer for \"885c5135d32e361bc33cdee6ebb4cc70cde58f1c0b3155c490acb84d873b70b1\"" Apr 30 03:29:35.414673 systemd[1]: Removed slice kubepods-besteffort-podd76778f1_83ce_4e9e_8d18_b99aeac99167.slice - libcontainer container kubepods-besteffort-podd76778f1_83ce_4e9e_8d18_b99aeac99167.slice. Apr 30 03:29:35.419163 containerd[1987]: time="2025-04-30T03:29:35.419130270Z" level=info msg="RemoveContainer for \"885c5135d32e361bc33cdee6ebb4cc70cde58f1c0b3155c490acb84d873b70b1\" returns successfully" Apr 30 03:29:35.419938 kubelet[3229]: I0430 03:29:35.419907 3229 scope.go:117] "RemoveContainer" containerID="5c8471e5697ad99bafa14d45de8a2211fe724878f85640e9fd353cbb30710225" Apr 30 03:29:35.421709 containerd[1987]: time="2025-04-30T03:29:35.421672034Z" level=info msg="RemoveContainer for \"5c8471e5697ad99bafa14d45de8a2211fe724878f85640e9fd353cbb30710225\"" Apr 30 03:29:35.426746 containerd[1987]: time="2025-04-30T03:29:35.426677603Z" level=info msg="RemoveContainer for \"5c8471e5697ad99bafa14d45de8a2211fe724878f85640e9fd353cbb30710225\" returns successfully" Apr 30 03:29:35.431379 containerd[1987]: time="2025-04-30T03:29:35.431330040Z" level=info msg="CreateContainer within sandbox \"8ec4397a2176b13f5b932200b53e3bbe33764fd14173825925afdbefc549620a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"28aa5451342b942383291def29b5fdb87d86a6046d16257815b88eac48766863\"" Apr 30 03:29:35.434112 containerd[1987]: time="2025-04-30T03:29:35.433723822Z" level=info msg="StartContainer for \"28aa5451342b942383291def29b5fdb87d86a6046d16257815b88eac48766863\"" Apr 30 03:29:35.481620 systemd[1]: Started cri-containerd-28aa5451342b942383291def29b5fdb87d86a6046d16257815b88eac48766863.scope - libcontainer container 28aa5451342b942383291def29b5fdb87d86a6046d16257815b88eac48766863. 
Apr 30 03:29:35.512755 containerd[1987]: time="2025-04-30T03:29:35.512700171Z" level=info msg="StartContainer for \"28aa5451342b942383291def29b5fdb87d86a6046d16257815b88eac48766863\" returns successfully" Apr 30 03:29:35.611525 systemd[1]: cri-containerd-28aa5451342b942383291def29b5fdb87d86a6046d16257815b88eac48766863.scope: Deactivated successfully. Apr 30 03:29:35.654903 containerd[1987]: time="2025-04-30T03:29:35.654833928Z" level=info msg="shim disconnected" id=28aa5451342b942383291def29b5fdb87d86a6046d16257815b88eac48766863 namespace=k8s.io Apr 30 03:29:35.654903 containerd[1987]: time="2025-04-30T03:29:35.654901257Z" level=warning msg="cleaning up after shim disconnected" id=28aa5451342b942383291def29b5fdb87d86a6046d16257815b88eac48766863 namespace=k8s.io Apr 30 03:29:35.655432 containerd[1987]: time="2025-04-30T03:29:35.654912286Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:29:36.048522 kubelet[3229]: I0430 03:29:36.048360 3229 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d76778f1-83ce-4e9e-8d18-b99aeac99167" path="/var/lib/kubelet/pods/d76778f1-83ce-4e9e-8d18-b99aeac99167/volumes" Apr 30 03:29:36.406138 containerd[1987]: time="2025-04-30T03:29:36.405201305Z" level=info msg="CreateContainer within sandbox \"8ec4397a2176b13f5b932200b53e3bbe33764fd14173825925afdbefc549620a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 30 03:29:36.428181 containerd[1987]: time="2025-04-30T03:29:36.427540668Z" level=info msg="CreateContainer within sandbox \"8ec4397a2176b13f5b932200b53e3bbe33764fd14173825925afdbefc549620a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"436cf107c2ffe53bde19324e205b3989e0e1f2692c4ec1e6557887d68f6cd3b4\"" Apr 30 03:29:36.431738 containerd[1987]: time="2025-04-30T03:29:36.431261352Z" level=info msg="StartContainer for \"436cf107c2ffe53bde19324e205b3989e0e1f2692c4ec1e6557887d68f6cd3b4\"" Apr 30 03:29:36.483913 systemd[1]: Started cri-containerd-436cf107c2ffe53bde19324e205b3989e0e1f2692c4ec1e6557887d68f6cd3b4.scope - libcontainer container 436cf107c2ffe53bde19324e205b3989e0e1f2692c4ec1e6557887d68f6cd3b4. 
Apr 30 03:29:36.522228 containerd[1987]: time="2025-04-30T03:29:36.522034138Z" level=info msg="StartContainer for \"436cf107c2ffe53bde19324e205b3989e0e1f2692c4ec1e6557887d68f6cd3b4\" returns successfully" Apr 30 03:29:37.053535 containerd[1987]: time="2025-04-30T03:29:37.053223298Z" level=info msg="StopPodSandbox for \"d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0\"" Apr 30 03:29:37.117056 containerd[1987]: time="2025-04-30T03:29:37.116979969Z" level=error msg="StopPodSandbox for \"d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0\" failed" error="failed to destroy network for sandbox \"d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:37.117706 kubelet[3229]: E0430 03:29:37.117527 3229 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0" Apr 30 03:29:37.117706 kubelet[3229]: E0430 03:29:37.117585 3229 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0"} Apr 30 03:29:37.117706 kubelet[3229]: E0430 03:29:37.117638 3229 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"55fd6c70-afa8-486e-b82a-44a54b2e3758\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:29:37.117706 kubelet[3229]: E0430 03:29:37.117668 3229 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"55fd6c70-afa8-486e-b82a-44a54b2e3758\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-w297d" podUID="55fd6c70-afa8-486e-b82a-44a54b2e3758" Apr 30 03:29:38.096973 systemd[1]: Started sshd@18-172.31.16.5:22-147.75.109.163:37922.service - OpenSSH per-connection server daemon (147.75.109.163:37922). Apr 30 03:29:38.338054 systemd[1]: cri-containerd-436cf107c2ffe53bde19324e205b3989e0e1f2692c4ec1e6557887d68f6cd3b4.scope: Deactivated successfully. Apr 30 03:29:38.367192 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-436cf107c2ffe53bde19324e205b3989e0e1f2692c4ec1e6557887d68f6cd3b4-rootfs.mount: Deactivated successfully. 
Apr 30 03:29:38.377203 containerd[1987]: time="2025-04-30T03:29:38.377119086Z" level=info msg="shim disconnected" id=436cf107c2ffe53bde19324e205b3989e0e1f2692c4ec1e6557887d68f6cd3b4 namespace=k8s.io Apr 30 03:29:38.377203 containerd[1987]: time="2025-04-30T03:29:38.377171082Z" level=warning msg="cleaning up after shim disconnected" id=436cf107c2ffe53bde19324e205b3989e0e1f2692c4ec1e6557887d68f6cd3b4 namespace=k8s.io Apr 30 03:29:38.377203 containerd[1987]: time="2025-04-30T03:29:38.377180544Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:29:38.406496 sshd[5404]: Accepted publickey for core from 147.75.109.163 port 37922 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:29:38.408654 sshd[5404]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:29:38.414582 systemd-logind[1956]: New session 19 of user core. Apr 30 03:29:38.419230 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 30 03:29:38.448836 containerd[1987]: time="2025-04-30T03:29:38.448656349Z" level=info msg="CreateContainer within sandbox \"8ec4397a2176b13f5b932200b53e3bbe33764fd14173825925afdbefc549620a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 30 03:29:38.483982 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2664641034.mount: Deactivated successfully. Apr 30 03:29:38.493396 containerd[1987]: time="2025-04-30T03:29:38.492729082Z" level=info msg="CreateContainer within sandbox \"8ec4397a2176b13f5b932200b53e3bbe33764fd14173825925afdbefc549620a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d8ea13fd8e46c1ba6c6ae18b7716885c7cfb2f89677e66aa4e1f1e8e2f4743ff\"" Apr 30 03:29:38.495061 containerd[1987]: time="2025-04-30T03:29:38.493801562Z" level=info msg="StartContainer for \"d8ea13fd8e46c1ba6c6ae18b7716885c7cfb2f89677e66aa4e1f1e8e2f4743ff\"" Apr 30 03:29:38.532152 systemd[1]: Started cri-containerd-d8ea13fd8e46c1ba6c6ae18b7716885c7cfb2f89677e66aa4e1f1e8e2f4743ff.scope - libcontainer container d8ea13fd8e46c1ba6c6ae18b7716885c7cfb2f89677e66aa4e1f1e8e2f4743ff. Apr 30 03:29:38.673048 containerd[1987]: time="2025-04-30T03:29:38.672906598Z" level=info msg="StartContainer for \"d8ea13fd8e46c1ba6c6ae18b7716885c7cfb2f89677e66aa4e1f1e8e2f4743ff\" returns successfully" Apr 30 03:29:38.982303 sshd[5404]: pam_unix(sshd:session): session closed for user core Apr 30 03:29:38.987792 systemd-logind[1956]: Session 19 logged out. Waiting for processes to exit. Apr 30 03:29:38.989217 systemd[1]: sshd@18-172.31.16.5:22-147.75.109.163:37922.service: Deactivated successfully. Apr 30 03:29:38.992821 systemd[1]: session-19.scope: Deactivated successfully. Apr 30 03:29:38.994298 systemd-logind[1956]: Removed session 19. Apr 30 03:29:39.470533 kubelet[3229]: I0430 03:29:39.466330 3229 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-zg2qc" podStartSLOduration=5.466302464 podStartE2EDuration="5.466302464s" podCreationTimestamp="2025-04-30 03:29:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:29:39.465766302 +0000 UTC m=+89.548195010" watchObservedRunningTime="2025-04-30 03:29:39.466302464 +0000 UTC m=+89.548731173" Apr 30 03:29:39.501559 systemd[1]: run-containerd-runc-k8s.io-d8ea13fd8e46c1ba6c6ae18b7716885c7cfb2f89677e66aa4e1f1e8e2f4743ff-runc.eNtTsv.mount: Deactivated successfully. 
Apr 30 03:29:40.858045 kernel: bpftool[5670]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 30 03:29:41.047998 containerd[1987]: time="2025-04-30T03:29:41.047951397Z" level=info msg="StopPodSandbox for \"4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e\"" Apr 30 03:29:41.152069 systemd-networkd[1891]: vxlan.calico: Link UP Apr 30 03:29:41.152081 systemd-networkd[1891]: vxlan.calico: Gained carrier Apr 30 03:29:41.158165 (udev-worker)[5706]: Network interface NamePolicy= disabled on kernel command line. Apr 30 03:29:41.208635 (udev-worker)[5720]: Network interface NamePolicy= disabled on kernel command line. Apr 30 03:29:41.210529 (udev-worker)[5704]: Network interface NamePolicy= disabled on kernel command line. Apr 30 03:29:41.371751 containerd[1987]: 2025-04-30 03:29:41.319 [INFO][5701] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e" Apr 30 03:29:41.371751 containerd[1987]: 2025-04-30 03:29:41.319 [INFO][5701] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e" iface="eth0" netns="/var/run/netns/cni-8f627ef2-83a2-7f27-1892-bdd9bba45da4" Apr 30 03:29:41.371751 containerd[1987]: 2025-04-30 03:29:41.321 [INFO][5701] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e" iface="eth0" netns="/var/run/netns/cni-8f627ef2-83a2-7f27-1892-bdd9bba45da4" Apr 30 03:29:41.371751 containerd[1987]: 2025-04-30 03:29:41.322 [INFO][5701] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e" iface="eth0" netns="/var/run/netns/cni-8f627ef2-83a2-7f27-1892-bdd9bba45da4" Apr 30 03:29:41.371751 containerd[1987]: 2025-04-30 03:29:41.322 [INFO][5701] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e" Apr 30 03:29:41.371751 containerd[1987]: 2025-04-30 03:29:41.322 [INFO][5701] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e" Apr 30 03:29:41.371751 containerd[1987]: 2025-04-30 03:29:41.355 [INFO][5731] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e" HandleID="k8s-pod-network.4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e" Workload="ip--172--31--16--5-k8s-calico--apiserver--6c489dd647--9ccl9-eth0" Apr 30 03:29:41.371751 containerd[1987]: 2025-04-30 03:29:41.355 [INFO][5731] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:41.371751 containerd[1987]: 2025-04-30 03:29:41.355 [INFO][5731] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:41.371751 containerd[1987]: 2025-04-30 03:29:41.363 [WARNING][5731] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e" HandleID="k8s-pod-network.4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e" Workload="ip--172--31--16--5-k8s-calico--apiserver--6c489dd647--9ccl9-eth0" Apr 30 03:29:41.371751 containerd[1987]: 2025-04-30 03:29:41.363 [INFO][5731] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e" HandleID="k8s-pod-network.4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e" Workload="ip--172--31--16--5-k8s-calico--apiserver--6c489dd647--9ccl9-eth0" Apr 30 03:29:41.371751 containerd[1987]: 2025-04-30 03:29:41.365 [INFO][5731] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:41.371751 containerd[1987]: 2025-04-30 03:29:41.367 [INFO][5701] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e" Apr 30 03:29:41.373263 containerd[1987]: time="2025-04-30T03:29:41.372115719Z" level=info msg="TearDown network for sandbox \"4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e\" successfully" Apr 30 03:29:41.373263 containerd[1987]: time="2025-04-30T03:29:41.372168782Z" level=info msg="StopPodSandbox for \"4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e\" returns successfully" Apr 30 03:29:41.376390 containerd[1987]: time="2025-04-30T03:29:41.376342739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c489dd647-9ccl9,Uid:09ed9b89-1a51-4296-b830-803e57059495,Namespace:calico-apiserver,Attempt:1,}" Apr 30 03:29:41.380945 systemd[1]: run-netns-cni\x2d8f627ef2\x2d83a2\x2d7f27\x2d1892\x2dbdd9bba45da4.mount: Deactivated successfully. Apr 30 03:29:41.581680 systemd-networkd[1891]: calid8bd63426f5: Link UP Apr 30 03:29:41.583516 systemd-networkd[1891]: calid8bd63426f5: Gained carrier Apr 30 03:29:41.584836 (udev-worker)[5725]: Network interface NamePolicy= disabled on kernel command line. 
Apr 30 03:29:41.626038 containerd[1987]: 2025-04-30 03:29:41.462 [INFO][5741] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--5-k8s-calico--apiserver--6c489dd647--9ccl9-eth0 calico-apiserver-6c489dd647- calico-apiserver 09ed9b89-1a51-4296-b830-803e57059495 1097 0 2025-04-30 03:28:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c489dd647 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-16-5 calico-apiserver-6c489dd647-9ccl9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid8bd63426f5 [] []}} ContainerID="00eb9537b98d1ee011a7789b62ecae11f57793b737d436db758b88d79e6e396c" Namespace="calico-apiserver" Pod="calico-apiserver-6c489dd647-9ccl9" WorkloadEndpoint="ip--172--31--16--5-k8s-calico--apiserver--6c489dd647--9ccl9-" Apr 30 03:29:41.626038 containerd[1987]: 2025-04-30 03:29:41.462 [INFO][5741] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="00eb9537b98d1ee011a7789b62ecae11f57793b737d436db758b88d79e6e396c" Namespace="calico-apiserver" Pod="calico-apiserver-6c489dd647-9ccl9" WorkloadEndpoint="ip--172--31--16--5-k8s-calico--apiserver--6c489dd647--9ccl9-eth0" Apr 30 03:29:41.626038 containerd[1987]: 2025-04-30 03:29:41.519 [INFO][5752] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="00eb9537b98d1ee011a7789b62ecae11f57793b737d436db758b88d79e6e396c" HandleID="k8s-pod-network.00eb9537b98d1ee011a7789b62ecae11f57793b737d436db758b88d79e6e396c" Workload="ip--172--31--16--5-k8s-calico--apiserver--6c489dd647--9ccl9-eth0" Apr 30 03:29:41.626038 containerd[1987]: 2025-04-30 03:29:41.532 [INFO][5752] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="00eb9537b98d1ee011a7789b62ecae11f57793b737d436db758b88d79e6e396c" HandleID="k8s-pod-network.00eb9537b98d1ee011a7789b62ecae11f57793b737d436db758b88d79e6e396c" Workload="ip--172--31--16--5-k8s-calico--apiserver--6c489dd647--9ccl9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f0ac0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-16-5", "pod":"calico-apiserver-6c489dd647-9ccl9", "timestamp":"2025-04-30 03:29:41.519396281 +0000 UTC"}, Hostname:"ip-172-31-16-5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:29:41.626038 containerd[1987]: 2025-04-30 03:29:41.532 [INFO][5752] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:41.626038 containerd[1987]: 2025-04-30 03:29:41.532 [INFO][5752] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:29:41.626038 containerd[1987]: 2025-04-30 03:29:41.532 [INFO][5752] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-5' Apr 30 03:29:41.626038 containerd[1987]: 2025-04-30 03:29:41.535 [INFO][5752] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.00eb9537b98d1ee011a7789b62ecae11f57793b737d436db758b88d79e6e396c" host="ip-172-31-16-5" Apr 30 03:29:41.626038 containerd[1987]: 2025-04-30 03:29:41.540 [INFO][5752] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-16-5" Apr 30 03:29:41.626038 containerd[1987]: 2025-04-30 03:29:41.545 [INFO][5752] ipam/ipam.go 489: Trying affinity for 192.168.72.128/26 host="ip-172-31-16-5" Apr 30 03:29:41.626038 containerd[1987]: 2025-04-30 03:29:41.548 [INFO][5752] ipam/ipam.go 155: Attempting to load block cidr=192.168.72.128/26 host="ip-172-31-16-5" Apr 30 03:29:41.626038 containerd[1987]: 2025-04-30 03:29:41.551 [INFO][5752] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.72.128/26 host="ip-172-31-16-5" Apr 30 03:29:41.626038 containerd[1987]: 2025-04-30 03:29:41.551 [INFO][5752] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.72.128/26 handle="k8s-pod-network.00eb9537b98d1ee011a7789b62ecae11f57793b737d436db758b88d79e6e396c" host="ip-172-31-16-5" Apr 30 03:29:41.626038 containerd[1987]: 2025-04-30 03:29:41.555 [INFO][5752] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.00eb9537b98d1ee011a7789b62ecae11f57793b737d436db758b88d79e6e396c Apr 30 03:29:41.626038 containerd[1987]: 2025-04-30 03:29:41.563 [INFO][5752] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.72.128/26 handle="k8s-pod-network.00eb9537b98d1ee011a7789b62ecae11f57793b737d436db758b88d79e6e396c" host="ip-172-31-16-5" Apr 30 03:29:41.626038 containerd[1987]: 2025-04-30 03:29:41.572 [INFO][5752] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.72.129/26] block=192.168.72.128/26 handle="k8s-pod-network.00eb9537b98d1ee011a7789b62ecae11f57793b737d436db758b88d79e6e396c" host="ip-172-31-16-5" Apr 30 03:29:41.626038 containerd[1987]: 2025-04-30 03:29:41.573 [INFO][5752] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.72.129/26] handle="k8s-pod-network.00eb9537b98d1ee011a7789b62ecae11f57793b737d436db758b88d79e6e396c" host="ip-172-31-16-5" Apr 30 03:29:41.626038 containerd[1987]: 2025-04-30 03:29:41.573 [INFO][5752] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 03:29:41.626038 containerd[1987]: 2025-04-30 03:29:41.573 [INFO][5752] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.72.129/26] IPv6=[] ContainerID="00eb9537b98d1ee011a7789b62ecae11f57793b737d436db758b88d79e6e396c" HandleID="k8s-pod-network.00eb9537b98d1ee011a7789b62ecae11f57793b737d436db758b88d79e6e396c" Workload="ip--172--31--16--5-k8s-calico--apiserver--6c489dd647--9ccl9-eth0" Apr 30 03:29:41.630827 containerd[1987]: 2025-04-30 03:29:41.576 [INFO][5741] cni-plugin/k8s.go 386: Populated endpoint ContainerID="00eb9537b98d1ee011a7789b62ecae11f57793b737d436db758b88d79e6e396c" Namespace="calico-apiserver" Pod="calico-apiserver-6c489dd647-9ccl9" WorkloadEndpoint="ip--172--31--16--5-k8s-calico--apiserver--6c489dd647--9ccl9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--5-k8s-calico--apiserver--6c489dd647--9ccl9-eth0", GenerateName:"calico-apiserver-6c489dd647-", Namespace:"calico-apiserver", SelfLink:"", UID:"09ed9b89-1a51-4296-b830-803e57059495", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 28, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c489dd647", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-5", ContainerID:"", Pod:"calico-apiserver-6c489dd647-9ccl9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.72.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid8bd63426f5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:41.630827 containerd[1987]: 2025-04-30 03:29:41.576 [INFO][5741] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.72.129/32] ContainerID="00eb9537b98d1ee011a7789b62ecae11f57793b737d436db758b88d79e6e396c" Namespace="calico-apiserver" Pod="calico-apiserver-6c489dd647-9ccl9" WorkloadEndpoint="ip--172--31--16--5-k8s-calico--apiserver--6c489dd647--9ccl9-eth0" Apr 30 03:29:41.630827 containerd[1987]: 2025-04-30 03:29:41.577 [INFO][5741] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid8bd63426f5 ContainerID="00eb9537b98d1ee011a7789b62ecae11f57793b737d436db758b88d79e6e396c" Namespace="calico-apiserver" Pod="calico-apiserver-6c489dd647-9ccl9" WorkloadEndpoint="ip--172--31--16--5-k8s-calico--apiserver--6c489dd647--9ccl9-eth0" Apr 30 03:29:41.630827 containerd[1987]: 2025-04-30 03:29:41.582 [INFO][5741] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="00eb9537b98d1ee011a7789b62ecae11f57793b737d436db758b88d79e6e396c" Namespace="calico-apiserver" Pod="calico-apiserver-6c489dd647-9ccl9" WorkloadEndpoint="ip--172--31--16--5-k8s-calico--apiserver--6c489dd647--9ccl9-eth0" Apr 30 03:29:41.630827 containerd[1987]: 2025-04-30 03:29:41.584 [INFO][5741] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="00eb9537b98d1ee011a7789b62ecae11f57793b737d436db758b88d79e6e396c" Namespace="calico-apiserver" Pod="calico-apiserver-6c489dd647-9ccl9" WorkloadEndpoint="ip--172--31--16--5-k8s-calico--apiserver--6c489dd647--9ccl9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--5-k8s-calico--apiserver--6c489dd647--9ccl9-eth0", GenerateName:"calico-apiserver-6c489dd647-", Namespace:"calico-apiserver", SelfLink:"", UID:"09ed9b89-1a51-4296-b830-803e57059495", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 28, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c489dd647", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-5", ContainerID:"00eb9537b98d1ee011a7789b62ecae11f57793b737d436db758b88d79e6e396c", Pod:"calico-apiserver-6c489dd647-9ccl9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.72.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid8bd63426f5", MAC:"2a:ee:a2:ad:34:6e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:41.630827 containerd[1987]: 2025-04-30 03:29:41.608 [INFO][5741] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="00eb9537b98d1ee011a7789b62ecae11f57793b737d436db758b88d79e6e396c" Namespace="calico-apiserver" Pod="calico-apiserver-6c489dd647-9ccl9" WorkloadEndpoint="ip--172--31--16--5-k8s-calico--apiserver--6c489dd647--9ccl9-eth0" Apr 30 03:29:41.684738 containerd[1987]: time="2025-04-30T03:29:41.684147357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:41.684738 containerd[1987]: time="2025-04-30T03:29:41.684225425Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:41.684738 containerd[1987]: time="2025-04-30T03:29:41.684249988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:41.684738 containerd[1987]: time="2025-04-30T03:29:41.684358470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:41.740684 systemd[1]: Started cri-containerd-00eb9537b98d1ee011a7789b62ecae11f57793b737d436db758b88d79e6e396c.scope - libcontainer container 00eb9537b98d1ee011a7789b62ecae11f57793b737d436db758b88d79e6e396c. 
Apr 30 03:29:41.822614 containerd[1987]: time="2025-04-30T03:29:41.822572846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c489dd647-9ccl9,Uid:09ed9b89-1a51-4296-b830-803e57059495,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"00eb9537b98d1ee011a7789b62ecae11f57793b737d436db758b88d79e6e396c\"" Apr 30 03:29:41.825799 containerd[1987]: time="2025-04-30T03:29:41.825746502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 03:29:42.048132 containerd[1987]: time="2025-04-30T03:29:42.047840622Z" level=info msg="StopPodSandbox for \"95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364\"" Apr 30 03:29:42.050048 containerd[1987]: time="2025-04-30T03:29:42.048972043Z" level=info msg="StopPodSandbox for \"5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf\"" Apr 30 03:29:42.052979 containerd[1987]: time="2025-04-30T03:29:42.052933695Z" level=info msg="StopPodSandbox for \"263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20\"" Apr 30 03:29:42.244200 containerd[1987]: 2025-04-30 03:29:42.138 [INFO][5888] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20" Apr 30 03:29:42.244200 containerd[1987]: 2025-04-30 03:29:42.139 [INFO][5888] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20" iface="eth0" netns="/var/run/netns/cni-92dbaef8-c999-b55f-2b5b-33203d163e3c" Apr 30 03:29:42.244200 containerd[1987]: 2025-04-30 03:29:42.141 [INFO][5888] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20" iface="eth0" netns="/var/run/netns/cni-92dbaef8-c999-b55f-2b5b-33203d163e3c" Apr 30 03:29:42.244200 containerd[1987]: 2025-04-30 03:29:42.142 [INFO][5888] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20" iface="eth0" netns="/var/run/netns/cni-92dbaef8-c999-b55f-2b5b-33203d163e3c" Apr 30 03:29:42.244200 containerd[1987]: 2025-04-30 03:29:42.142 [INFO][5888] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20" Apr 30 03:29:42.244200 containerd[1987]: 2025-04-30 03:29:42.142 [INFO][5888] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20" Apr 30 03:29:42.244200 containerd[1987]: 2025-04-30 03:29:42.212 [INFO][5903] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20" HandleID="k8s-pod-network.263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20" Workload="ip--172--31--16--5-k8s-coredns--7db6d8ff4d--t7qqn-eth0" Apr 30 03:29:42.244200 containerd[1987]: 2025-04-30 03:29:42.213 [INFO][5903] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:42.244200 containerd[1987]: 2025-04-30 03:29:42.213 [INFO][5903] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:42.244200 containerd[1987]: 2025-04-30 03:29:42.226 [WARNING][5903] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20" HandleID="k8s-pod-network.263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20" Workload="ip--172--31--16--5-k8s-coredns--7db6d8ff4d--t7qqn-eth0" Apr 30 03:29:42.244200 containerd[1987]: 2025-04-30 03:29:42.226 [INFO][5903] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20" HandleID="k8s-pod-network.263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20" Workload="ip--172--31--16--5-k8s-coredns--7db6d8ff4d--t7qqn-eth0" Apr 30 03:29:42.244200 containerd[1987]: 2025-04-30 03:29:42.230 [INFO][5903] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:42.244200 containerd[1987]: 2025-04-30 03:29:42.240 [INFO][5888] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20" Apr 30 03:29:42.247476 containerd[1987]: time="2025-04-30T03:29:42.244361541Z" level=info msg="TearDown network for sandbox \"263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20\" successfully" Apr 30 03:29:42.247476 containerd[1987]: time="2025-04-30T03:29:42.244453993Z" level=info msg="StopPodSandbox for \"263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20\" returns successfully" Apr 30 03:29:42.247476 containerd[1987]: time="2025-04-30T03:29:42.245235783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-t7qqn,Uid:aa805bd1-beeb-4584-9d9d-3007469a5975,Namespace:kube-system,Attempt:1,}" Apr 30 03:29:42.252998 systemd[1]: run-netns-cni\x2d92dbaef8\x2dc999\x2db55f\x2d2b5b\x2d33203d163e3c.mount: Deactivated successfully. Apr 30 03:29:42.291071 containerd[1987]: 2025-04-30 03:29:42.165 [INFO][5883] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf" Apr 30 03:29:42.291071 containerd[1987]: 2025-04-30 03:29:42.166 [INFO][5883] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf" iface="eth0" netns="/var/run/netns/cni-fbe91b1e-b189-f299-6799-47b813d7e13f" Apr 30 03:29:42.291071 containerd[1987]: 2025-04-30 03:29:42.167 [INFO][5883] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf" iface="eth0" netns="/var/run/netns/cni-fbe91b1e-b189-f299-6799-47b813d7e13f" Apr 30 03:29:42.291071 containerd[1987]: 2025-04-30 03:29:42.168 [INFO][5883] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf" iface="eth0" netns="/var/run/netns/cni-fbe91b1e-b189-f299-6799-47b813d7e13f" Apr 30 03:29:42.291071 containerd[1987]: 2025-04-30 03:29:42.168 [INFO][5883] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf" Apr 30 03:29:42.291071 containerd[1987]: 2025-04-30 03:29:42.168 [INFO][5883] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf" Apr 30 03:29:42.291071 containerd[1987]: 2025-04-30 03:29:42.258 [INFO][5908] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf" HandleID="k8s-pod-network.5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf" Workload="ip--172--31--16--5-k8s-csi--node--driver--jbvr9-eth0" Apr 30 03:29:42.291071 containerd[1987]: 2025-04-30 03:29:42.259 [INFO][5908] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:42.291071 containerd[1987]: 2025-04-30 03:29:42.259 [INFO][5908] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:42.291071 containerd[1987]: 2025-04-30 03:29:42.270 [WARNING][5908] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf" HandleID="k8s-pod-network.5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf" Workload="ip--172--31--16--5-k8s-csi--node--driver--jbvr9-eth0" Apr 30 03:29:42.291071 containerd[1987]: 2025-04-30 03:29:42.270 [INFO][5908] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf" HandleID="k8s-pod-network.5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf" Workload="ip--172--31--16--5-k8s-csi--node--driver--jbvr9-eth0" Apr 30 03:29:42.291071 containerd[1987]: 2025-04-30 03:29:42.273 [INFO][5908] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:42.291071 containerd[1987]: 2025-04-30 03:29:42.284 [INFO][5883] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf" Apr 30 03:29:42.295984 systemd[1]: run-netns-cni\x2dfbe91b1e\x2db189\x2df299\x2d6799\x2d47b813d7e13f.mount: Deactivated successfully. Apr 30 03:29:42.297889 containerd[1987]: time="2025-04-30T03:29:42.297632367Z" level=info msg="TearDown network for sandbox \"5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf\" successfully" Apr 30 03:29:42.297889 containerd[1987]: time="2025-04-30T03:29:42.297672051Z" level=info msg="StopPodSandbox for \"5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf\" returns successfully" Apr 30 03:29:42.300989 containerd[1987]: time="2025-04-30T03:29:42.299603626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jbvr9,Uid:c125f289-79ba-4045-ac98-3376fc26a663,Namespace:calico-system,Attempt:1,}" Apr 30 03:29:42.332912 containerd[1987]: 2025-04-30 03:29:42.188 [INFO][5887] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364" Apr 30 03:29:42.332912 containerd[1987]: 2025-04-30 03:29:42.192 [INFO][5887] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364" iface="eth0" netns="/var/run/netns/cni-a73a43d8-1475-2639-5d9a-74302bae3f03" Apr 30 03:29:42.332912 containerd[1987]: 2025-04-30 03:29:42.192 [INFO][5887] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364" iface="eth0" netns="/var/run/netns/cni-a73a43d8-1475-2639-5d9a-74302bae3f03" Apr 30 03:29:42.332912 containerd[1987]: 2025-04-30 03:29:42.194 [INFO][5887] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364" iface="eth0" netns="/var/run/netns/cni-a73a43d8-1475-2639-5d9a-74302bae3f03" Apr 30 03:29:42.332912 containerd[1987]: 2025-04-30 03:29:42.194 [INFO][5887] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364" Apr 30 03:29:42.332912 containerd[1987]: 2025-04-30 03:29:42.194 [INFO][5887] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364" Apr 30 03:29:42.332912 containerd[1987]: 2025-04-30 03:29:42.290 [INFO][5914] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364" HandleID="k8s-pod-network.95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364" Workload="ip--172--31--16--5-k8s-calico--apiserver--6c489dd647--zhfwb-eth0" Apr 30 03:29:42.332912 containerd[1987]: 2025-04-30 03:29:42.292 [INFO][5914] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:42.332912 containerd[1987]: 2025-04-30 03:29:42.292 [INFO][5914] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:42.332912 containerd[1987]: 2025-04-30 03:29:42.311 [WARNING][5914] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364" HandleID="k8s-pod-network.95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364" Workload="ip--172--31--16--5-k8s-calico--apiserver--6c489dd647--zhfwb-eth0" Apr 30 03:29:42.332912 containerd[1987]: 2025-04-30 03:29:42.311 [INFO][5914] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364" HandleID="k8s-pod-network.95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364" Workload="ip--172--31--16--5-k8s-calico--apiserver--6c489dd647--zhfwb-eth0" Apr 30 03:29:42.332912 containerd[1987]: 2025-04-30 03:29:42.316 [INFO][5914] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:42.332912 containerd[1987]: 2025-04-30 03:29:42.324 [INFO][5887] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364" Apr 30 03:29:42.334175 containerd[1987]: time="2025-04-30T03:29:42.333533659Z" level=info msg="TearDown network for sandbox \"95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364\" successfully" Apr 30 03:29:42.334175 containerd[1987]: time="2025-04-30T03:29:42.333565244Z" level=info msg="StopPodSandbox for \"95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364\" returns successfully" Apr 30 03:29:42.334653 containerd[1987]: time="2025-04-30T03:29:42.334620483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c489dd647-zhfwb,Uid:2ca2a0d4-a8bb-4ffb-bf87-3cfe5b949605,Namespace:calico-apiserver,Attempt:1,}" Apr 30 03:29:42.577785 systemd-networkd[1891]: cali7b07a194832: Link UP Apr 30 03:29:42.581506 systemd-networkd[1891]: cali7b07a194832: Gained carrier Apr 30 03:29:42.616320 containerd[1987]: 2025-04-30 03:29:42.393 [INFO][5924] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--5-k8s-coredns--7db6d8ff4d--t7qqn-eth0 coredns-7db6d8ff4d- kube-system aa805bd1-beeb-4584-9d9d-3007469a5975 1112 0 2025-04-30 03:28:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-16-5 coredns-7db6d8ff4d-t7qqn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7b07a194832 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="6dc06a663b87e2222ceba8edf67a3fd6e037f50a396317ccd4ed05edfa5f88b6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-t7qqn" WorkloadEndpoint="ip--172--31--16--5-k8s-coredns--7db6d8ff4d--t7qqn-" Apr 30 03:29:42.616320 containerd[1987]: 2025-04-30 03:29:42.393 [INFO][5924] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6dc06a663b87e2222ceba8edf67a3fd6e037f50a396317ccd4ed05edfa5f88b6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-t7qqn" WorkloadEndpoint="ip--172--31--16--5-k8s-coredns--7db6d8ff4d--t7qqn-eth0" Apr 30 03:29:42.616320 containerd[1987]: 2025-04-30 03:29:42.476 [INFO][5963] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6dc06a663b87e2222ceba8edf67a3fd6e037f50a396317ccd4ed05edfa5f88b6" HandleID="k8s-pod-network.6dc06a663b87e2222ceba8edf67a3fd6e037f50a396317ccd4ed05edfa5f88b6" Workload="ip--172--31--16--5-k8s-coredns--7db6d8ff4d--t7qqn-eth0" Apr 30 03:29:42.616320 containerd[1987]: 2025-04-30 03:29:42.503 [INFO][5963] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6dc06a663b87e2222ceba8edf67a3fd6e037f50a396317ccd4ed05edfa5f88b6" HandleID="k8s-pod-network.6dc06a663b87e2222ceba8edf67a3fd6e037f50a396317ccd4ed05edfa5f88b6" Workload="ip--172--31--16--5-k8s-coredns--7db6d8ff4d--t7qqn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000293c80), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-16-5", "pod":"coredns-7db6d8ff4d-t7qqn", "timestamp":"2025-04-30 03:29:42.476284422 +0000 UTC"}, Hostname:"ip-172-31-16-5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:29:42.616320 containerd[1987]: 2025-04-30 03:29:42.503 [INFO][5963] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Apr 30 03:29:42.616320 containerd[1987]: 2025-04-30 03:29:42.503 [INFO][5963] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:42.616320 containerd[1987]: 2025-04-30 03:29:42.503 [INFO][5963] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-5' Apr 30 03:29:42.616320 containerd[1987]: 2025-04-30 03:29:42.506 [INFO][5963] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6dc06a663b87e2222ceba8edf67a3fd6e037f50a396317ccd4ed05edfa5f88b6" host="ip-172-31-16-5" Apr 30 03:29:42.616320 containerd[1987]: 2025-04-30 03:29:42.515 [INFO][5963] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-16-5" Apr 30 03:29:42.616320 containerd[1987]: 2025-04-30 03:29:42.521 [INFO][5963] ipam/ipam.go 489: Trying affinity for 192.168.72.128/26 host="ip-172-31-16-5" Apr 30 03:29:42.616320 containerd[1987]: 2025-04-30 03:29:42.527 [INFO][5963] ipam/ipam.go 155: Attempting to load block cidr=192.168.72.128/26 host="ip-172-31-16-5" Apr 30 03:29:42.616320 containerd[1987]: 2025-04-30 03:29:42.530 [INFO][5963] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.72.128/26 host="ip-172-31-16-5" Apr 30 03:29:42.616320 containerd[1987]: 2025-04-30 03:29:42.530 [INFO][5963] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.72.128/26 handle="k8s-pod-network.6dc06a663b87e2222ceba8edf67a3fd6e037f50a396317ccd4ed05edfa5f88b6" host="ip-172-31-16-5" Apr 30 03:29:42.616320 containerd[1987]: 2025-04-30 03:29:42.537 [INFO][5963] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6dc06a663b87e2222ceba8edf67a3fd6e037f50a396317ccd4ed05edfa5f88b6 Apr 30 03:29:42.616320 containerd[1987]: 2025-04-30 03:29:42.546 [INFO][5963] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.72.128/26 handle="k8s-pod-network.6dc06a663b87e2222ceba8edf67a3fd6e037f50a396317ccd4ed05edfa5f88b6" host="ip-172-31-16-5" Apr 30 03:29:42.616320 containerd[1987]: 2025-04-30 03:29:42.557 [INFO][5963] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.72.130/26] block=192.168.72.128/26 handle="k8s-pod-network.6dc06a663b87e2222ceba8edf67a3fd6e037f50a396317ccd4ed05edfa5f88b6" host="ip-172-31-16-5" Apr 30 03:29:42.616320 containerd[1987]: 2025-04-30 03:29:42.557 [INFO][5963] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.72.130/26] handle="k8s-pod-network.6dc06a663b87e2222ceba8edf67a3fd6e037f50a396317ccd4ed05edfa5f88b6" host="ip-172-31-16-5" Apr 30 03:29:42.616320 containerd[1987]: 2025-04-30 03:29:42.557 [INFO][5963] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 03:29:42.616320 containerd[1987]: 2025-04-30 03:29:42.557 [INFO][5963] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.72.130/26] IPv6=[] ContainerID="6dc06a663b87e2222ceba8edf67a3fd6e037f50a396317ccd4ed05edfa5f88b6" HandleID="k8s-pod-network.6dc06a663b87e2222ceba8edf67a3fd6e037f50a396317ccd4ed05edfa5f88b6" Workload="ip--172--31--16--5-k8s-coredns--7db6d8ff4d--t7qqn-eth0" Apr 30 03:29:42.618393 containerd[1987]: 2025-04-30 03:29:42.564 [INFO][5924] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6dc06a663b87e2222ceba8edf67a3fd6e037f50a396317ccd4ed05edfa5f88b6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-t7qqn" WorkloadEndpoint="ip--172--31--16--5-k8s-coredns--7db6d8ff4d--t7qqn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--5-k8s-coredns--7db6d8ff4d--t7qqn-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"aa805bd1-beeb-4584-9d9d-3007469a5975", ResourceVersion:"1112", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 28, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-5", ContainerID:"", Pod:"coredns-7db6d8ff4d-t7qqn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7b07a194832", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:42.618393 containerd[1987]: 2025-04-30 03:29:42.564 [INFO][5924] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.72.130/32] ContainerID="6dc06a663b87e2222ceba8edf67a3fd6e037f50a396317ccd4ed05edfa5f88b6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-t7qqn" WorkloadEndpoint="ip--172--31--16--5-k8s-coredns--7db6d8ff4d--t7qqn-eth0" Apr 30 03:29:42.618393 containerd[1987]: 2025-04-30 03:29:42.564 [INFO][5924] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7b07a194832 ContainerID="6dc06a663b87e2222ceba8edf67a3fd6e037f50a396317ccd4ed05edfa5f88b6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-t7qqn" WorkloadEndpoint="ip--172--31--16--5-k8s-coredns--7db6d8ff4d--t7qqn-eth0" Apr 30 03:29:42.618393 containerd[1987]: 2025-04-30 03:29:42.582 [INFO][5924] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6dc06a663b87e2222ceba8edf67a3fd6e037f50a396317ccd4ed05edfa5f88b6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-t7qqn" WorkloadEndpoint="ip--172--31--16--5-k8s-coredns--7db6d8ff4d--t7qqn-eth0" Apr 30 
03:29:42.618393 containerd[1987]: 2025-04-30 03:29:42.584 [INFO][5924] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6dc06a663b87e2222ceba8edf67a3fd6e037f50a396317ccd4ed05edfa5f88b6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-t7qqn" WorkloadEndpoint="ip--172--31--16--5-k8s-coredns--7db6d8ff4d--t7qqn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--5-k8s-coredns--7db6d8ff4d--t7qqn-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"aa805bd1-beeb-4584-9d9d-3007469a5975", ResourceVersion:"1112", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 28, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-5", ContainerID:"6dc06a663b87e2222ceba8edf67a3fd6e037f50a396317ccd4ed05edfa5f88b6", Pod:"coredns-7db6d8ff4d-t7qqn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7b07a194832", MAC:"0a:b3:37:06:a8:e3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:42.619805 containerd[1987]: 2025-04-30 03:29:42.613 [INFO][5924] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6dc06a663b87e2222ceba8edf67a3fd6e037f50a396317ccd4ed05edfa5f88b6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-t7qqn" WorkloadEndpoint="ip--172--31--16--5-k8s-coredns--7db6d8ff4d--t7qqn-eth0" Apr 30 03:29:42.653824 systemd-networkd[1891]: calicef465bc2c1: Link UP Apr 30 03:29:42.655648 systemd-networkd[1891]: calicef465bc2c1: Gained carrier Apr 30 03:29:42.685981 containerd[1987]: time="2025-04-30T03:29:42.685717579Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:42.685981 containerd[1987]: time="2025-04-30T03:29:42.685781202Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:42.685981 containerd[1987]: time="2025-04-30T03:29:42.685798868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:42.687792 containerd[1987]: time="2025-04-30T03:29:42.687662017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:42.702344 containerd[1987]: 2025-04-30 03:29:42.456 [INFO][5949] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--5-k8s-calico--apiserver--6c489dd647--zhfwb-eth0 calico-apiserver-6c489dd647- calico-apiserver 2ca2a0d4-a8bb-4ffb-bf87-3cfe5b949605 1114 0 2025-04-30 03:28:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c489dd647 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-16-5 calico-apiserver-6c489dd647-zhfwb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calicef465bc2c1 [] []}} ContainerID="1ec465cf8f11369ee84989fd6389f50a51f010bdc5ea7d6f1f4f2002b393db58" Namespace="calico-apiserver" Pod="calico-apiserver-6c489dd647-zhfwb" WorkloadEndpoint="ip--172--31--16--5-k8s-calico--apiserver--6c489dd647--zhfwb-" Apr 30 03:29:42.702344 containerd[1987]: 2025-04-30 03:29:42.456 [INFO][5949] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1ec465cf8f11369ee84989fd6389f50a51f010bdc5ea7d6f1f4f2002b393db58" Namespace="calico-apiserver" Pod="calico-apiserver-6c489dd647-zhfwb" WorkloadEndpoint="ip--172--31--16--5-k8s-calico--apiserver--6c489dd647--zhfwb-eth0" Apr 30 03:29:42.702344 containerd[1987]: 2025-04-30 03:29:42.527 [INFO][5976] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1ec465cf8f11369ee84989fd6389f50a51f010bdc5ea7d6f1f4f2002b393db58" HandleID="k8s-pod-network.1ec465cf8f11369ee84989fd6389f50a51f010bdc5ea7d6f1f4f2002b393db58" Workload="ip--172--31--16--5-k8s-calico--apiserver--6c489dd647--zhfwb-eth0" Apr 30 03:29:42.702344 containerd[1987]: 2025-04-30 03:29:42.547 [INFO][5976] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1ec465cf8f11369ee84989fd6389f50a51f010bdc5ea7d6f1f4f2002b393db58" HandleID="k8s-pod-network.1ec465cf8f11369ee84989fd6389f50a51f010bdc5ea7d6f1f4f2002b393db58" Workload="ip--172--31--16--5-k8s-calico--apiserver--6c489dd647--zhfwb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000333af0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-16-5", "pod":"calico-apiserver-6c489dd647-zhfwb", "timestamp":"2025-04-30 03:29:42.527656067 +0000 UTC"}, Hostname:"ip-172-31-16-5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:29:42.702344 containerd[1987]: 2025-04-30 03:29:42.547 [INFO][5976] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:42.702344 containerd[1987]: 2025-04-30 03:29:42.557 [INFO][5976] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:29:42.702344 containerd[1987]: 2025-04-30 03:29:42.557 [INFO][5976] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-5' Apr 30 03:29:42.702344 containerd[1987]: 2025-04-30 03:29:42.562 [INFO][5976] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1ec465cf8f11369ee84989fd6389f50a51f010bdc5ea7d6f1f4f2002b393db58" host="ip-172-31-16-5" Apr 30 03:29:42.702344 containerd[1987]: 2025-04-30 03:29:42.572 [INFO][5976] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-16-5" Apr 30 03:29:42.702344 containerd[1987]: 2025-04-30 03:29:42.583 [INFO][5976] ipam/ipam.go 489: Trying affinity for 192.168.72.128/26 host="ip-172-31-16-5" Apr 30 03:29:42.702344 containerd[1987]: 2025-04-30 03:29:42.587 [INFO][5976] ipam/ipam.go 155: Attempting to load block cidr=192.168.72.128/26 host="ip-172-31-16-5" Apr 30 03:29:42.702344 containerd[1987]: 2025-04-30 03:29:42.592 [INFO][5976] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.72.128/26 host="ip-172-31-16-5" Apr 30 03:29:42.702344 containerd[1987]: 2025-04-30 03:29:42.592 [INFO][5976] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.72.128/26 handle="k8s-pod-network.1ec465cf8f11369ee84989fd6389f50a51f010bdc5ea7d6f1f4f2002b393db58" host="ip-172-31-16-5" Apr 30 03:29:42.702344 containerd[1987]: 2025-04-30 03:29:42.599 [INFO][5976] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1ec465cf8f11369ee84989fd6389f50a51f010bdc5ea7d6f1f4f2002b393db58 Apr 30 03:29:42.702344 containerd[1987]: 2025-04-30 03:29:42.607 [INFO][5976] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.72.128/26 handle="k8s-pod-network.1ec465cf8f11369ee84989fd6389f50a51f010bdc5ea7d6f1f4f2002b393db58" host="ip-172-31-16-5" Apr 30 03:29:42.702344 containerd[1987]: 2025-04-30 03:29:42.624 [INFO][5976] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.72.131/26] block=192.168.72.128/26 handle="k8s-pod-network.1ec465cf8f11369ee84989fd6389f50a51f010bdc5ea7d6f1f4f2002b393db58" host="ip-172-31-16-5" Apr 30 03:29:42.702344 containerd[1987]: 2025-04-30 03:29:42.625 [INFO][5976] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.72.131/26] handle="k8s-pod-network.1ec465cf8f11369ee84989fd6389f50a51f010bdc5ea7d6f1f4f2002b393db58" host="ip-172-31-16-5" Apr 30 03:29:42.702344 containerd[1987]: 2025-04-30 03:29:42.625 [INFO][5976] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 03:29:42.702344 containerd[1987]: 2025-04-30 03:29:42.626 [INFO][5976] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.72.131/26] IPv6=[] ContainerID="1ec465cf8f11369ee84989fd6389f50a51f010bdc5ea7d6f1f4f2002b393db58" HandleID="k8s-pod-network.1ec465cf8f11369ee84989fd6389f50a51f010bdc5ea7d6f1f4f2002b393db58" Workload="ip--172--31--16--5-k8s-calico--apiserver--6c489dd647--zhfwb-eth0" Apr 30 03:29:42.706011 containerd[1987]: 2025-04-30 03:29:42.641 [INFO][5949] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1ec465cf8f11369ee84989fd6389f50a51f010bdc5ea7d6f1f4f2002b393db58" Namespace="calico-apiserver" Pod="calico-apiserver-6c489dd647-zhfwb" WorkloadEndpoint="ip--172--31--16--5-k8s-calico--apiserver--6c489dd647--zhfwb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--5-k8s-calico--apiserver--6c489dd647--zhfwb-eth0", GenerateName:"calico-apiserver-6c489dd647-", Namespace:"calico-apiserver", SelfLink:"", UID:"2ca2a0d4-a8bb-4ffb-bf87-3cfe5b949605", ResourceVersion:"1114", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 28, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c489dd647", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-5", ContainerID:"", Pod:"calico-apiserver-6c489dd647-zhfwb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.72.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicef465bc2c1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:42.706011 containerd[1987]: 2025-04-30 03:29:42.641 [INFO][5949] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.72.131/32] ContainerID="1ec465cf8f11369ee84989fd6389f50a51f010bdc5ea7d6f1f4f2002b393db58" Namespace="calico-apiserver" Pod="calico-apiserver-6c489dd647-zhfwb" WorkloadEndpoint="ip--172--31--16--5-k8s-calico--apiserver--6c489dd647--zhfwb-eth0" Apr 30 03:29:42.706011 containerd[1987]: 2025-04-30 03:29:42.641 [INFO][5949] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicef465bc2c1 ContainerID="1ec465cf8f11369ee84989fd6389f50a51f010bdc5ea7d6f1f4f2002b393db58" Namespace="calico-apiserver" Pod="calico-apiserver-6c489dd647-zhfwb" WorkloadEndpoint="ip--172--31--16--5-k8s-calico--apiserver--6c489dd647--zhfwb-eth0" Apr 30 03:29:42.706011 containerd[1987]: 2025-04-30 03:29:42.654 [INFO][5949] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1ec465cf8f11369ee84989fd6389f50a51f010bdc5ea7d6f1f4f2002b393db58" Namespace="calico-apiserver" Pod="calico-apiserver-6c489dd647-zhfwb" WorkloadEndpoint="ip--172--31--16--5-k8s-calico--apiserver--6c489dd647--zhfwb-eth0" Apr 30 03:29:42.706011 containerd[1987]: 2025-04-30 03:29:42.664 [INFO][5949] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="1ec465cf8f11369ee84989fd6389f50a51f010bdc5ea7d6f1f4f2002b393db58" Namespace="calico-apiserver" Pod="calico-apiserver-6c489dd647-zhfwb" WorkloadEndpoint="ip--172--31--16--5-k8s-calico--apiserver--6c489dd647--zhfwb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--5-k8s-calico--apiserver--6c489dd647--zhfwb-eth0", GenerateName:"calico-apiserver-6c489dd647-", Namespace:"calico-apiserver", SelfLink:"", UID:"2ca2a0d4-a8bb-4ffb-bf87-3cfe5b949605", ResourceVersion:"1114", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 28, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c489dd647", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-5", ContainerID:"1ec465cf8f11369ee84989fd6389f50a51f010bdc5ea7d6f1f4f2002b393db58", Pod:"calico-apiserver-6c489dd647-zhfwb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.72.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicef465bc2c1", MAC:"22:03:aa:35:1a:f7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:42.706011 containerd[1987]: 2025-04-30 03:29:42.693 [INFO][5949] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1ec465cf8f11369ee84989fd6389f50a51f010bdc5ea7d6f1f4f2002b393db58" Namespace="calico-apiserver" Pod="calico-apiserver-6c489dd647-zhfwb" WorkloadEndpoint="ip--172--31--16--5-k8s-calico--apiserver--6c489dd647--zhfwb-eth0" Apr 30 03:29:42.709424 systemd[1]: run-netns-cni\x2da73a43d8\x2d1475\x2d2639\x2d5d9a\x2d74302bae3f03.mount: Deactivated successfully. Apr 30 03:29:42.746833 systemd-networkd[1891]: calica5a9f38e63: Link UP Apr 30 03:29:42.752460 systemd-networkd[1891]: calica5a9f38e63: Gained carrier Apr 30 03:29:42.754662 systemd[1]: run-containerd-runc-k8s.io-6dc06a663b87e2222ceba8edf67a3fd6e037f50a396317ccd4ed05edfa5f88b6-runc.xfBy9d.mount: Deactivated successfully. Apr 30 03:29:42.776271 systemd[1]: Started cri-containerd-6dc06a663b87e2222ceba8edf67a3fd6e037f50a396317ccd4ed05edfa5f88b6.scope - libcontainer container 6dc06a663b87e2222ceba8edf67a3fd6e037f50a396317ccd4ed05edfa5f88b6. 
Apr 30 03:29:42.812530 containerd[1987]: 2025-04-30 03:29:42.419 [INFO][5935] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--5-k8s-csi--node--driver--jbvr9-eth0 csi-node-driver- calico-system c125f289-79ba-4045-ac98-3376fc26a663 1113 0 2025-04-30 03:28:34 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-16-5 csi-node-driver-jbvr9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calica5a9f38e63 [] []}} ContainerID="3ef81b8bb9850f1ebbe6ef0b3b02bbfb2744db5816b3b4647b907de9de8b3747" Namespace="calico-system" Pod="csi-node-driver-jbvr9" WorkloadEndpoint="ip--172--31--16--5-k8s-csi--node--driver--jbvr9-" Apr 30 03:29:42.812530 containerd[1987]: 2025-04-30 03:29:42.420 [INFO][5935] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3ef81b8bb9850f1ebbe6ef0b3b02bbfb2744db5816b3b4647b907de9de8b3747" Namespace="calico-system" Pod="csi-node-driver-jbvr9" WorkloadEndpoint="ip--172--31--16--5-k8s-csi--node--driver--jbvr9-eth0" Apr 30 03:29:42.812530 containerd[1987]: 2025-04-30 03:29:42.544 [INFO][5970] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3ef81b8bb9850f1ebbe6ef0b3b02bbfb2744db5816b3b4647b907de9de8b3747" HandleID="k8s-pod-network.3ef81b8bb9850f1ebbe6ef0b3b02bbfb2744db5816b3b4647b907de9de8b3747" Workload="ip--172--31--16--5-k8s-csi--node--driver--jbvr9-eth0" Apr 30 03:29:42.812530 containerd[1987]: 2025-04-30 03:29:42.564 [INFO][5970] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3ef81b8bb9850f1ebbe6ef0b3b02bbfb2744db5816b3b4647b907de9de8b3747" HandleID="k8s-pod-network.3ef81b8bb9850f1ebbe6ef0b3b02bbfb2744db5816b3b4647b907de9de8b3747" Workload="ip--172--31--16--5-k8s-csi--node--driver--jbvr9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000315b40), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-5", "pod":"csi-node-driver-jbvr9", "timestamp":"2025-04-30 03:29:42.544180197 +0000 UTC"}, Hostname:"ip-172-31-16-5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:29:42.812530 containerd[1987]: 2025-04-30 03:29:42.564 [INFO][5970] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:42.812530 containerd[1987]: 2025-04-30 03:29:42.627 [INFO][5970] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:29:42.812530 containerd[1987]: 2025-04-30 03:29:42.628 [INFO][5970] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-5' Apr 30 03:29:42.812530 containerd[1987]: 2025-04-30 03:29:42.632 [INFO][5970] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3ef81b8bb9850f1ebbe6ef0b3b02bbfb2744db5816b3b4647b907de9de8b3747" host="ip-172-31-16-5" Apr 30 03:29:42.812530 containerd[1987]: 2025-04-30 03:29:42.651 [INFO][5970] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-16-5" Apr 30 03:29:42.812530 containerd[1987]: 2025-04-30 03:29:42.669 [INFO][5970] ipam/ipam.go 489: Trying affinity for 192.168.72.128/26 host="ip-172-31-16-5" Apr 30 03:29:42.812530 containerd[1987]: 2025-04-30 03:29:42.676 [INFO][5970] ipam/ipam.go 155: Attempting to load block cidr=192.168.72.128/26 host="ip-172-31-16-5" Apr 30 03:29:42.812530 containerd[1987]: 2025-04-30 03:29:42.684 [INFO][5970] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.72.128/26 host="ip-172-31-16-5" Apr 30 03:29:42.812530 containerd[1987]: 2025-04-30 03:29:42.684 [INFO][5970] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.72.128/26 handle="k8s-pod-network.3ef81b8bb9850f1ebbe6ef0b3b02bbfb2744db5816b3b4647b907de9de8b3747" host="ip-172-31-16-5" Apr 30 03:29:42.812530 containerd[1987]: 2025-04-30 03:29:42.695 [INFO][5970] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3ef81b8bb9850f1ebbe6ef0b3b02bbfb2744db5816b3b4647b907de9de8b3747 Apr 30 03:29:42.812530 containerd[1987]: 2025-04-30 03:29:42.706 [INFO][5970] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.72.128/26 handle="k8s-pod-network.3ef81b8bb9850f1ebbe6ef0b3b02bbfb2744db5816b3b4647b907de9de8b3747" host="ip-172-31-16-5" Apr 30 03:29:42.812530 containerd[1987]: 2025-04-30 03:29:42.729 [INFO][5970] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.72.132/26] block=192.168.72.128/26 handle="k8s-pod-network.3ef81b8bb9850f1ebbe6ef0b3b02bbfb2744db5816b3b4647b907de9de8b3747" host="ip-172-31-16-5" Apr 30 03:29:42.812530 containerd[1987]: 2025-04-30 03:29:42.729 [INFO][5970] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.72.132/26] handle="k8s-pod-network.3ef81b8bb9850f1ebbe6ef0b3b02bbfb2744db5816b3b4647b907de9de8b3747" host="ip-172-31-16-5" Apr 30 03:29:42.812530 containerd[1987]: 2025-04-30 03:29:42.729 [INFO][5970] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 03:29:42.812530 containerd[1987]: 2025-04-30 03:29:42.729 [INFO][5970] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.72.132/26] IPv6=[] ContainerID="3ef81b8bb9850f1ebbe6ef0b3b02bbfb2744db5816b3b4647b907de9de8b3747" HandleID="k8s-pod-network.3ef81b8bb9850f1ebbe6ef0b3b02bbfb2744db5816b3b4647b907de9de8b3747" Workload="ip--172--31--16--5-k8s-csi--node--driver--jbvr9-eth0" Apr 30 03:29:42.814241 containerd[1987]: 2025-04-30 03:29:42.738 [INFO][5935] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3ef81b8bb9850f1ebbe6ef0b3b02bbfb2744db5816b3b4647b907de9de8b3747" Namespace="calico-system" Pod="csi-node-driver-jbvr9" WorkloadEndpoint="ip--172--31--16--5-k8s-csi--node--driver--jbvr9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--5-k8s-csi--node--driver--jbvr9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c125f289-79ba-4045-ac98-3376fc26a663", ResourceVersion:"1113", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 28, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-5", ContainerID:"", Pod:"csi-node-driver-jbvr9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.72.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calica5a9f38e63", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:42.814241 containerd[1987]: 2025-04-30 03:29:42.738 [INFO][5935] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.72.132/32] ContainerID="3ef81b8bb9850f1ebbe6ef0b3b02bbfb2744db5816b3b4647b907de9de8b3747" Namespace="calico-system" Pod="csi-node-driver-jbvr9" WorkloadEndpoint="ip--172--31--16--5-k8s-csi--node--driver--jbvr9-eth0" Apr 30 03:29:42.814241 containerd[1987]: 2025-04-30 03:29:42.738 [INFO][5935] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calica5a9f38e63 ContainerID="3ef81b8bb9850f1ebbe6ef0b3b02bbfb2744db5816b3b4647b907de9de8b3747" Namespace="calico-system" Pod="csi-node-driver-jbvr9" WorkloadEndpoint="ip--172--31--16--5-k8s-csi--node--driver--jbvr9-eth0" Apr 30 03:29:42.814241 containerd[1987]: 2025-04-30 03:29:42.764 [INFO][5935] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3ef81b8bb9850f1ebbe6ef0b3b02bbfb2744db5816b3b4647b907de9de8b3747" Namespace="calico-system" Pod="csi-node-driver-jbvr9" WorkloadEndpoint="ip--172--31--16--5-k8s-csi--node--driver--jbvr9-eth0" Apr 30 03:29:42.814241 containerd[1987]: 2025-04-30 03:29:42.767 [INFO][5935] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3ef81b8bb9850f1ebbe6ef0b3b02bbfb2744db5816b3b4647b907de9de8b3747" Namespace="calico-system" Pod="csi-node-driver-jbvr9" 
WorkloadEndpoint="ip--172--31--16--5-k8s-csi--node--driver--jbvr9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--5-k8s-csi--node--driver--jbvr9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c125f289-79ba-4045-ac98-3376fc26a663", ResourceVersion:"1113", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 28, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-5", ContainerID:"3ef81b8bb9850f1ebbe6ef0b3b02bbfb2744db5816b3b4647b907de9de8b3747", Pod:"csi-node-driver-jbvr9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.72.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calica5a9f38e63", MAC:"4a:38:b4:5e:0f:85", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:42.814241 containerd[1987]: 2025-04-30 03:29:42.801 [INFO][5935] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3ef81b8bb9850f1ebbe6ef0b3b02bbfb2744db5816b3b4647b907de9de8b3747" Namespace="calico-system" Pod="csi-node-driver-jbvr9" WorkloadEndpoint="ip--172--31--16--5-k8s-csi--node--driver--jbvr9-eth0" Apr 30 03:29:42.832059 containerd[1987]: time="2025-04-30T03:29:42.831336514Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:42.832059 containerd[1987]: time="2025-04-30T03:29:42.831416801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:42.832059 containerd[1987]: time="2025-04-30T03:29:42.831435728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:42.832059 containerd[1987]: time="2025-04-30T03:29:42.831546662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:42.903177 systemd[1]: Started cri-containerd-1ec465cf8f11369ee84989fd6389f50a51f010bdc5ea7d6f1f4f2002b393db58.scope - libcontainer container 1ec465cf8f11369ee84989fd6389f50a51f010bdc5ea7d6f1f4f2002b393db58. Apr 30 03:29:42.906661 containerd[1987]: time="2025-04-30T03:29:42.906268770Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:42.906661 containerd[1987]: time="2025-04-30T03:29:42.906408109Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:42.906661 containerd[1987]: time="2025-04-30T03:29:42.906462141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:42.906903 containerd[1987]: time="2025-04-30T03:29:42.906769528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:42.916513 systemd-networkd[1891]: calid8bd63426f5: Gained IPv6LL Apr 30 03:29:42.916856 systemd-networkd[1891]: vxlan.calico: Gained IPv6LL Apr 30 03:29:42.941274 systemd[1]: Started cri-containerd-3ef81b8bb9850f1ebbe6ef0b3b02bbfb2744db5816b3b4647b907de9de8b3747.scope - libcontainer container 3ef81b8bb9850f1ebbe6ef0b3b02bbfb2744db5816b3b4647b907de9de8b3747. Apr 30 03:29:42.953626 containerd[1987]: time="2025-04-30T03:29:42.953568118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-t7qqn,Uid:aa805bd1-beeb-4584-9d9d-3007469a5975,Namespace:kube-system,Attempt:1,} returns sandbox id \"6dc06a663b87e2222ceba8edf67a3fd6e037f50a396317ccd4ed05edfa5f88b6\"" Apr 30 03:29:42.968025 containerd[1987]: time="2025-04-30T03:29:42.967835136Z" level=info msg="CreateContainer within sandbox \"6dc06a663b87e2222ceba8edf67a3fd6e037f50a396317ccd4ed05edfa5f88b6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 03:29:43.013325 containerd[1987]: time="2025-04-30T03:29:43.013277857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jbvr9,Uid:c125f289-79ba-4045-ac98-3376fc26a663,Namespace:calico-system,Attempt:1,} returns sandbox id \"3ef81b8bb9850f1ebbe6ef0b3b02bbfb2744db5816b3b4647b907de9de8b3747\"" Apr 30 03:29:43.020966 containerd[1987]: time="2025-04-30T03:29:43.020906797Z" level=info msg="CreateContainer within sandbox \"6dc06a663b87e2222ceba8edf67a3fd6e037f50a396317ccd4ed05edfa5f88b6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e78de35b51d02c482dcd7cd8854b9d2404524ac8bb4c674615fbdf4d8ffdc201\"" Apr 30 03:29:43.021914 containerd[1987]: time="2025-04-30T03:29:43.021572912Z" level=info msg="StartContainer for \"e78de35b51d02c482dcd7cd8854b9d2404524ac8bb4c674615fbdf4d8ffdc201\"" Apr 30 03:29:43.034140 containerd[1987]: time="2025-04-30T03:29:43.034098701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c489dd647-zhfwb,Uid:2ca2a0d4-a8bb-4ffb-bf87-3cfe5b949605,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"1ec465cf8f11369ee84989fd6389f50a51f010bdc5ea7d6f1f4f2002b393db58\"" Apr 30 03:29:43.047179 containerd[1987]: time="2025-04-30T03:29:43.046946194Z" level=info msg="StopPodSandbox for \"b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a\"" Apr 30 03:29:43.068314 systemd[1]: Started cri-containerd-e78de35b51d02c482dcd7cd8854b9d2404524ac8bb4c674615fbdf4d8ffdc201.scope - libcontainer container e78de35b51d02c482dcd7cd8854b9d2404524ac8bb4c674615fbdf4d8ffdc201. 
Apr 30 03:29:43.151363 containerd[1987]: time="2025-04-30T03:29:43.150443619Z" level=info msg="StartContainer for \"e78de35b51d02c482dcd7cd8854b9d2404524ac8bb4c674615fbdf4d8ffdc201\" returns successfully" Apr 30 03:29:43.251977 containerd[1987]: 2025-04-30 03:29:43.169 [INFO][6175] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a" Apr 30 03:29:43.251977 containerd[1987]: 2025-04-30 03:29:43.170 [INFO][6175] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a" iface="eth0" netns="/var/run/netns/cni-442b7c75-273b-faa7-0684-1dfcdedfb77f" Apr 30 03:29:43.251977 containerd[1987]: 2025-04-30 03:29:43.170 [INFO][6175] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a" iface="eth0" netns="/var/run/netns/cni-442b7c75-273b-faa7-0684-1dfcdedfb77f" Apr 30 03:29:43.251977 containerd[1987]: 2025-04-30 03:29:43.171 [INFO][6175] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a" iface="eth0" netns="/var/run/netns/cni-442b7c75-273b-faa7-0684-1dfcdedfb77f" Apr 30 03:29:43.251977 containerd[1987]: 2025-04-30 03:29:43.171 [INFO][6175] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a" Apr 30 03:29:43.251977 containerd[1987]: 2025-04-30 03:29:43.171 [INFO][6175] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a" Apr 30 03:29:43.251977 containerd[1987]: 2025-04-30 03:29:43.219 [INFO][6195] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a" HandleID="k8s-pod-network.b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a" Workload="ip--172--31--16--5-k8s-calico--kube--controllers--589dd46bc6--rd5jw-eth0" Apr 30 03:29:43.251977 containerd[1987]: 2025-04-30 03:29:43.220 [INFO][6195] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:43.251977 containerd[1987]: 2025-04-30 03:29:43.220 [INFO][6195] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:43.251977 containerd[1987]: 2025-04-30 03:29:43.239 [WARNING][6195] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a" HandleID="k8s-pod-network.b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a" Workload="ip--172--31--16--5-k8s-calico--kube--controllers--589dd46bc6--rd5jw-eth0" Apr 30 03:29:43.251977 containerd[1987]: 2025-04-30 03:29:43.239 [INFO][6195] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a" HandleID="k8s-pod-network.b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a" Workload="ip--172--31--16--5-k8s-calico--kube--controllers--589dd46bc6--rd5jw-eth0" Apr 30 03:29:43.251977 containerd[1987]: 2025-04-30 03:29:43.244 [INFO][6195] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:43.251977 containerd[1987]: 2025-04-30 03:29:43.247 [INFO][6175] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a" Apr 30 03:29:43.251977 containerd[1987]: time="2025-04-30T03:29:43.249781206Z" level=info msg="TearDown network for sandbox \"b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a\" successfully" Apr 30 03:29:43.251977 containerd[1987]: time="2025-04-30T03:29:43.249811660Z" level=info msg="StopPodSandbox for \"b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a\" returns successfully" Apr 30 03:29:43.251977 containerd[1987]: time="2025-04-30T03:29:43.250753748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-589dd46bc6-rd5jw,Uid:6a4e6e55-d984-4974-8642-752d6712e827,Namespace:calico-system,Attempt:1,}" Apr 30 03:29:43.522062 systemd-networkd[1891]: cali198e8466a1c: Link UP Apr 30 03:29:43.522478 systemd-networkd[1891]: cali198e8466a1c: Gained carrier Apr 30 03:29:43.544602 kubelet[3229]: I0430 03:29:43.543660 3229 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-t7qqn" podStartSLOduration=78.543634909 podStartE2EDuration="1m18.543634909s" podCreationTimestamp="2025-04-30 03:28:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:29:43.541534695 +0000 UTC m=+93.623963407" watchObservedRunningTime="2025-04-30 03:29:43.543634909 +0000 UTC m=+93.626063614" Apr 30 03:29:43.555497 containerd[1987]: 2025-04-30 03:29:43.353 [INFO][6206] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--5-k8s-calico--kube--controllers--589dd46bc6--rd5jw-eth0 calico-kube-controllers-589dd46bc6- calico-system 6a4e6e55-d984-4974-8642-752d6712e827 1130 0 2025-04-30 03:28:34 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:589dd46bc6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-16-5 calico-kube-controllers-589dd46bc6-rd5jw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali198e8466a1c [] []}} ContainerID="3a426cec9e8a3e0aaa1d3073b8fba04bae05bb456937c85d29f6fd0ade237f2b" Namespace="calico-system" Pod="calico-kube-controllers-589dd46bc6-rd5jw" WorkloadEndpoint="ip--172--31--16--5-k8s-calico--kube--controllers--589dd46bc6--rd5jw-" Apr 30 03:29:43.555497 containerd[1987]: 2025-04-30 03:29:43.353 [INFO][6206] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3a426cec9e8a3e0aaa1d3073b8fba04bae05bb456937c85d29f6fd0ade237f2b" Namespace="calico-system" Pod="calico-kube-controllers-589dd46bc6-rd5jw" WorkloadEndpoint="ip--172--31--16--5-k8s-calico--kube--controllers--589dd46bc6--rd5jw-eth0" Apr 30 03:29:43.555497 containerd[1987]: 2025-04-30 03:29:43.412 [INFO][6218] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3a426cec9e8a3e0aaa1d3073b8fba04bae05bb456937c85d29f6fd0ade237f2b" HandleID="k8s-pod-network.3a426cec9e8a3e0aaa1d3073b8fba04bae05bb456937c85d29f6fd0ade237f2b" Workload="ip--172--31--16--5-k8s-calico--kube--controllers--589dd46bc6--rd5jw-eth0" Apr 30 03:29:43.555497 containerd[1987]: 2025-04-30 03:29:43.431 [INFO][6218] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3a426cec9e8a3e0aaa1d3073b8fba04bae05bb456937c85d29f6fd0ade237f2b" 
HandleID="k8s-pod-network.3a426cec9e8a3e0aaa1d3073b8fba04bae05bb456937c85d29f6fd0ade237f2b" Workload="ip--172--31--16--5-k8s-calico--kube--controllers--589dd46bc6--rd5jw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000291350), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-5", "pod":"calico-kube-controllers-589dd46bc6-rd5jw", "timestamp":"2025-04-30 03:29:43.412341421 +0000 UTC"}, Hostname:"ip-172-31-16-5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:29:43.555497 containerd[1987]: 2025-04-30 03:29:43.431 [INFO][6218] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:43.555497 containerd[1987]: 2025-04-30 03:29:43.431 [INFO][6218] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:43.555497 containerd[1987]: 2025-04-30 03:29:43.431 [INFO][6218] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-5' Apr 30 03:29:43.555497 containerd[1987]: 2025-04-30 03:29:43.434 [INFO][6218] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3a426cec9e8a3e0aaa1d3073b8fba04bae05bb456937c85d29f6fd0ade237f2b" host="ip-172-31-16-5" Apr 30 03:29:43.555497 containerd[1987]: 2025-04-30 03:29:43.448 [INFO][6218] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-16-5" Apr 30 03:29:43.555497 containerd[1987]: 2025-04-30 03:29:43.458 [INFO][6218] ipam/ipam.go 489: Trying affinity for 192.168.72.128/26 host="ip-172-31-16-5" Apr 30 03:29:43.555497 containerd[1987]: 2025-04-30 03:29:43.461 [INFO][6218] ipam/ipam.go 155: Attempting to load block cidr=192.168.72.128/26 host="ip-172-31-16-5" Apr 30 03:29:43.555497 containerd[1987]: 2025-04-30 03:29:43.468 [INFO][6218] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.72.128/26 host="ip-172-31-16-5" Apr 30 03:29:43.555497 containerd[1987]: 2025-04-30 03:29:43.468 [INFO][6218] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.72.128/26 handle="k8s-pod-network.3a426cec9e8a3e0aaa1d3073b8fba04bae05bb456937c85d29f6fd0ade237f2b" host="ip-172-31-16-5" Apr 30 03:29:43.555497 containerd[1987]: 2025-04-30 03:29:43.472 [INFO][6218] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3a426cec9e8a3e0aaa1d3073b8fba04bae05bb456937c85d29f6fd0ade237f2b Apr 30 03:29:43.555497 containerd[1987]: 2025-04-30 03:29:43.481 [INFO][6218] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.72.128/26 handle="k8s-pod-network.3a426cec9e8a3e0aaa1d3073b8fba04bae05bb456937c85d29f6fd0ade237f2b" host="ip-172-31-16-5" Apr 30 03:29:43.555497 containerd[1987]: 2025-04-30 03:29:43.501 [INFO][6218] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.72.133/26] block=192.168.72.128/26 handle="k8s-pod-network.3a426cec9e8a3e0aaa1d3073b8fba04bae05bb456937c85d29f6fd0ade237f2b" host="ip-172-31-16-5" Apr 30 03:29:43.555497 containerd[1987]: 2025-04-30 03:29:43.501 [INFO][6218] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.72.133/26] handle="k8s-pod-network.3a426cec9e8a3e0aaa1d3073b8fba04bae05bb456937c85d29f6fd0ade237f2b" host="ip-172-31-16-5" Apr 30 03:29:43.555497 containerd[1987]: 2025-04-30 03:29:43.501 [INFO][6218] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 03:29:43.555497 containerd[1987]: 2025-04-30 03:29:43.501 [INFO][6218] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.72.133/26] IPv6=[] ContainerID="3a426cec9e8a3e0aaa1d3073b8fba04bae05bb456937c85d29f6fd0ade237f2b" HandleID="k8s-pod-network.3a426cec9e8a3e0aaa1d3073b8fba04bae05bb456937c85d29f6fd0ade237f2b" Workload="ip--172--31--16--5-k8s-calico--kube--controllers--589dd46bc6--rd5jw-eth0" Apr 30 03:29:43.559428 containerd[1987]: 2025-04-30 03:29:43.508 [INFO][6206] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3a426cec9e8a3e0aaa1d3073b8fba04bae05bb456937c85d29f6fd0ade237f2b" Namespace="calico-system" Pod="calico-kube-controllers-589dd46bc6-rd5jw" WorkloadEndpoint="ip--172--31--16--5-k8s-calico--kube--controllers--589dd46bc6--rd5jw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--5-k8s-calico--kube--controllers--589dd46bc6--rd5jw-eth0", GenerateName:"calico-kube-controllers-589dd46bc6-", Namespace:"calico-system", SelfLink:"", UID:"6a4e6e55-d984-4974-8642-752d6712e827", ResourceVersion:"1130", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 28, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"589dd46bc6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-5", ContainerID:"", Pod:"calico-kube-controllers-589dd46bc6-rd5jw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.72.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali198e8466a1c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:43.559428 containerd[1987]: 2025-04-30 03:29:43.509 [INFO][6206] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.72.133/32] ContainerID="3a426cec9e8a3e0aaa1d3073b8fba04bae05bb456937c85d29f6fd0ade237f2b" Namespace="calico-system" Pod="calico-kube-controllers-589dd46bc6-rd5jw" WorkloadEndpoint="ip--172--31--16--5-k8s-calico--kube--controllers--589dd46bc6--rd5jw-eth0" Apr 30 03:29:43.559428 containerd[1987]: 2025-04-30 03:29:43.509 [INFO][6206] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali198e8466a1c ContainerID="3a426cec9e8a3e0aaa1d3073b8fba04bae05bb456937c85d29f6fd0ade237f2b" Namespace="calico-system" Pod="calico-kube-controllers-589dd46bc6-rd5jw" WorkloadEndpoint="ip--172--31--16--5-k8s-calico--kube--controllers--589dd46bc6--rd5jw-eth0" Apr 30 03:29:43.559428 containerd[1987]: 2025-04-30 03:29:43.511 [INFO][6206] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3a426cec9e8a3e0aaa1d3073b8fba04bae05bb456937c85d29f6fd0ade237f2b" Namespace="calico-system" Pod="calico-kube-controllers-589dd46bc6-rd5jw" WorkloadEndpoint="ip--172--31--16--5-k8s-calico--kube--controllers--589dd46bc6--rd5jw-eth0" Apr 30 03:29:43.559428 containerd[1987]: 2025-04-30 03:29:43.511 [INFO][6206] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3a426cec9e8a3e0aaa1d3073b8fba04bae05bb456937c85d29f6fd0ade237f2b" Namespace="calico-system" Pod="calico-kube-controllers-589dd46bc6-rd5jw" WorkloadEndpoint="ip--172--31--16--5-k8s-calico--kube--controllers--589dd46bc6--rd5jw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--5-k8s-calico--kube--controllers--589dd46bc6--rd5jw-eth0", GenerateName:"calico-kube-controllers-589dd46bc6-", Namespace:"calico-system", SelfLink:"", UID:"6a4e6e55-d984-4974-8642-752d6712e827", ResourceVersion:"1130", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 28, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"589dd46bc6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-5", ContainerID:"3a426cec9e8a3e0aaa1d3073b8fba04bae05bb456937c85d29f6fd0ade237f2b", Pod:"calico-kube-controllers-589dd46bc6-rd5jw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.72.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali198e8466a1c", MAC:"0e:b9:54:51:39:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:43.559428 containerd[1987]: 2025-04-30 03:29:43.546 [INFO][6206] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3a426cec9e8a3e0aaa1d3073b8fba04bae05bb456937c85d29f6fd0ade237f2b" Namespace="calico-system" Pod="calico-kube-controllers-589dd46bc6-rd5jw" WorkloadEndpoint="ip--172--31--16--5-k8s-calico--kube--controllers--589dd46bc6--rd5jw-eth0" Apr 30 03:29:43.628070 containerd[1987]: time="2025-04-30T03:29:43.626375340Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:43.628070 containerd[1987]: time="2025-04-30T03:29:43.626452965Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:43.628070 containerd[1987]: time="2025-04-30T03:29:43.626504451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:43.628070 containerd[1987]: time="2025-04-30T03:29:43.627865207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:43.667043 systemd[1]: Started cri-containerd-3a426cec9e8a3e0aaa1d3073b8fba04bae05bb456937c85d29f6fd0ade237f2b.scope - libcontainer container 3a426cec9e8a3e0aaa1d3073b8fba04bae05bb456937c85d29f6fd0ade237f2b. Apr 30 03:29:43.709453 systemd[1]: run-netns-cni\x2d442b7c75\x2d273b\x2dfaa7\x2d0684\x2d1dfcdedfb77f.mount: Deactivated successfully. 
Apr 30 03:29:43.811891 containerd[1987]: time="2025-04-30T03:29:43.811639750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-589dd46bc6-rd5jw,Uid:6a4e6e55-d984-4974-8642-752d6712e827,Namespace:calico-system,Attempt:1,} returns sandbox id \"3a426cec9e8a3e0aaa1d3073b8fba04bae05bb456937c85d29f6fd0ade237f2b\"" Apr 30 03:29:44.004240 systemd-networkd[1891]: cali7b07a194832: Gained IPv6LL Apr 30 03:29:44.038603 systemd[1]: Started sshd@19-172.31.16.5:22-147.75.109.163:37924.service - OpenSSH per-connection server daemon (147.75.109.163:37924). Apr 30 03:29:44.196729 systemd-networkd[1891]: calica5a9f38e63: Gained IPv6LL Apr 30 03:29:44.324392 systemd-networkd[1891]: calicef465bc2c1: Gained IPv6LL Apr 30 03:29:44.357078 sshd[6290]: Accepted publickey for core from 147.75.109.163 port 37924 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:29:44.361378 sshd[6290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:29:44.376386 systemd-logind[1956]: New session 20 of user core. Apr 30 03:29:44.384490 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 30 03:29:45.028981 systemd-networkd[1891]: cali198e8466a1c: Gained IPv6LL Apr 30 03:29:45.346271 containerd[1987]: time="2025-04-30T03:29:45.345944171Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" Apr 30 03:29:45.362930 containerd[1987]: time="2025-04-30T03:29:45.362718809Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 3.522927699s" Apr 30 03:29:45.362930 containerd[1987]: time="2025-04-30T03:29:45.362781789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" Apr 30 03:29:45.366049 containerd[1987]: time="2025-04-30T03:29:45.365841885Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" Apr 30 03:29:45.379337 containerd[1987]: time="2025-04-30T03:29:45.378612171Z" level=info msg="CreateContainer within sandbox \"00eb9537b98d1ee011a7789b62ecae11f57793b737d436db758b88d79e6e396c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 03:29:45.396138 containerd[1987]: time="2025-04-30T03:29:45.396082925Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:45.434643 containerd[1987]: time="2025-04-30T03:29:45.432354147Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:45.434643 containerd[1987]: time="2025-04-30T03:29:45.433380881Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:45.438779 containerd[1987]: time="2025-04-30T03:29:45.436758410Z" level=info msg="CreateContainer within sandbox \"00eb9537b98d1ee011a7789b62ecae11f57793b737d436db758b88d79e6e396c\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f3fabe99c50f98868b1ada28dd2106903b089b32225e33500de260572b79c65f\"" Apr 30 03:29:45.438779 containerd[1987]: time="2025-04-30T03:29:45.437794409Z" level=info msg="StartContainer for \"f3fabe99c50f98868b1ada28dd2106903b089b32225e33500de260572b79c65f\"" Apr 30 03:29:45.514716 sshd[6290]: pam_unix(sshd:session): session closed for user core Apr 30 03:29:45.522189 systemd-logind[1956]: Session 20 logged out. Waiting for processes to exit. Apr 30 03:29:45.522710 systemd[1]: sshd@19-172.31.16.5:22-147.75.109.163:37924.service: Deactivated successfully. Apr 30 03:29:45.531798 systemd[1]: session-20.scope: Deactivated successfully. Apr 30 03:29:45.536100 systemd-logind[1956]: Removed session 20. Apr 30 03:29:45.539234 systemd[1]: Started cri-containerd-f3fabe99c50f98868b1ada28dd2106903b089b32225e33500de260572b79c65f.scope - libcontainer container f3fabe99c50f98868b1ada28dd2106903b089b32225e33500de260572b79c65f. Apr 30 03:29:45.600072 containerd[1987]: time="2025-04-30T03:29:45.599770340Z" level=info msg="StartContainer for \"f3fabe99c50f98868b1ada28dd2106903b089b32225e33500de260572b79c65f\" returns successfully" Apr 30 03:29:46.784063 kubelet[3229]: I0430 03:29:46.783975 3229 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6c489dd647-9ccl9" podStartSLOduration=70.243036324 podStartE2EDuration="1m13.783944452s" podCreationTimestamp="2025-04-30 03:28:33 +0000 UTC" firstStartedPulling="2025-04-30 03:29:41.824671261 +0000 UTC m=+91.907099961" lastFinishedPulling="2025-04-30 03:29:45.365579384 +0000 UTC m=+95.448008089" observedRunningTime="2025-04-30 03:29:46.618852543 +0000 UTC m=+96.701281255" watchObservedRunningTime="2025-04-30 03:29:46.783944452 +0000 UTC m=+96.866373163" Apr 30 03:29:46.885415 containerd[1987]: time="2025-04-30T03:29:46.885344776Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:46.886823 containerd[1987]: time="2025-04-30T03:29:46.886774960Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" Apr 30 03:29:46.888490 containerd[1987]: time="2025-04-30T03:29:46.888421158Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:46.890896 containerd[1987]: time="2025-04-30T03:29:46.890833165Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:46.891638 containerd[1987]: time="2025-04-30T03:29:46.891603898Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 1.525724453s" Apr 30 03:29:46.891638 containerd[1987]: time="2025-04-30T03:29:46.891638228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" Apr 30 03:29:46.893126 containerd[1987]: time="2025-04-30T03:29:46.892846838Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 03:29:46.896157 containerd[1987]: time="2025-04-30T03:29:46.896106848Z" level=info msg="CreateContainer within sandbox \"3ef81b8bb9850f1ebbe6ef0b3b02bbfb2744db5816b3b4647b907de9de8b3747\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 30 03:29:46.920837 containerd[1987]: time="2025-04-30T03:29:46.920791359Z" level=info msg="CreateContainer within sandbox \"3ef81b8bb9850f1ebbe6ef0b3b02bbfb2744db5816b3b4647b907de9de8b3747\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"76fb74ec967111a0e752e57c851a445a8f2633a33134addfcf10abf2c94ae08d\"" Apr 30 03:29:46.922792 containerd[1987]: time="2025-04-30T03:29:46.921634002Z" level=info msg="StartContainer for \"76fb74ec967111a0e752e57c851a445a8f2633a33134addfcf10abf2c94ae08d\"" Apr 30 03:29:46.972227 systemd[1]: Started cri-containerd-76fb74ec967111a0e752e57c851a445a8f2633a33134addfcf10abf2c94ae08d.scope - libcontainer container 76fb74ec967111a0e752e57c851a445a8f2633a33134addfcf10abf2c94ae08d. Apr 30 03:29:47.014453 containerd[1987]: time="2025-04-30T03:29:47.014409475Z" level=info msg="StartContainer for \"76fb74ec967111a0e752e57c851a445a8f2633a33134addfcf10abf2c94ae08d\" returns successfully" Apr 30 03:29:47.217817 containerd[1987]: time="2025-04-30T03:29:47.217680324Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:47.220827 containerd[1987]: time="2025-04-30T03:29:47.219589399Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" Apr 30 03:29:47.221913 containerd[1987]: time="2025-04-30T03:29:47.221868460Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 328.996583ms" Apr 30 03:29:47.221913 containerd[1987]: time="2025-04-30T03:29:47.221908356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" Apr 30 03:29:47.223361 containerd[1987]: time="2025-04-30T03:29:47.223079295Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" Apr 30 03:29:47.227438 containerd[1987]: time="2025-04-30T03:29:47.226968385Z" level=info msg="CreateContainer within sandbox \"1ec465cf8f11369ee84989fd6389f50a51f010bdc5ea7d6f1f4f2002b393db58\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 03:29:47.243537 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount675558896.mount: Deactivated successfully. 
Apr 30 03:29:47.251498 containerd[1987]: time="2025-04-30T03:29:47.251454883Z" level=info msg="CreateContainer within sandbox \"1ec465cf8f11369ee84989fd6389f50a51f010bdc5ea7d6f1f4f2002b393db58\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2b23055b99f9a2e5213b7699c499d66b6e1aee235e82bf754aa906f98560f0f1\"" Apr 30 03:29:47.253255 containerd[1987]: time="2025-04-30T03:29:47.253209366Z" level=info msg="StartContainer for \"2b23055b99f9a2e5213b7699c499d66b6e1aee235e82bf754aa906f98560f0f1\"" Apr 30 03:29:47.295547 systemd[1]: Started cri-containerd-2b23055b99f9a2e5213b7699c499d66b6e1aee235e82bf754aa906f98560f0f1.scope - libcontainer container 2b23055b99f9a2e5213b7699c499d66b6e1aee235e82bf754aa906f98560f0f1. Apr 30 03:29:47.345616 containerd[1987]: time="2025-04-30T03:29:47.345565130Z" level=info msg="StartContainer for \"2b23055b99f9a2e5213b7699c499d66b6e1aee235e82bf754aa906f98560f0f1\" returns successfully" Apr 30 03:29:47.806716 ntpd[1945]: Listen normally on 7 vxlan.calico 192.168.72.128:123 Apr 30 03:29:47.806807 ntpd[1945]: Listen normally on 8 vxlan.calico [fe80::6438:55ff:fe06:63ff%4]:123 Apr 30 03:29:47.807966 ntpd[1945]: 30 Apr 03:29:47 ntpd[1945]: Listen normally on 7 vxlan.calico 192.168.72.128:123 Apr 30 03:29:47.807966 ntpd[1945]: 30 Apr 03:29:47 ntpd[1945]: Listen normally on 8 vxlan.calico [fe80::6438:55ff:fe06:63ff%4]:123 Apr 30 03:29:47.807966 ntpd[1945]: 30 Apr 03:29:47 ntpd[1945]: Listen normally on 9 calid8bd63426f5 [fe80::ecee:eeff:feee:eeee%7]:123 Apr 30 03:29:47.807966 ntpd[1945]: 30 Apr 03:29:47 ntpd[1945]: Listen normally on 10 cali7b07a194832 [fe80::ecee:eeff:feee:eeee%8]:123 Apr 30 03:29:47.807966 ntpd[1945]: 30 Apr 03:29:47 ntpd[1945]: Listen normally on 11 calicef465bc2c1 [fe80::ecee:eeff:feee:eeee%9]:123 Apr 30 03:29:47.807966 ntpd[1945]: 30 Apr 03:29:47 ntpd[1945]: Listen normally on 12 calica5a9f38e63 [fe80::ecee:eeff:feee:eeee%10]:123 Apr 30 03:29:47.807966 ntpd[1945]: 30 Apr 03:29:47 ntpd[1945]: Listen normally on 13 cali198e8466a1c [fe80::ecee:eeff:feee:eeee%11]:123 Apr 30 03:29:47.806877 ntpd[1945]: Listen normally on 9 calid8bd63426f5 [fe80::ecee:eeff:feee:eeee%7]:123 Apr 30 03:29:47.806920 ntpd[1945]: Listen normally on 10 cali7b07a194832 [fe80::ecee:eeff:feee:eeee%8]:123 Apr 30 03:29:47.806961 ntpd[1945]: Listen normally on 11 calicef465bc2c1 [fe80::ecee:eeff:feee:eeee%9]:123 Apr 30 03:29:47.807000 ntpd[1945]: Listen normally on 12 calica5a9f38e63 [fe80::ecee:eeff:feee:eeee%10]:123 Apr 30 03:29:47.807124 ntpd[1945]: Listen normally on 13 cali198e8466a1c [fe80::ecee:eeff:feee:eeee%11]:123 Apr 30 03:29:48.048925 containerd[1987]: time="2025-04-30T03:29:48.048875990Z" level=info msg="StopPodSandbox for \"d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0\"" Apr 30 03:29:48.187910 kubelet[3229]: I0430 03:29:48.187756 3229 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6c489dd647-zhfwb" podStartSLOduration=71.001333853 podStartE2EDuration="1m15.187628541s" podCreationTimestamp="2025-04-30 03:28:33 +0000 UTC" firstStartedPulling="2025-04-30 03:29:43.036525533 +0000 UTC m=+93.118954233" lastFinishedPulling="2025-04-30 03:29:47.222820217 +0000 UTC m=+97.305248921" observedRunningTime="2025-04-30 03:29:47.619224586 +0000 UTC m=+97.701653296" watchObservedRunningTime="2025-04-30 03:29:48.187628541 +0000 UTC m=+98.270057251" Apr 30 03:29:48.265970 containerd[1987]: 2025-04-30 03:29:48.186 [INFO][6447] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0" Apr 30 03:29:48.265970 containerd[1987]: 2025-04-30 03:29:48.186 [INFO][6447] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0" iface="eth0" netns="/var/run/netns/cni-ae6f5940-4ac4-c796-54ab-12f0bf38f297" Apr 30 03:29:48.265970 containerd[1987]: 2025-04-30 03:29:48.188 [INFO][6447] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0" iface="eth0" netns="/var/run/netns/cni-ae6f5940-4ac4-c796-54ab-12f0bf38f297" Apr 30 03:29:48.265970 containerd[1987]: 2025-04-30 03:29:48.189 [INFO][6447] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0" iface="eth0" netns="/var/run/netns/cni-ae6f5940-4ac4-c796-54ab-12f0bf38f297" Apr 30 03:29:48.265970 containerd[1987]: 2025-04-30 03:29:48.189 [INFO][6447] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0" Apr 30 03:29:48.265970 containerd[1987]: 2025-04-30 03:29:48.189 [INFO][6447] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0" Apr 30 03:29:48.265970 containerd[1987]: 2025-04-30 03:29:48.242 [INFO][6454] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0" HandleID="k8s-pod-network.d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0" Workload="ip--172--31--16--5-k8s-coredns--7db6d8ff4d--w297d-eth0" Apr 30 03:29:48.265970 containerd[1987]: 2025-04-30 03:29:48.242 [INFO][6454] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:48.265970 containerd[1987]: 2025-04-30 03:29:48.243 [INFO][6454] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:48.265970 containerd[1987]: 2025-04-30 03:29:48.257 [WARNING][6454] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0" HandleID="k8s-pod-network.d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0" Workload="ip--172--31--16--5-k8s-coredns--7db6d8ff4d--w297d-eth0" Apr 30 03:29:48.265970 containerd[1987]: 2025-04-30 03:29:48.257 [INFO][6454] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0" HandleID="k8s-pod-network.d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0" Workload="ip--172--31--16--5-k8s-coredns--7db6d8ff4d--w297d-eth0" Apr 30 03:29:48.265970 containerd[1987]: 2025-04-30 03:29:48.259 [INFO][6454] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:48.265970 containerd[1987]: 2025-04-30 03:29:48.262 [INFO][6447] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0" Apr 30 03:29:48.265970 containerd[1987]: time="2025-04-30T03:29:48.265371746Z" level=info msg="TearDown network for sandbox \"d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0\" successfully" Apr 30 03:29:48.265970 containerd[1987]: time="2025-04-30T03:29:48.265403576Z" level=info msg="StopPodSandbox for \"d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0\" returns successfully" Apr 30 03:29:48.272706 containerd[1987]: time="2025-04-30T03:29:48.269290865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-w297d,Uid:55fd6c70-afa8-486e-b82a-44a54b2e3758,Namespace:kube-system,Attempt:1,}" Apr 30 03:29:48.273034 systemd[1]: run-netns-cni\x2dae6f5940\x2d4ac4\x2dc796\x2d54ab\x2d12f0bf38f297.mount: Deactivated successfully. Apr 30 03:29:48.608642 systemd-networkd[1891]: cali86d506f40a1: Link UP Apr 30 03:29:48.609177 systemd-networkd[1891]: cali86d506f40a1: Gained carrier Apr 30 03:29:48.628643 (udev-worker)[6481]: Network interface NamePolicy= disabled on kernel command line. Apr 30 03:29:48.649157 containerd[1987]: 2025-04-30 03:29:48.394 [INFO][6461] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--5-k8s-coredns--7db6d8ff4d--w297d-eth0 coredns-7db6d8ff4d- kube-system 55fd6c70-afa8-486e-b82a-44a54b2e3758 1188 0 2025-04-30 03:28:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-16-5 coredns-7db6d8ff4d-w297d eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali86d506f40a1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="bf5ef8c58a56fc62f72fbbd56eabd25530bf18d5f976fef70fb28646d607d65e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-w297d" WorkloadEndpoint="ip--172--31--16--5-k8s-coredns--7db6d8ff4d--w297d-" Apr 30 03:29:48.649157 containerd[1987]: 2025-04-30 03:29:48.394 [INFO][6461] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bf5ef8c58a56fc62f72fbbd56eabd25530bf18d5f976fef70fb28646d607d65e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-w297d" WorkloadEndpoint="ip--172--31--16--5-k8s-coredns--7db6d8ff4d--w297d-eth0" Apr 30 03:29:48.649157 containerd[1987]: 2025-04-30 03:29:48.469 [INFO][6473] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bf5ef8c58a56fc62f72fbbd56eabd25530bf18d5f976fef70fb28646d607d65e" HandleID="k8s-pod-network.bf5ef8c58a56fc62f72fbbd56eabd25530bf18d5f976fef70fb28646d607d65e" Workload="ip--172--31--16--5-k8s-coredns--7db6d8ff4d--w297d-eth0" Apr 30 03:29:48.649157 containerd[1987]: 2025-04-30 03:29:48.493 [INFO][6473] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bf5ef8c58a56fc62f72fbbd56eabd25530bf18d5f976fef70fb28646d607d65e" HandleID="k8s-pod-network.bf5ef8c58a56fc62f72fbbd56eabd25530bf18d5f976fef70fb28646d607d65e" Workload="ip--172--31--16--5-k8s-coredns--7db6d8ff4d--w297d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050fd0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-16-5", "pod":"coredns-7db6d8ff4d-w297d", "timestamp":"2025-04-30 03:29:48.469503934 +0000 UTC"}, Hostname:"ip-172-31-16-5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:29:48.649157 containerd[1987]: 2025-04-30 03:29:48.494 [INFO][6473] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:48.649157 containerd[1987]: 2025-04-30 03:29:48.495 [INFO][6473] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:48.649157 containerd[1987]: 2025-04-30 03:29:48.496 [INFO][6473] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-5' Apr 30 03:29:48.649157 containerd[1987]: 2025-04-30 03:29:48.503 [INFO][6473] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bf5ef8c58a56fc62f72fbbd56eabd25530bf18d5f976fef70fb28646d607d65e" host="ip-172-31-16-5" Apr 30 03:29:48.649157 containerd[1987]: 2025-04-30 03:29:48.517 [INFO][6473] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-16-5" Apr 30 03:29:48.649157 containerd[1987]: 2025-04-30 03:29:48.538 [INFO][6473] ipam/ipam.go 489: Trying affinity for 192.168.72.128/26 host="ip-172-31-16-5" Apr 30 03:29:48.649157 containerd[1987]: 2025-04-30 03:29:48.541 [INFO][6473] ipam/ipam.go 155: Attempting to load block cidr=192.168.72.128/26 host="ip-172-31-16-5" Apr 30 03:29:48.649157 containerd[1987]: 2025-04-30 03:29:48.548 [INFO][6473] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.72.128/26 host="ip-172-31-16-5" Apr 30 03:29:48.649157 containerd[1987]: 2025-04-30 03:29:48.548 [INFO][6473] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.72.128/26 handle="k8s-pod-network.bf5ef8c58a56fc62f72fbbd56eabd25530bf18d5f976fef70fb28646d607d65e" host="ip-172-31-16-5" Apr 30 03:29:48.649157 containerd[1987]: 2025-04-30 03:29:48.552 [INFO][6473] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bf5ef8c58a56fc62f72fbbd56eabd25530bf18d5f976fef70fb28646d607d65e Apr 30 03:29:48.649157 containerd[1987]: 2025-04-30 03:29:48.564 [INFO][6473] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.72.128/26 handle="k8s-pod-network.bf5ef8c58a56fc62f72fbbd56eabd25530bf18d5f976fef70fb28646d607d65e" host="ip-172-31-16-5" Apr 30 03:29:48.649157 containerd[1987]: 2025-04-30 03:29:48.578 [INFO][6473] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.72.134/26] block=192.168.72.128/26 handle="k8s-pod-network.bf5ef8c58a56fc62f72fbbd56eabd25530bf18d5f976fef70fb28646d607d65e" host="ip-172-31-16-5" Apr 30 03:29:48.649157 containerd[1987]: 2025-04-30 03:29:48.579 [INFO][6473] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.72.134/26] handle="k8s-pod-network.bf5ef8c58a56fc62f72fbbd56eabd25530bf18d5f976fef70fb28646d607d65e" host="ip-172-31-16-5" Apr 30 03:29:48.649157 containerd[1987]: 2025-04-30 03:29:48.579 [INFO][6473] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 03:29:48.649157 containerd[1987]: 2025-04-30 03:29:48.579 [INFO][6473] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.72.134/26] IPv6=[] ContainerID="bf5ef8c58a56fc62f72fbbd56eabd25530bf18d5f976fef70fb28646d607d65e" HandleID="k8s-pod-network.bf5ef8c58a56fc62f72fbbd56eabd25530bf18d5f976fef70fb28646d607d65e" Workload="ip--172--31--16--5-k8s-coredns--7db6d8ff4d--w297d-eth0" Apr 30 03:29:48.652964 containerd[1987]: 2025-04-30 03:29:48.589 [INFO][6461] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bf5ef8c58a56fc62f72fbbd56eabd25530bf18d5f976fef70fb28646d607d65e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-w297d" WorkloadEndpoint="ip--172--31--16--5-k8s-coredns--7db6d8ff4d--w297d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--5-k8s-coredns--7db6d8ff4d--w297d-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"55fd6c70-afa8-486e-b82a-44a54b2e3758", ResourceVersion:"1188", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 28, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-5", ContainerID:"", Pod:"coredns-7db6d8ff4d-w297d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali86d506f40a1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:48.652964 containerd[1987]: 2025-04-30 03:29:48.591 [INFO][6461] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.72.134/32] ContainerID="bf5ef8c58a56fc62f72fbbd56eabd25530bf18d5f976fef70fb28646d607d65e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-w297d" WorkloadEndpoint="ip--172--31--16--5-k8s-coredns--7db6d8ff4d--w297d-eth0" Apr 30 03:29:48.652964 containerd[1987]: 2025-04-30 03:29:48.591 [INFO][6461] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali86d506f40a1 ContainerID="bf5ef8c58a56fc62f72fbbd56eabd25530bf18d5f976fef70fb28646d607d65e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-w297d" WorkloadEndpoint="ip--172--31--16--5-k8s-coredns--7db6d8ff4d--w297d-eth0" Apr 30 03:29:48.652964 containerd[1987]: 2025-04-30 03:29:48.609 [INFO][6461] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bf5ef8c58a56fc62f72fbbd56eabd25530bf18d5f976fef70fb28646d607d65e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-w297d" WorkloadEndpoint="ip--172--31--16--5-k8s-coredns--7db6d8ff4d--w297d-eth0" Apr 30 
03:29:48.652964 containerd[1987]: 2025-04-30 03:29:48.611 [INFO][6461] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="bf5ef8c58a56fc62f72fbbd56eabd25530bf18d5f976fef70fb28646d607d65e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-w297d" WorkloadEndpoint="ip--172--31--16--5-k8s-coredns--7db6d8ff4d--w297d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--5-k8s-coredns--7db6d8ff4d--w297d-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"55fd6c70-afa8-486e-b82a-44a54b2e3758", ResourceVersion:"1188", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 28, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-5", ContainerID:"bf5ef8c58a56fc62f72fbbd56eabd25530bf18d5f976fef70fb28646d607d65e", Pod:"coredns-7db6d8ff4d-w297d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali86d506f40a1", MAC:"3a:da:fb:b4:b1:30", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:48.654416 containerd[1987]: 2025-04-30 03:29:48.634 [INFO][6461] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bf5ef8c58a56fc62f72fbbd56eabd25530bf18d5f976fef70fb28646d607d65e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-w297d" WorkloadEndpoint="ip--172--31--16--5-k8s-coredns--7db6d8ff4d--w297d-eth0" Apr 30 03:29:48.760669 containerd[1987]: time="2025-04-30T03:29:48.760248268Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:48.760669 containerd[1987]: time="2025-04-30T03:29:48.760326941Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:48.760669 containerd[1987]: time="2025-04-30T03:29:48.760376449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:48.760669 containerd[1987]: time="2025-04-30T03:29:48.760507271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:48.816552 systemd[1]: Started cri-containerd-bf5ef8c58a56fc62f72fbbd56eabd25530bf18d5f976fef70fb28646d607d65e.scope - libcontainer container bf5ef8c58a56fc62f72fbbd56eabd25530bf18d5f976fef70fb28646d607d65e. Apr 30 03:29:48.932067 containerd[1987]: time="2025-04-30T03:29:48.931861894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-w297d,Uid:55fd6c70-afa8-486e-b82a-44a54b2e3758,Namespace:kube-system,Attempt:1,} returns sandbox id \"bf5ef8c58a56fc62f72fbbd56eabd25530bf18d5f976fef70fb28646d607d65e\"" Apr 30 03:29:48.941647 containerd[1987]: time="2025-04-30T03:29:48.941329835Z" level=info msg="CreateContainer within sandbox \"bf5ef8c58a56fc62f72fbbd56eabd25530bf18d5f976fef70fb28646d607d65e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 03:29:48.994841 containerd[1987]: time="2025-04-30T03:29:48.994697040Z" level=info msg="CreateContainer within sandbox \"bf5ef8c58a56fc62f72fbbd56eabd25530bf18d5f976fef70fb28646d607d65e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4792bf7d5a3cd623f2d1ad9ef75037d587e136dab629c24cad1536a61fe97369\"" Apr 30 03:29:48.995712 containerd[1987]: time="2025-04-30T03:29:48.995659767Z" level=info msg="StartContainer for \"4792bf7d5a3cd623f2d1ad9ef75037d587e136dab629c24cad1536a61fe97369\"" Apr 30 03:29:49.126276 systemd[1]: Started cri-containerd-4792bf7d5a3cd623f2d1ad9ef75037d587e136dab629c24cad1536a61fe97369.scope - libcontainer container 4792bf7d5a3cd623f2d1ad9ef75037d587e136dab629c24cad1536a61fe97369. Apr 30 03:29:49.355643 containerd[1987]: time="2025-04-30T03:29:49.355060396Z" level=info msg="StartContainer for \"4792bf7d5a3cd623f2d1ad9ef75037d587e136dab629c24cad1536a61fe97369\" returns successfully" Apr 30 03:29:50.233979 containerd[1987]: time="2025-04-30T03:29:50.233650491Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:50.234950 containerd[1987]: time="2025-04-30T03:29:50.234912354Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" Apr 30 03:29:50.237530 containerd[1987]: time="2025-04-30T03:29:50.236708453Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:50.239777 containerd[1987]: time="2025-04-30T03:29:50.239732632Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:50.240954 containerd[1987]: time="2025-04-30T03:29:50.240912795Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 3.017798872s" Apr 30 03:29:50.240954 containerd[1987]: time="2025-04-30T03:29:50.240952146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" Apr 30 03:29:50.262840 
containerd[1987]: time="2025-04-30T03:29:50.262803492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" Apr 30 03:29:50.283680 containerd[1987]: time="2025-04-30T03:29:50.283297481Z" level=info msg="CreateContainer within sandbox \"3a426cec9e8a3e0aaa1d3073b8fba04bae05bb456937c85d29f6fd0ade237f2b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 30 03:29:50.320363 containerd[1987]: time="2025-04-30T03:29:50.317692956Z" level=info msg="CreateContainer within sandbox \"3a426cec9e8a3e0aaa1d3073b8fba04bae05bb456937c85d29f6fd0ade237f2b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1bd34870056618c4572d1c7c14cfc1bb42c8dc380c1220f3777a48cd6c5bb11f\"" Apr 30 03:29:50.319321 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3532046604.mount: Deactivated successfully. Apr 30 03:29:50.322569 containerd[1987]: time="2025-04-30T03:29:50.322391000Z" level=info msg="StartContainer for \"1bd34870056618c4572d1c7c14cfc1bb42c8dc380c1220f3777a48cd6c5bb11f\"" Apr 30 03:29:50.375316 systemd[1]: Started cri-containerd-1bd34870056618c4572d1c7c14cfc1bb42c8dc380c1220f3777a48cd6c5bb11f.scope - libcontainer container 1bd34870056618c4572d1c7c14cfc1bb42c8dc380c1220f3777a48cd6c5bb11f. Apr 30 03:29:50.443055 containerd[1987]: time="2025-04-30T03:29:50.442692504Z" level=info msg="StartContainer for \"1bd34870056618c4572d1c7c14cfc1bb42c8dc380c1220f3777a48cd6c5bb11f\" returns successfully" Apr 30 03:29:50.570424 systemd[1]: Started sshd@20-172.31.16.5:22-147.75.109.163:43726.service - OpenSSH per-connection server daemon (147.75.109.163:43726). Apr 30 03:29:50.657522 kubelet[3229]: I0430 03:29:50.657036 3229 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-589dd46bc6-rd5jw" podStartSLOduration=70.238066121 podStartE2EDuration="1m16.656997611s" podCreationTimestamp="2025-04-30 03:28:34 +0000 UTC" firstStartedPulling="2025-04-30 03:29:43.823035664 +0000 UTC m=+93.905464360" lastFinishedPulling="2025-04-30 03:29:50.24196716 +0000 UTC m=+100.324395850" observedRunningTime="2025-04-30 03:29:50.655548265 +0000 UTC m=+100.737976975" watchObservedRunningTime="2025-04-30 03:29:50.656997611 +0000 UTC m=+100.739426326" Apr 30 03:29:50.657522 kubelet[3229]: I0430 03:29:50.657515 3229 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-w297d" podStartSLOduration=85.657508623 podStartE2EDuration="1m25.657508623s" podCreationTimestamp="2025-04-30 03:28:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:29:49.68858626 +0000 UTC m=+99.771014971" watchObservedRunningTime="2025-04-30 03:29:50.657508623 +0000 UTC m=+100.739937336" Apr 30 03:29:50.660262 systemd-networkd[1891]: cali86d506f40a1: Gained IPv6LL Apr 30 03:29:50.886178 sshd[6636]: Accepted publickey for core from 147.75.109.163 port 43726 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:29:50.890239 sshd[6636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:29:50.899286 systemd-logind[1956]: New session 21 of user core. Apr 30 03:29:50.904279 systemd[1]: Started session-21.scope - Session 21 of User core. 
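The kubelet pod_startup_latency_tracker entries above follow a simple relationship: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that same span minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A minimal Go sketch reproducing the calico-kube-controllers numbers from the logged timestamps (illustrative arithmetic only, not kubelet source; the wall-clock result differs from the logged SLO value by a few nanoseconds because kubelet subtracts the monotonic-clock pull window):

```go
package main

import (
	"fmt"
	"time"
)

// mustParse reads timestamps in the format kubelet logs them,
// e.g. "2025-04-30 03:28:34 +0000 UTC" (monotonic "m=+..." suffix dropped).
func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-04-30 03:28:34 +0000 UTC")             // podCreationTimestamp
	firstPull := mustParse("2025-04-30 03:29:43.823035664 +0000 UTC") // firstStartedPulling
	lastPull := mustParse("2025-04-30 03:29:50.24196716 +0000 UTC")   // lastFinishedPulling
	observed := mustParse("2025-04-30 03:29:50.656997611 +0000 UTC")  // watchObservedRunningTime

	e2e := observed.Sub(created)    // 1m16.656997611s == podStartE2EDuration
	pull := lastPull.Sub(firstPull) // ~6.418931s spent pulling images
	slo := e2e - pull               // ~70.238066s == podStartSLOduration
	fmt.Println(e2e, pull, slo)
}
```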
Apr 30 03:29:52.106283 sshd[6636]: pam_unix(sshd:session): session closed for user core Apr 30 03:29:52.117495 systemd[1]: sshd@20-172.31.16.5:22-147.75.109.163:43726.service: Deactivated successfully. Apr 30 03:29:52.124880 systemd[1]: session-21.scope: Deactivated successfully. Apr 30 03:29:52.132224 systemd-logind[1956]: Session 21 logged out. Waiting for processes to exit. Apr 30 03:29:52.135835 systemd-logind[1956]: Removed session 21. Apr 30 03:29:52.175425 containerd[1987]: time="2025-04-30T03:29:52.175369672Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:52.179567 containerd[1987]: time="2025-04-30T03:29:52.179284890Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" Apr 30 03:29:52.181757 containerd[1987]: time="2025-04-30T03:29:52.181712088Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:52.186204 containerd[1987]: time="2025-04-30T03:29:52.186151723Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:52.186895 containerd[1987]: time="2025-04-30T03:29:52.186858500Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 1.924020173s" Apr 30 03:29:52.187005 containerd[1987]: time="2025-04-30T03:29:52.186895115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" Apr 30 03:29:52.192215 containerd[1987]: time="2025-04-30T03:29:52.191742432Z" level=info msg="CreateContainer within sandbox \"3ef81b8bb9850f1ebbe6ef0b3b02bbfb2744db5816b3b4647b907de9de8b3747\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 30 03:29:52.215366 containerd[1987]: time="2025-04-30T03:29:52.215304226Z" level=info msg="CreateContainer within sandbox \"3ef81b8bb9850f1ebbe6ef0b3b02bbfb2744db5816b3b4647b907de9de8b3747\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"6e29f8170aa92b8aae5654587f66793ad9f981bb566a6586f822cbc50834de73\"" Apr 30 03:29:52.217642 containerd[1987]: time="2025-04-30T03:29:52.216213379Z" level=info msg="StartContainer for \"6e29f8170aa92b8aae5654587f66793ad9f981bb566a6586f822cbc50834de73\"" Apr 30 03:29:52.267221 systemd[1]: Started cri-containerd-6e29f8170aa92b8aae5654587f66793ad9f981bb566a6586f822cbc50834de73.scope - libcontainer container 6e29f8170aa92b8aae5654587f66793ad9f981bb566a6586f822cbc50834de73. 
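The node-driver-registrar pull record just above reports the bytes fetched from the registry (bytes read=13991773), the reported image size (15484347), and the wall-clock pull time (1.924020173s), which together give a rough effective throughput. A small hedged calculation (reading the first count as bytes fetched and the second as the reported unpacked size is an assumption about these containerd fields, not documented behavior):

```go
package main

import "fmt"

func main() {
	const (
		bytesRead    = 13991773    // "stop pulling image ...: bytes read=13991773"
		reportedSize = 15484347    // size "15484347" in the Pulled image record
		pullSeconds  = 1.924020173 // "... in 1.924020173s"
	)
	mib := func(b float64) float64 { return b / (1 << 20) }
	fmt.Printf("fetched %.2f MiB in %.3fs -> %.2f MiB/s\n",
		mib(bytesRead), pullSeconds, mib(bytesRead)/pullSeconds)
	fmt.Printf("reported image size: %.2f MiB\n", mib(reportedSize))
}
```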
Apr 30 03:29:52.306473 containerd[1987]: time="2025-04-30T03:29:52.306420148Z" level=info msg="StartContainer for \"6e29f8170aa92b8aae5654587f66793ad9f981bb566a6586f822cbc50834de73\" returns successfully" Apr 30 03:29:52.806698 ntpd[1945]: Listen normally on 14 cali86d506f40a1 [fe80::ecee:eeff:feee:eeee%12]:123 Apr 30 03:29:53.546062 kubelet[3229]: I0430 03:29:53.541689 3229 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 30 03:29:53.551851 kubelet[3229]: I0430 03:29:53.551289 3229 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 30 03:29:57.154384 systemd[1]: Started sshd@21-172.31.16.5:22-147.75.109.163:45126.service - OpenSSH per-connection server daemon (147.75.109.163:45126). Apr 30 03:29:57.456037 sshd[6716]: Accepted publickey for core from 147.75.109.163 port 45126 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:29:57.459730 sshd[6716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:29:57.467499 systemd-logind[1956]: New session 22 of user core. Apr 30 03:29:57.472246 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 30 03:29:58.014974 sshd[6716]: pam_unix(sshd:session): session closed for user core Apr 30 03:29:58.019201 systemd[1]: sshd@21-172.31.16.5:22-147.75.109.163:45126.service: Deactivated successfully. Apr 30 03:29:58.021745 systemd[1]: session-22.scope: Deactivated successfully. Apr 30 03:29:58.024269 systemd-logind[1956]: Session 22 logged out. Waiting for processes to exit. Apr 30 03:29:58.025663 systemd-logind[1956]: Removed session 22. Apr 30 03:29:58.072371 systemd[1]: Started sshd@22-172.31.16.5:22-147.75.109.163:45136.service - OpenSSH per-connection server daemon (147.75.109.163:45136). Apr 30 03:29:58.324559 sshd[6731]: Accepted publickey for core from 147.75.109.163 port 45136 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:29:58.325618 sshd[6731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:29:58.332560 systemd-logind[1956]: New session 23 of user core. Apr 30 03:29:58.336251 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 30 03:29:59.195336 sshd[6731]: pam_unix(sshd:session): session closed for user core Apr 30 03:29:59.199107 systemd[1]: sshd@22-172.31.16.5:22-147.75.109.163:45136.service: Deactivated successfully. Apr 30 03:29:59.201261 systemd[1]: session-23.scope: Deactivated successfully. Apr 30 03:29:59.202689 systemd-logind[1956]: Session 23 logged out. Waiting for processes to exit. Apr 30 03:29:59.204796 systemd-logind[1956]: Removed session 23. Apr 30 03:29:59.242189 systemd[1]: Started sshd@23-172.31.16.5:22-147.75.109.163:45152.service - OpenSSH per-connection server daemon (147.75.109.163:45152). Apr 30 03:29:59.514983 sshd[6752]: Accepted publickey for core from 147.75.109.163 port 45152 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:29:59.516604 sshd[6752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:29:59.521785 systemd-logind[1956]: New session 24 of user core. Apr 30 03:29:59.526259 systemd[1]: Started session-24.scope - Session 24 of User core.
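The csi_plugin entries above show the kubelet validating and then registering the csi.tigera.io driver over its Unix socket. A hypothetical stand-alone probe (not kubelet or Calico code) that checks whether the logged socket path is accepting connections:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Socket path taken verbatim from the kubelet csi_plugin log lines.
	const sock = "/var/lib/kubelet/plugins/csi.tigera.io/csi.sock"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Println("CSI plugin socket not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("CSI plugin socket is accepting connections")
}
```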
Apr 30 03:30:03.964483 sshd[6752]: pam_unix(sshd:session): session closed for user core Apr 30 03:30:03.976478 systemd[1]: sshd@23-172.31.16.5:22-147.75.109.163:45152.service: Deactivated successfully. Apr 30 03:30:03.982715 systemd[1]: session-24.scope: Deactivated successfully. Apr 30 03:30:03.988681 systemd-logind[1956]: Session 24 logged out. Waiting for processes to exit. Apr 30 03:30:03.990960 systemd-logind[1956]: Removed session 24. Apr 30 03:30:04.023710 systemd[1]: Started sshd@24-172.31.16.5:22-147.75.109.163:45158.service - OpenSSH per-connection server daemon (147.75.109.163:45158). Apr 30 03:30:04.391321 sshd[6776]: Accepted publickey for core from 147.75.109.163 port 45158 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:30:04.395801 sshd[6776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:30:04.402754 systemd-logind[1956]: New session 25 of user core. Apr 30 03:30:04.409299 systemd[1]: Started session-25.scope - Session 25 of User core. Apr 30 03:30:05.988616 sshd[6776]: pam_unix(sshd:session): session closed for user core Apr 30 03:30:05.995707 systemd[1]: sshd@24-172.31.16.5:22-147.75.109.163:45158.service: Deactivated successfully. Apr 30 03:30:05.999658 systemd[1]: session-25.scope: Deactivated successfully. Apr 30 03:30:06.002633 systemd-logind[1956]: Session 25 logged out. Waiting for processes to exit. Apr 30 03:30:06.005291 systemd-logind[1956]: Removed session 25. Apr 30 03:30:06.035224 systemd[1]: Started sshd@25-172.31.16.5:22-147.75.109.163:45166.service - OpenSSH per-connection server daemon (147.75.109.163:45166). Apr 30 03:30:06.319767 sshd[6808]: Accepted publickey for core from 147.75.109.163 port 45166 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:30:06.321390 sshd[6808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:30:06.332715 systemd-logind[1956]: New session 26 of user core. Apr 30 03:30:06.335232 systemd[1]: Started session-26.scope - Session 26 of User core. Apr 30 03:30:06.598731 kubelet[3229]: I0430 03:30:06.594446 3229 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-jbvr9" podStartSLOduration=83.374720253 podStartE2EDuration="1m32.545999405s" podCreationTimestamp="2025-04-30 03:28:34 +0000 UTC" firstStartedPulling="2025-04-30 03:29:43.016567515 +0000 UTC m=+93.098996218" lastFinishedPulling="2025-04-30 03:29:52.187846678 +0000 UTC m=+102.270275370" observedRunningTime="2025-04-30 03:29:52.674278396 +0000 UTC m=+102.756707127" watchObservedRunningTime="2025-04-30 03:30:06.545999405 +0000 UTC m=+116.628428115" Apr 30 03:30:06.872824 sshd[6808]: pam_unix(sshd:session): session closed for user core Apr 30 03:30:06.887848 systemd[1]: sshd@25-172.31.16.5:22-147.75.109.163:45166.service: Deactivated successfully. Apr 30 03:30:06.892680 systemd[1]: session-26.scope: Deactivated successfully. Apr 30 03:30:06.895682 systemd-logind[1956]: Session 26 logged out. Waiting for processes to exit. Apr 30 03:30:06.898321 systemd-logind[1956]: Removed session 26. Apr 30 03:30:10.361094 containerd[1987]: time="2025-04-30T03:30:10.360980741Z" level=info msg="StopPodSandbox for \"d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0\"" Apr 30 03:30:11.442581 containerd[1987]: 2025-04-30 03:30:11.024 [WARNING][6841] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--5-k8s-coredns--7db6d8ff4d--w297d-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"55fd6c70-afa8-486e-b82a-44a54b2e3758", ResourceVersion:"1204", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 28, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-5", ContainerID:"bf5ef8c58a56fc62f72fbbd56eabd25530bf18d5f976fef70fb28646d607d65e", Pod:"coredns-7db6d8ff4d-w297d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali86d506f40a1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:11.442581 containerd[1987]: 2025-04-30 03:30:11.036 [INFO][6841] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0" Apr 30 03:30:11.442581 containerd[1987]: 2025-04-30 03:30:11.036 [INFO][6841] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0" iface="eth0" netns="" Apr 30 03:30:11.442581 containerd[1987]: 2025-04-30 03:30:11.036 [INFO][6841] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0" Apr 30 03:30:11.442581 containerd[1987]: 2025-04-30 03:30:11.036 [INFO][6841] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0" Apr 30 03:30:11.442581 containerd[1987]: 2025-04-30 03:30:11.417 [INFO][6848] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0" HandleID="k8s-pod-network.d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0" Workload="ip--172--31--16--5-k8s-coredns--7db6d8ff4d--w297d-eth0" Apr 30 03:30:11.442581 containerd[1987]: 2025-04-30 03:30:11.419 [INFO][6848] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:11.442581 containerd[1987]: 2025-04-30 03:30:11.419 [INFO][6848] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:30:11.442581 containerd[1987]: 2025-04-30 03:30:11.433 [WARNING][6848] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0" HandleID="k8s-pod-network.d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0" Workload="ip--172--31--16--5-k8s-coredns--7db6d8ff4d--w297d-eth0" Apr 30 03:30:11.442581 containerd[1987]: 2025-04-30 03:30:11.433 [INFO][6848] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0" HandleID="k8s-pod-network.d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0" Workload="ip--172--31--16--5-k8s-coredns--7db6d8ff4d--w297d-eth0" Apr 30 03:30:11.442581 containerd[1987]: 2025-04-30 03:30:11.435 [INFO][6848] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:11.442581 containerd[1987]: 2025-04-30 03:30:11.440 [INFO][6841] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0" Apr 30 03:30:11.451500 containerd[1987]: time="2025-04-30T03:30:11.451440757Z" level=info msg="TearDown network for sandbox \"d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0\" successfully" Apr 30 03:30:11.451500 containerd[1987]: time="2025-04-30T03:30:11.451489981Z" level=info msg="StopPodSandbox for \"d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0\" returns successfully" Apr 30 03:30:11.591771 containerd[1987]: time="2025-04-30T03:30:11.591648206Z" level=info msg="RemovePodSandbox for \"d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0\"" Apr 30 03:30:11.602003 containerd[1987]: time="2025-04-30T03:30:11.601930586Z" level=info msg="Forcibly stopping sandbox \"d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0\"" Apr 30 03:30:11.706677 containerd[1987]: 2025-04-30 03:30:11.658 [WARNING][6867] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--5-k8s-coredns--7db6d8ff4d--w297d-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"55fd6c70-afa8-486e-b82a-44a54b2e3758", ResourceVersion:"1204", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 28, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-5", ContainerID:"bf5ef8c58a56fc62f72fbbd56eabd25530bf18d5f976fef70fb28646d607d65e", Pod:"coredns-7db6d8ff4d-w297d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali86d506f40a1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:11.706677 containerd[1987]: 2025-04-30 03:30:11.658 [INFO][6867] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0" Apr 30 03:30:11.706677 containerd[1987]: 2025-04-30 03:30:11.659 [INFO][6867] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0" iface="eth0" netns="" Apr 30 03:30:11.706677 containerd[1987]: 2025-04-30 03:30:11.659 [INFO][6867] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0" Apr 30 03:30:11.706677 containerd[1987]: 2025-04-30 03:30:11.659 [INFO][6867] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0" Apr 30 03:30:11.706677 containerd[1987]: 2025-04-30 03:30:11.688 [INFO][6874] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0" HandleID="k8s-pod-network.d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0" Workload="ip--172--31--16--5-k8s-coredns--7db6d8ff4d--w297d-eth0" Apr 30 03:30:11.706677 containerd[1987]: 2025-04-30 03:30:11.688 [INFO][6874] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:11.706677 containerd[1987]: 2025-04-30 03:30:11.688 [INFO][6874] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:30:11.706677 containerd[1987]: 2025-04-30 03:30:11.697 [WARNING][6874] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0" HandleID="k8s-pod-network.d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0" Workload="ip--172--31--16--5-k8s-coredns--7db6d8ff4d--w297d-eth0" Apr 30 03:30:11.706677 containerd[1987]: 2025-04-30 03:30:11.697 [INFO][6874] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0" HandleID="k8s-pod-network.d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0" Workload="ip--172--31--16--5-k8s-coredns--7db6d8ff4d--w297d-eth0" Apr 30 03:30:11.706677 containerd[1987]: 2025-04-30 03:30:11.699 [INFO][6874] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:11.706677 containerd[1987]: 2025-04-30 03:30:11.703 [INFO][6867] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0" Apr 30 03:30:11.706677 containerd[1987]: time="2025-04-30T03:30:11.706657467Z" level=info msg="TearDown network for sandbox \"d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0\" successfully" Apr 30 03:30:11.743628 containerd[1987]: time="2025-04-30T03:30:11.743506372Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:30:11.776135 containerd[1987]: time="2025-04-30T03:30:11.775989049Z" level=info msg="RemovePodSandbox \"d284b3951fd93873afc170bcbc522521c5de40f3a07cc6913e75b56b0e3471c0\" returns successfully" Apr 30 03:30:11.781504 containerd[1987]: time="2025-04-30T03:30:11.781463189Z" level=info msg="StopPodSandbox for \"5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf\"" Apr 30 03:30:11.886305 containerd[1987]: 2025-04-30 03:30:11.836 [WARNING][6892] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--5-k8s-csi--node--driver--jbvr9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c125f289-79ba-4045-ac98-3376fc26a663", ResourceVersion:"1235", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 28, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-5", ContainerID:"3ef81b8bb9850f1ebbe6ef0b3b02bbfb2744db5816b3b4647b907de9de8b3747", Pod:"csi-node-driver-jbvr9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.72.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calica5a9f38e63", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:11.886305 containerd[1987]: 2025-04-30 03:30:11.837 [INFO][6892] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf" Apr 30 03:30:11.886305 containerd[1987]: 2025-04-30 03:30:11.837 [INFO][6892] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf" iface="eth0" netns="" Apr 30 03:30:11.886305 containerd[1987]: 2025-04-30 03:30:11.837 [INFO][6892] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf" Apr 30 03:30:11.886305 containerd[1987]: 2025-04-30 03:30:11.837 [INFO][6892] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf" Apr 30 03:30:11.886305 containerd[1987]: 2025-04-30 03:30:11.873 [INFO][6899] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf" HandleID="k8s-pod-network.5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf" Workload="ip--172--31--16--5-k8s-csi--node--driver--jbvr9-eth0" Apr 30 03:30:11.886305 containerd[1987]: 2025-04-30 03:30:11.874 [INFO][6899] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:11.886305 containerd[1987]: 2025-04-30 03:30:11.874 [INFO][6899] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:11.886305 containerd[1987]: 2025-04-30 03:30:11.880 [WARNING][6899] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf" HandleID="k8s-pod-network.5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf" Workload="ip--172--31--16--5-k8s-csi--node--driver--jbvr9-eth0" Apr 30 03:30:11.886305 containerd[1987]: 2025-04-30 03:30:11.880 [INFO][6899] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf" HandleID="k8s-pod-network.5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf" Workload="ip--172--31--16--5-k8s-csi--node--driver--jbvr9-eth0" Apr 30 03:30:11.886305 containerd[1987]: 2025-04-30 03:30:11.881 [INFO][6899] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:11.886305 containerd[1987]: 2025-04-30 03:30:11.884 [INFO][6892] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf" Apr 30 03:30:11.889211 containerd[1987]: time="2025-04-30T03:30:11.886419518Z" level=info msg="TearDown network for sandbox \"5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf\" successfully" Apr 30 03:30:11.889211 containerd[1987]: time="2025-04-30T03:30:11.886624811Z" level=info msg="StopPodSandbox for \"5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf\" returns successfully" Apr 30 03:30:11.889211 containerd[1987]: time="2025-04-30T03:30:11.888520057Z" level=info msg="RemovePodSandbox for \"5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf\"" Apr 30 03:30:11.889211 containerd[1987]: time="2025-04-30T03:30:11.888552001Z" level=info msg="Forcibly stopping sandbox \"5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf\"" Apr 30 03:30:11.924416 systemd[1]: Started sshd@26-172.31.16.5:22-147.75.109.163:52816.service - OpenSSH per-connection server daemon (147.75.109.163:52816). Apr 30 03:30:12.059835 containerd[1987]: 2025-04-30 03:30:11.968 [WARNING][6917] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--5-k8s-csi--node--driver--jbvr9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c125f289-79ba-4045-ac98-3376fc26a663", ResourceVersion:"1235", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 28, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-5", ContainerID:"3ef81b8bb9850f1ebbe6ef0b3b02bbfb2744db5816b3b4647b907de9de8b3747", Pod:"csi-node-driver-jbvr9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.72.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calica5a9f38e63", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:12.059835 containerd[1987]: 2025-04-30 03:30:11.970 [INFO][6917] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf" Apr 30 03:30:12.059835 containerd[1987]: 2025-04-30 03:30:11.970 [INFO][6917] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf" iface="eth0" netns="" Apr 30 03:30:12.059835 containerd[1987]: 2025-04-30 03:30:11.970 [INFO][6917] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf" Apr 30 03:30:12.059835 containerd[1987]: 2025-04-30 03:30:11.970 [INFO][6917] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf" Apr 30 03:30:12.059835 containerd[1987]: 2025-04-30 03:30:12.039 [INFO][6926] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf" HandleID="k8s-pod-network.5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf" Workload="ip--172--31--16--5-k8s-csi--node--driver--jbvr9-eth0" Apr 30 03:30:12.059835 containerd[1987]: 2025-04-30 03:30:12.040 [INFO][6926] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:12.059835 containerd[1987]: 2025-04-30 03:30:12.040 [INFO][6926] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:12.059835 containerd[1987]: 2025-04-30 03:30:12.050 [WARNING][6926] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf" HandleID="k8s-pod-network.5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf" Workload="ip--172--31--16--5-k8s-csi--node--driver--jbvr9-eth0" Apr 30 03:30:12.059835 containerd[1987]: 2025-04-30 03:30:12.051 [INFO][6926] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf" HandleID="k8s-pod-network.5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf" Workload="ip--172--31--16--5-k8s-csi--node--driver--jbvr9-eth0" Apr 30 03:30:12.059835 containerd[1987]: 2025-04-30 03:30:12.052 [INFO][6926] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:12.059835 containerd[1987]: 2025-04-30 03:30:12.055 [INFO][6917] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf" Apr 30 03:30:12.059835 containerd[1987]: time="2025-04-30T03:30:12.057719813Z" level=info msg="TearDown network for sandbox \"5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf\" successfully" Apr 30 03:30:12.064163 containerd[1987]: time="2025-04-30T03:30:12.064119921Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:30:12.064411 containerd[1987]: time="2025-04-30T03:30:12.064199164Z" level=info msg="RemovePodSandbox \"5ebaa71e6c09e59fda293d20ec9d063b686872977459a08e22c9f5b2e641cdaf\" returns successfully" Apr 30 03:30:12.065344 containerd[1987]: time="2025-04-30T03:30:12.064920148Z" level=info msg="StopPodSandbox for \"b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a\"" Apr 30 03:30:12.235062 containerd[1987]: 2025-04-30 03:30:12.147 [WARNING][6945] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--5-k8s-calico--kube--controllers--589dd46bc6--rd5jw-eth0", GenerateName:"calico-kube-controllers-589dd46bc6-", Namespace:"calico-system", SelfLink:"", UID:"6a4e6e55-d984-4974-8642-752d6712e827", ResourceVersion:"1227", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 28, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"589dd46bc6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-5", ContainerID:"3a426cec9e8a3e0aaa1d3073b8fba04bae05bb456937c85d29f6fd0ade237f2b", Pod:"calico-kube-controllers-589dd46bc6-rd5jw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.72.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali198e8466a1c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:12.235062 containerd[1987]: 2025-04-30 03:30:12.152 [INFO][6945] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a" Apr 30 03:30:12.235062 containerd[1987]: 2025-04-30 03:30:12.152 [INFO][6945] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a" iface="eth0" netns="" Apr 30 03:30:12.235062 containerd[1987]: 2025-04-30 03:30:12.152 [INFO][6945] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a" Apr 30 03:30:12.235062 containerd[1987]: 2025-04-30 03:30:12.152 [INFO][6945] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a" Apr 30 03:30:12.235062 containerd[1987]: 2025-04-30 03:30:12.217 [INFO][6953] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a" HandleID="k8s-pod-network.b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a" Workload="ip--172--31--16--5-k8s-calico--kube--controllers--589dd46bc6--rd5jw-eth0" Apr 30 03:30:12.235062 containerd[1987]: 2025-04-30 03:30:12.218 [INFO][6953] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:12.235062 containerd[1987]: 2025-04-30 03:30:12.218 [INFO][6953] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:12.235062 containerd[1987]: 2025-04-30 03:30:12.226 [WARNING][6953] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a" HandleID="k8s-pod-network.b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a" Workload="ip--172--31--16--5-k8s-calico--kube--controllers--589dd46bc6--rd5jw-eth0" Apr 30 03:30:12.235062 containerd[1987]: 2025-04-30 03:30:12.226 [INFO][6953] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a" HandleID="k8s-pod-network.b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a" Workload="ip--172--31--16--5-k8s-calico--kube--controllers--589dd46bc6--rd5jw-eth0" Apr 30 03:30:12.235062 containerd[1987]: 2025-04-30 03:30:12.229 [INFO][6953] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:12.235062 containerd[1987]: 2025-04-30 03:30:12.231 [INFO][6945] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a" Apr 30 03:30:12.235062 containerd[1987]: time="2025-04-30T03:30:12.234922535Z" level=info msg="TearDown network for sandbox \"b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a\" successfully" Apr 30 03:30:12.235062 containerd[1987]: time="2025-04-30T03:30:12.234946665Z" level=info msg="StopPodSandbox for \"b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a\" returns successfully" Apr 30 03:30:12.238323 containerd[1987]: time="2025-04-30T03:30:12.235481362Z" level=info msg="RemovePodSandbox for \"b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a\"" Apr 30 03:30:12.238323 containerd[1987]: time="2025-04-30T03:30:12.235515316Z" level=info msg="Forcibly stopping sandbox \"b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a\"" Apr 30 03:30:12.301903 sshd[6923]: Accepted publickey for core from 147.75.109.163 port 52816 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:30:12.306493 sshd[6923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:30:12.321816 systemd-logind[1956]: New session 27 of user core. Apr 30 03:30:12.326543 systemd[1]: Started session-27.scope - Session 27 of User core. Apr 30 03:30:12.382046 containerd[1987]: 2025-04-30 03:30:12.293 [WARNING][6971] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--5-k8s-calico--kube--controllers--589dd46bc6--rd5jw-eth0", GenerateName:"calico-kube-controllers-589dd46bc6-", Namespace:"calico-system", SelfLink:"", UID:"6a4e6e55-d984-4974-8642-752d6712e827", ResourceVersion:"1227", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 28, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"589dd46bc6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-5", ContainerID:"3a426cec9e8a3e0aaa1d3073b8fba04bae05bb456937c85d29f6fd0ade237f2b", Pod:"calico-kube-controllers-589dd46bc6-rd5jw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.72.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali198e8466a1c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:12.382046 containerd[1987]: 2025-04-30 03:30:12.294 [INFO][6971] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a" Apr 30 03:30:12.382046 containerd[1987]: 2025-04-30 03:30:12.294 [INFO][6971] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a" iface="eth0" netns="" Apr 30 03:30:12.382046 containerd[1987]: 2025-04-30 03:30:12.294 [INFO][6971] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a" Apr 30 03:30:12.382046 containerd[1987]: 2025-04-30 03:30:12.294 [INFO][6971] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a" Apr 30 03:30:12.382046 containerd[1987]: 2025-04-30 03:30:12.356 [INFO][6978] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a" HandleID="k8s-pod-network.b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a" Workload="ip--172--31--16--5-k8s-calico--kube--controllers--589dd46bc6--rd5jw-eth0" Apr 30 03:30:12.382046 containerd[1987]: 2025-04-30 03:30:12.359 [INFO][6978] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:12.382046 containerd[1987]: 2025-04-30 03:30:12.359 [INFO][6978] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:12.382046 containerd[1987]: 2025-04-30 03:30:12.373 [WARNING][6978] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a" HandleID="k8s-pod-network.b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a" Workload="ip--172--31--16--5-k8s-calico--kube--controllers--589dd46bc6--rd5jw-eth0" Apr 30 03:30:12.382046 containerd[1987]: 2025-04-30 03:30:12.373 [INFO][6978] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a" HandleID="k8s-pod-network.b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a" Workload="ip--172--31--16--5-k8s-calico--kube--controllers--589dd46bc6--rd5jw-eth0" Apr 30 03:30:12.382046 containerd[1987]: 2025-04-30 03:30:12.376 [INFO][6978] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:12.382046 containerd[1987]: 2025-04-30 03:30:12.379 [INFO][6971] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a" Apr 30 03:30:12.382046 containerd[1987]: time="2025-04-30T03:30:12.381672994Z" level=info msg="TearDown network for sandbox \"b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a\" successfully" Apr 30 03:30:12.485705 containerd[1987]: time="2025-04-30T03:30:12.485526013Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:30:12.485705 containerd[1987]: time="2025-04-30T03:30:12.485594811Z" level=info msg="RemovePodSandbox \"b17b0ad30740e3af9f86c34c07a1fe9a515aac9337c8a0d33960414b64baa11a\" returns successfully" Apr 30 03:30:12.488379 containerd[1987]: time="2025-04-30T03:30:12.486790649Z" level=info msg="StopPodSandbox for \"95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364\"" Apr 30 03:30:12.669433 containerd[1987]: 2025-04-30 03:30:12.589 [WARNING][7002] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--5-k8s-calico--apiserver--6c489dd647--zhfwb-eth0", GenerateName:"calico-apiserver-6c489dd647-", Namespace:"calico-apiserver", SelfLink:"", UID:"2ca2a0d4-a8bb-4ffb-bf87-3cfe5b949605", ResourceVersion:"1198", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 28, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c489dd647", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-5", ContainerID:"1ec465cf8f11369ee84989fd6389f50a51f010bdc5ea7d6f1f4f2002b393db58", Pod:"calico-apiserver-6c489dd647-zhfwb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.72.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicef465bc2c1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:12.669433 containerd[1987]: 2025-04-30 03:30:12.590 [INFO][7002] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364" Apr 30 03:30:12.669433 containerd[1987]: 2025-04-30 03:30:12.590 [INFO][7002] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364" iface="eth0" netns="" Apr 30 03:30:12.669433 containerd[1987]: 2025-04-30 03:30:12.590 [INFO][7002] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364" Apr 30 03:30:12.669433 containerd[1987]: 2025-04-30 03:30:12.590 [INFO][7002] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364" Apr 30 03:30:12.669433 containerd[1987]: 2025-04-30 03:30:12.652 [INFO][7009] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364" HandleID="k8s-pod-network.95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364" Workload="ip--172--31--16--5-k8s-calico--apiserver--6c489dd647--zhfwb-eth0" Apr 30 03:30:12.669433 containerd[1987]: 2025-04-30 03:30:12.653 [INFO][7009] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:12.669433 containerd[1987]: 2025-04-30 03:30:12.653 [INFO][7009] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:12.669433 containerd[1987]: 2025-04-30 03:30:12.662 [WARNING][7009] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364" HandleID="k8s-pod-network.95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364" Workload="ip--172--31--16--5-k8s-calico--apiserver--6c489dd647--zhfwb-eth0" Apr 30 03:30:12.669433 containerd[1987]: 2025-04-30 03:30:12.662 [INFO][7009] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364" HandleID="k8s-pod-network.95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364" Workload="ip--172--31--16--5-k8s-calico--apiserver--6c489dd647--zhfwb-eth0" Apr 30 03:30:12.669433 containerd[1987]: 2025-04-30 03:30:12.664 [INFO][7009] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:12.669433 containerd[1987]: 2025-04-30 03:30:12.666 [INFO][7002] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364" Apr 30 03:30:12.671084 containerd[1987]: time="2025-04-30T03:30:12.669407603Z" level=info msg="TearDown network for sandbox \"95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364\" successfully" Apr 30 03:30:12.671084 containerd[1987]: time="2025-04-30T03:30:12.670209727Z" level=info msg="StopPodSandbox for \"95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364\" returns successfully" Apr 30 03:30:12.671084 containerd[1987]: time="2025-04-30T03:30:12.670790129Z" level=info msg="RemovePodSandbox for \"95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364\"" Apr 30 03:30:12.671084 containerd[1987]: time="2025-04-30T03:30:12.670814282Z" level=info msg="Forcibly stopping sandbox \"95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364\"" Apr 30 03:30:12.821118 containerd[1987]: 2025-04-30 03:30:12.749 [WARNING][7028] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--5-k8s-calico--apiserver--6c489dd647--zhfwb-eth0", GenerateName:"calico-apiserver-6c489dd647-", Namespace:"calico-apiserver", SelfLink:"", UID:"2ca2a0d4-a8bb-4ffb-bf87-3cfe5b949605", ResourceVersion:"1198", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 28, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c489dd647", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-5", ContainerID:"1ec465cf8f11369ee84989fd6389f50a51f010bdc5ea7d6f1f4f2002b393db58", Pod:"calico-apiserver-6c489dd647-zhfwb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.72.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicef465bc2c1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:12.821118 containerd[1987]: 2025-04-30 03:30:12.749 [INFO][7028] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364" Apr 30 03:30:12.821118 containerd[1987]: 2025-04-30 03:30:12.749 [INFO][7028] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364" iface="eth0" netns="" Apr 30 03:30:12.821118 containerd[1987]: 2025-04-30 03:30:12.749 [INFO][7028] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364" Apr 30 03:30:12.821118 containerd[1987]: 2025-04-30 03:30:12.749 [INFO][7028] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364" Apr 30 03:30:12.821118 containerd[1987]: 2025-04-30 03:30:12.800 [INFO][7038] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364" HandleID="k8s-pod-network.95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364" Workload="ip--172--31--16--5-k8s-calico--apiserver--6c489dd647--zhfwb-eth0" Apr 30 03:30:12.821118 containerd[1987]: 2025-04-30 03:30:12.800 [INFO][7038] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:12.821118 containerd[1987]: 2025-04-30 03:30:12.800 [INFO][7038] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:12.821118 containerd[1987]: 2025-04-30 03:30:12.809 [WARNING][7038] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364" HandleID="k8s-pod-network.95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364" Workload="ip--172--31--16--5-k8s-calico--apiserver--6c489dd647--zhfwb-eth0" Apr 30 03:30:12.821118 containerd[1987]: 2025-04-30 03:30:12.809 [INFO][7038] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364" HandleID="k8s-pod-network.95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364" Workload="ip--172--31--16--5-k8s-calico--apiserver--6c489dd647--zhfwb-eth0" Apr 30 03:30:12.821118 containerd[1987]: 2025-04-30 03:30:12.811 [INFO][7038] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:12.821118 containerd[1987]: 2025-04-30 03:30:12.815 [INFO][7028] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364" Apr 30 03:30:12.823892 containerd[1987]: time="2025-04-30T03:30:12.821353096Z" level=info msg="TearDown network for sandbox \"95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364\" successfully" Apr 30 03:30:12.828766 containerd[1987]: time="2025-04-30T03:30:12.828480065Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:30:12.828766 containerd[1987]: time="2025-04-30T03:30:12.828616579Z" level=info msg="RemovePodSandbox \"95883b047e74375044cdad8be7f46c2e38ac44a5cf6f0e4f90977c98c0183364\" returns successfully" Apr 30 03:30:12.829125 containerd[1987]: time="2025-04-30T03:30:12.829099958Z" level=info msg="StopPodSandbox for \"5879147ccfe811570ed1341506e5018fa7535398b1869bcc735322d50da9253a\"" Apr 30 03:30:12.829196 containerd[1987]: time="2025-04-30T03:30:12.829178575Z" level=info msg="TearDown network for sandbox \"5879147ccfe811570ed1341506e5018fa7535398b1869bcc735322d50da9253a\" successfully" Apr 30 03:30:12.829196 containerd[1987]: time="2025-04-30T03:30:12.829192246Z" level=info msg="StopPodSandbox for \"5879147ccfe811570ed1341506e5018fa7535398b1869bcc735322d50da9253a\" returns successfully" Apr 30 03:30:12.829622 containerd[1987]: time="2025-04-30T03:30:12.829566379Z" level=info msg="RemovePodSandbox for \"5879147ccfe811570ed1341506e5018fa7535398b1869bcc735322d50da9253a\"" Apr 30 03:30:12.829622 containerd[1987]: time="2025-04-30T03:30:12.829591298Z" level=info msg="Forcibly stopping sandbox \"5879147ccfe811570ed1341506e5018fa7535398b1869bcc735322d50da9253a\"" Apr 30 03:30:12.829738 containerd[1987]: time="2025-04-30T03:30:12.829639994Z" level=info msg="TearDown network for sandbox \"5879147ccfe811570ed1341506e5018fa7535398b1869bcc735322d50da9253a\" successfully" Apr 30 03:30:12.834221 containerd[1987]: time="2025-04-30T03:30:12.833957356Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5879147ccfe811570ed1341506e5018fa7535398b1869bcc735322d50da9253a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 03:30:12.834221 containerd[1987]: time="2025-04-30T03:30:12.834046795Z" level=info msg="RemovePodSandbox \"5879147ccfe811570ed1341506e5018fa7535398b1869bcc735322d50da9253a\" returns successfully"
Apr 30 03:30:12.835007 containerd[1987]: time="2025-04-30T03:30:12.834709174Z" level=info msg="StopPodSandbox for \"263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20\""
Apr 30 03:30:12.995806 containerd[1987]: 2025-04-30 03:30:12.894 [WARNING][7056] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--5-k8s-coredns--7db6d8ff4d--t7qqn-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"aa805bd1-beeb-4584-9d9d-3007469a5975", ResourceVersion:"1147", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 28, 25, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-5", ContainerID:"6dc06a663b87e2222ceba8edf67a3fd6e037f50a396317ccd4ed05edfa5f88b6", Pod:"coredns-7db6d8ff4d-t7qqn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7b07a194832", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Apr 30 03:30:12.995806 containerd[1987]: 2025-04-30 03:30:12.894 [INFO][7056] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20"
Apr 30 03:30:12.995806 containerd[1987]: 2025-04-30 03:30:12.895 [INFO][7056] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20" iface="eth0" netns=""
Apr 30 03:30:12.995806 containerd[1987]: 2025-04-30 03:30:12.895 [INFO][7056] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20"
Apr 30 03:30:12.995806 containerd[1987]: 2025-04-30 03:30:12.895 [INFO][7056] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20"
Apr 30 03:30:12.995806 containerd[1987]: 2025-04-30 03:30:12.965 [INFO][7064] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20" HandleID="k8s-pod-network.263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20" Workload="ip--172--31--16--5-k8s-coredns--7db6d8ff4d--t7qqn-eth0"
Apr 30 03:30:12.995806 containerd[1987]: 2025-04-30 03:30:12.967 [INFO][7064] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Apr 30 03:30:12.995806 containerd[1987]: 2025-04-30 03:30:12.967 [INFO][7064] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Apr 30 03:30:12.995806 containerd[1987]: 2025-04-30 03:30:12.979 [WARNING][7064] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20" HandleID="k8s-pod-network.263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20" Workload="ip--172--31--16--5-k8s-coredns--7db6d8ff4d--t7qqn-eth0"
Apr 30 03:30:12.995806 containerd[1987]: 2025-04-30 03:30:12.979 [INFO][7064] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20" HandleID="k8s-pod-network.263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20" Workload="ip--172--31--16--5-k8s-coredns--7db6d8ff4d--t7qqn-eth0"
Apr 30 03:30:12.995806 containerd[1987]: 2025-04-30 03:30:12.985 [INFO][7064] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Apr 30 03:30:12.995806 containerd[1987]: 2025-04-30 03:30:12.991 [INFO][7056] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20"
Apr 30 03:30:12.996611 containerd[1987]: time="2025-04-30T03:30:12.995869200Z" level=info msg="TearDown network for sandbox \"263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20\" successfully"
Apr 30 03:30:12.996611 containerd[1987]: time="2025-04-30T03:30:12.995902936Z" level=info msg="StopPodSandbox for \"263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20\" returns successfully"
Apr 30 03:30:12.997905 containerd[1987]: time="2025-04-30T03:30:12.997662067Z" level=info msg="RemovePodSandbox for \"263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20\""
Apr 30 03:30:12.997905 containerd[1987]: time="2025-04-30T03:30:12.997715615Z" level=info msg="Forcibly stopping sandbox \"263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20\""
Apr 30 03:30:13.143723 containerd[1987]: 2025-04-30 03:30:13.068 [WARNING][7083] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--5-k8s-coredns--7db6d8ff4d--t7qqn-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"aa805bd1-beeb-4584-9d9d-3007469a5975", ResourceVersion:"1147", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 28, 25, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-5", ContainerID:"6dc06a663b87e2222ceba8edf67a3fd6e037f50a396317ccd4ed05edfa5f88b6", Pod:"coredns-7db6d8ff4d-t7qqn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7b07a194832", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Apr 30 03:30:13.143723 containerd[1987]: 2025-04-30 03:30:13.068 [INFO][7083] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20"
Apr 30 03:30:13.143723 containerd[1987]: 2025-04-30 03:30:13.068 [INFO][7083] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20" iface="eth0" netns=""
Apr 30 03:30:13.143723 containerd[1987]: 2025-04-30 03:30:13.069 [INFO][7083] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20"
Apr 30 03:30:13.143723 containerd[1987]: 2025-04-30 03:30:13.069 [INFO][7083] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20"
Apr 30 03:30:13.143723 containerd[1987]: 2025-04-30 03:30:13.118 [INFO][7090] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20" HandleID="k8s-pod-network.263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20" Workload="ip--172--31--16--5-k8s-coredns--7db6d8ff4d--t7qqn-eth0"
Apr 30 03:30:13.143723 containerd[1987]: 2025-04-30 03:30:13.118 [INFO][7090] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Apr 30 03:30:13.143723 containerd[1987]: 2025-04-30 03:30:13.119 [INFO][7090] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Apr 30 03:30:13.143723 containerd[1987]: 2025-04-30 03:30:13.132 [WARNING][7090] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20" HandleID="k8s-pod-network.263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20" Workload="ip--172--31--16--5-k8s-coredns--7db6d8ff4d--t7qqn-eth0"
Apr 30 03:30:13.143723 containerd[1987]: 2025-04-30 03:30:13.132 [INFO][7090] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20" HandleID="k8s-pod-network.263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20" Workload="ip--172--31--16--5-k8s-coredns--7db6d8ff4d--t7qqn-eth0"
Apr 30 03:30:13.143723 containerd[1987]: 2025-04-30 03:30:13.135 [INFO][7090] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Apr 30 03:30:13.143723 containerd[1987]: 2025-04-30 03:30:13.139 [INFO][7083] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20"
Apr 30 03:30:13.145919 containerd[1987]: time="2025-04-30T03:30:13.143767884Z" level=info msg="TearDown network for sandbox \"263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20\" successfully"
Apr 30 03:30:13.149474 containerd[1987]: time="2025-04-30T03:30:13.149371786Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 30 03:30:13.149615 containerd[1987]: time="2025-04-30T03:30:13.149533143Z" level=info msg="RemovePodSandbox \"263eab5ed704279c5a4d18bde0d3924ff75686d4f8611d41b7f7db0d944bba20\" returns successfully"
Apr 30 03:30:13.150882 containerd[1987]: time="2025-04-30T03:30:13.150531908Z" level=info msg="StopPodSandbox for \"4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e\""
Apr 30 03:30:13.341580 containerd[1987]: 2025-04-30 03:30:13.222 [WARNING][7107] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--5-k8s-calico--apiserver--6c489dd647--9ccl9-eth0", GenerateName:"calico-apiserver-6c489dd647-", Namespace:"calico-apiserver", SelfLink:"", UID:"09ed9b89-1a51-4296-b830-803e57059495", ResourceVersion:"1164", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 28, 33, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c489dd647", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-5", ContainerID:"00eb9537b98d1ee011a7789b62ecae11f57793b737d436db758b88d79e6e396c", Pod:"calico-apiserver-6c489dd647-9ccl9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.72.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid8bd63426f5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Apr 30 03:30:13.341580 containerd[1987]: 2025-04-30 03:30:13.222 [INFO][7107] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e"
Apr 30 03:30:13.341580 containerd[1987]: 2025-04-30 03:30:13.222 [INFO][7107] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e" iface="eth0" netns=""
Apr 30 03:30:13.341580 containerd[1987]: 2025-04-30 03:30:13.222 [INFO][7107] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e"
Apr 30 03:30:13.341580 containerd[1987]: 2025-04-30 03:30:13.222 [INFO][7107] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e"
Apr 30 03:30:13.341580 containerd[1987]: 2025-04-30 03:30:13.298 [INFO][7114] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e" HandleID="k8s-pod-network.4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e" Workload="ip--172--31--16--5-k8s-calico--apiserver--6c489dd647--9ccl9-eth0"
Apr 30 03:30:13.341580 containerd[1987]: 2025-04-30 03:30:13.301 [INFO][7114] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Apr 30 03:30:13.341580 containerd[1987]: 2025-04-30 03:30:13.301 [INFO][7114] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Apr 30 03:30:13.341580 containerd[1987]: 2025-04-30 03:30:13.322 [WARNING][7114] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e" HandleID="k8s-pod-network.4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e" Workload="ip--172--31--16--5-k8s-calico--apiserver--6c489dd647--9ccl9-eth0"
Apr 30 03:30:13.341580 containerd[1987]: 2025-04-30 03:30:13.322 [INFO][7114] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e" HandleID="k8s-pod-network.4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e" Workload="ip--172--31--16--5-k8s-calico--apiserver--6c489dd647--9ccl9-eth0"
Apr 30 03:30:13.341580 containerd[1987]: 2025-04-30 03:30:13.326 [INFO][7114] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Apr 30 03:30:13.341580 containerd[1987]: 2025-04-30 03:30:13.334 [INFO][7107] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e"
Apr 30 03:30:13.341580 containerd[1987]: time="2025-04-30T03:30:13.340711560Z" level=info msg="TearDown network for sandbox \"4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e\" successfully"
Apr 30 03:30:13.341580 containerd[1987]: time="2025-04-30T03:30:13.340758492Z" level=info msg="StopPodSandbox for \"4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e\" returns successfully"
Apr 30 03:30:13.343766 containerd[1987]: time="2025-04-30T03:30:13.342357194Z" level=info msg="RemovePodSandbox for \"4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e\""
Apr 30 03:30:13.343766 containerd[1987]: time="2025-04-30T03:30:13.342408148Z" level=info msg="Forcibly stopping sandbox \"4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e\""
Apr 30 03:30:13.589980 containerd[1987]: 2025-04-30 03:30:13.447 [WARNING][7132] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--5-k8s-calico--apiserver--6c489dd647--9ccl9-eth0", GenerateName:"calico-apiserver-6c489dd647-", Namespace:"calico-apiserver", SelfLink:"", UID:"09ed9b89-1a51-4296-b830-803e57059495", ResourceVersion:"1164", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 28, 33, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c489dd647", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-5", ContainerID:"00eb9537b98d1ee011a7789b62ecae11f57793b737d436db758b88d79e6e396c", Pod:"calico-apiserver-6c489dd647-9ccl9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.72.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid8bd63426f5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Apr 30 03:30:13.589980 containerd[1987]: 2025-04-30 03:30:13.448 [INFO][7132] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e"
Apr 30 03:30:13.589980 containerd[1987]: 2025-04-30 03:30:13.448 [INFO][7132] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e" iface="eth0" netns=""
Apr 30 03:30:13.589980 containerd[1987]: 2025-04-30 03:30:13.448 [INFO][7132] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e"
Apr 30 03:30:13.589980 containerd[1987]: 2025-04-30 03:30:13.448 [INFO][7132] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e"
Apr 30 03:30:13.589980 containerd[1987]: 2025-04-30 03:30:13.563 [INFO][7139] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e" HandleID="k8s-pod-network.4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e" Workload="ip--172--31--16--5-k8s-calico--apiserver--6c489dd647--9ccl9-eth0"
Apr 30 03:30:13.589980 containerd[1987]: 2025-04-30 03:30:13.564 [INFO][7139] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Apr 30 03:30:13.589980 containerd[1987]: 2025-04-30 03:30:13.564 [INFO][7139] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Apr 30 03:30:13.589980 containerd[1987]: 2025-04-30 03:30:13.577 [WARNING][7139] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e" HandleID="k8s-pod-network.4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e" Workload="ip--172--31--16--5-k8s-calico--apiserver--6c489dd647--9ccl9-eth0"
Apr 30 03:30:13.589980 containerd[1987]: 2025-04-30 03:30:13.577 [INFO][7139] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e" HandleID="k8s-pod-network.4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e" Workload="ip--172--31--16--5-k8s-calico--apiserver--6c489dd647--9ccl9-eth0"
Apr 30 03:30:13.589980 containerd[1987]: 2025-04-30 03:30:13.579 [INFO][7139] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Apr 30 03:30:13.589980 containerd[1987]: 2025-04-30 03:30:13.583 [INFO][7132] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e"
Apr 30 03:30:13.594244 containerd[1987]: time="2025-04-30T03:30:13.591914373Z" level=info msg="TearDown network for sandbox \"4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e\" successfully"
Apr 30 03:30:13.603947 containerd[1987]: time="2025-04-30T03:30:13.603569379Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 30 03:30:13.603947 containerd[1987]: time="2025-04-30T03:30:13.603662092Z" level=info msg="RemovePodSandbox \"4e6be36de49c9035d798ca5eee3f703add1477775b17bc2b2f71ab687b4b132e\" returns successfully"
Apr 30 03:30:13.923666 sshd[6923]: pam_unix(sshd:session): session closed for user core
Apr 30 03:30:13.945233 systemd[1]: sshd@26-172.31.16.5:22-147.75.109.163:52816.service: Deactivated successfully.
Apr 30 03:30:13.947140 systemd-logind[1956]: Session 27 logged out. Waiting for processes to exit.
Apr 30 03:30:13.949604 systemd[1]: session-27.scope: Deactivated successfully.
Apr 30 03:30:13.951780 systemd-logind[1956]: Removed session 27.
Apr 30 03:30:18.973828 systemd[1]: Started sshd@27-172.31.16.5:22-147.75.109.163:60814.service - OpenSSH per-connection server daemon (147.75.109.163:60814).
Apr 30 03:30:19.227091 sshd[7169]: Accepted publickey for core from 147.75.109.163 port 60814 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g
Apr 30 03:30:19.227484 sshd[7169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:30:19.234248 systemd-logind[1956]: New session 28 of user core.
Apr 30 03:30:19.239258 systemd[1]: Started session-28.scope - Session 28 of User core.
Apr 30 03:30:19.504810 sshd[7169]: pam_unix(sshd:session): session closed for user core
Apr 30 03:30:19.507803 systemd-logind[1956]: Session 28 logged out. Waiting for processes to exit.
Apr 30 03:30:19.508139 systemd[1]: sshd@27-172.31.16.5:22-147.75.109.163:60814.service: Deactivated successfully.
Apr 30 03:30:19.518925 systemd[1]: session-28.scope: Deactivated successfully.
Apr 30 03:30:19.521941 systemd-logind[1956]: Removed session 28.
Apr 30 03:30:24.558308 systemd[1]: Started sshd@28-172.31.16.5:22-147.75.109.163:60826.service - OpenSSH per-connection server daemon (147.75.109.163:60826).
Apr 30 03:30:24.802179 sshd[7188]: Accepted publickey for core from 147.75.109.163 port 60826 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g
Apr 30 03:30:24.802790 sshd[7188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:30:24.807584 systemd-logind[1956]: New session 29 of user core.
Apr 30 03:30:24.810203 systemd[1]: Started session-29.scope - Session 29 of User core.
Apr 30 03:30:25.073654 sshd[7188]: pam_unix(sshd:session): session closed for user core
Apr 30 03:30:25.078417 systemd[1]: sshd@28-172.31.16.5:22-147.75.109.163:60826.service: Deactivated successfully.
Apr 30 03:30:25.080949 systemd[1]: session-29.scope: Deactivated successfully.
Apr 30 03:30:25.082379 systemd-logind[1956]: Session 29 logged out. Waiting for processes to exit.
Apr 30 03:30:25.084180 systemd-logind[1956]: Removed session 29.
Apr 30 03:30:30.123220 systemd[1]: Started sshd@29-172.31.16.5:22-147.75.109.163:36716.service - OpenSSH per-connection server daemon (147.75.109.163:36716).
Apr 30 03:30:30.374115 sshd[7205]: Accepted publickey for core from 147.75.109.163 port 36716 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g
Apr 30 03:30:30.375601 sshd[7205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:30:30.380365 systemd-logind[1956]: New session 30 of user core.
Apr 30 03:30:30.385209 systemd[1]: Started session-30.scope - Session 30 of User core.
Apr 30 03:30:30.697196 sshd[7205]: pam_unix(sshd:session): session closed for user core
Apr 30 03:30:30.700595 systemd[1]: sshd@29-172.31.16.5:22-147.75.109.163:36716.service: Deactivated successfully.
Apr 30 03:30:30.702590 systemd[1]: session-30.scope: Deactivated successfully.
Apr 30 03:30:30.704995 systemd-logind[1956]: Session 30 logged out. Waiting for processes to exit.
Apr 30 03:30:30.706520 systemd-logind[1956]: Removed session 30.
Apr 30 03:30:35.750465 systemd[1]: Started sshd@30-172.31.16.5:22-147.75.109.163:36726.service - OpenSSH per-connection server daemon (147.75.109.163:36726).
Apr 30 03:30:36.038593 sshd[7240]: Accepted publickey for core from 147.75.109.163 port 36726 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g
Apr 30 03:30:36.039010 sshd[7240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:30:36.046665 systemd-logind[1956]: New session 31 of user core.
Apr 30 03:30:36.051275 systemd[1]: Started session-31.scope - Session 31 of User core.
Apr 30 03:30:36.484813 sshd[7240]: pam_unix(sshd:session): session closed for user core
Apr 30 03:30:36.487799 systemd-logind[1956]: Session 31 logged out. Waiting for processes to exit.
Apr 30 03:30:36.489427 systemd[1]: sshd@30-172.31.16.5:22-147.75.109.163:36726.service: Deactivated successfully.
Apr 30 03:30:36.491444 systemd[1]: session-31.scope: Deactivated successfully.
Apr 30 03:30:36.492802 systemd-logind[1956]: Removed session 31.
Apr 30 03:30:41.540528 systemd[1]: Started sshd@31-172.31.16.5:22-147.75.109.163:41534.service - OpenSSH per-connection server daemon (147.75.109.163:41534).
Apr 30 03:30:41.838098 sshd[7254]: Accepted publickey for core from 147.75.109.163 port 41534 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g
Apr 30 03:30:41.838901 sshd[7254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:30:41.850101 systemd-logind[1956]: New session 32 of user core.
Apr 30 03:30:41.855650 systemd[1]: Started session-32.scope - Session 32 of User core.
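The sshd/systemd-logind records above are the same short lifecycle repeated for sessions 27 through 32: Accepted publickey, pam_unix session opened, New session N, session closed, Removed session N. When auditing a longer journal, a small filter can pair the opens and closes; the sketch below assumes exactly this journal text format and reads it from stdin (file and program names are placeholders).

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        opened := regexp.MustCompile(`New session (\d+) of user (\S+)\.`)
        closed := regexp.MustCompile(`Removed session (\d+)\.`)
        open := map[string]string{} // session number -> user
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
        for sc.Scan() {
            line := sc.Text()
            if m := opened.FindStringSubmatch(line); m != nil {
                open[m[1]] = m[2]
            } else if m := closed.FindStringSubmatch(line); m != nil {
                fmt.Printf("session %s (user %s) opened and closed\n", m[1], open[m[1]])
                delete(open, m[1])
            }
        }
        for n, u := range open {
            fmt.Printf("session %s (user %s) still open\n", n, u)
        }
    }

Run as something like "go run sshsessions.go < node.journal"; against the block above it would report sessions 29-31 as paired and session 32 as still open at this point in the log.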
Apr 30 03:30:42.433378 sshd[7254]: pam_unix(sshd:session): session closed for user core
Apr 30 03:30:42.443152 systemd[1]: sshd@31-172.31.16.5:22-147.75.109.163:41534.service: Deactivated successfully.
Apr 30 03:30:42.446137 systemd[1]: session-32.scope: Deactivated successfully.
Apr 30 03:30:42.447186 systemd-logind[1956]: Session 32 logged out. Waiting for processes to exit.
Apr 30 03:30:42.448962 systemd-logind[1956]: Removed session 32.
Apr 30 03:30:56.045847 systemd[1]: cri-containerd-c25f7009aa134439961e8fd026d40bf9f586263592823407b8bbe05643c2d59c.scope: Deactivated successfully.
Apr 30 03:30:56.047123 systemd[1]: cri-containerd-c25f7009aa134439961e8fd026d40bf9f586263592823407b8bbe05643c2d59c.scope: Consumed 3.761s CPU time, 24.6M memory peak, 0B memory swap peak.
Apr 30 03:30:56.117416 systemd[1]: cri-containerd-b45513be812a5548f6cd480f04fd71be3312524aa313563d7730654cf8a0d31e.scope: Deactivated successfully.
Apr 30 03:30:56.117653 systemd[1]: cri-containerd-b45513be812a5548f6cd480f04fd71be3312524aa313563d7730654cf8a0d31e.scope: Consumed 4.185s CPU time.
Apr 30 03:30:56.197654 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c25f7009aa134439961e8fd026d40bf9f586263592823407b8bbe05643c2d59c-rootfs.mount: Deactivated successfully.
Apr 30 03:30:56.207277 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b45513be812a5548f6cd480f04fd71be3312524aa313563d7730654cf8a0d31e-rootfs.mount: Deactivated successfully.
Apr 30 03:30:56.250137 containerd[1987]: time="2025-04-30T03:30:56.229535938Z" level=info msg="shim disconnected" id=b45513be812a5548f6cd480f04fd71be3312524aa313563d7730654cf8a0d31e namespace=k8s.io
Apr 30 03:30:56.251345 containerd[1987]: time="2025-04-30T03:30:56.210295305Z" level=info msg="shim disconnected" id=c25f7009aa134439961e8fd026d40bf9f586263592823407b8bbe05643c2d59c namespace=k8s.io
Apr 30 03:30:56.261802 containerd[1987]: time="2025-04-30T03:30:56.261567210Z" level=warning msg="cleaning up after shim disconnected" id=c25f7009aa134439961e8fd026d40bf9f586263592823407b8bbe05643c2d59c namespace=k8s.io
Apr 30 03:30:56.261802 containerd[1987]: time="2025-04-30T03:30:56.261615165Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:30:56.266838 containerd[1987]: time="2025-04-30T03:30:56.266784225Z" level=warning msg="cleaning up after shim disconnected" id=b45513be812a5548f6cd480f04fd71be3312524aa313563d7730654cf8a0d31e namespace=k8s.io
Apr 30 03:30:56.266838 containerd[1987]: time="2025-04-30T03:30:56.266826849Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:30:56.526393 kubelet[3229]: I0430 03:30:56.526354 3229 scope.go:117] "RemoveContainer" containerID="c25f7009aa134439961e8fd026d40bf9f586263592823407b8bbe05643c2d59c"
Apr 30 03:30:56.526885 kubelet[3229]: I0430 03:30:56.526815 3229 scope.go:117] "RemoveContainer" containerID="b45513be812a5548f6cd480f04fd71be3312524aa313563d7730654cf8a0d31e"
Apr 30 03:30:56.561179 containerd[1987]: time="2025-04-30T03:30:56.560951832Z" level=info msg="CreateContainer within sandbox \"b919c4b8238dc7f2d66f8d9b5e2ddf05c2c7d95e3aceb764a683c0db21d11a73\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Apr 30 03:30:56.562007 containerd[1987]: time="2025-04-30T03:30:56.561959536Z" level=info msg="CreateContainer within sandbox \"51eaa2f39c19fdf004fd7df112b4b6d61ea03978b06e8569112a47d98f5f9606\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 30 03:30:56.664914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount890172636.mount: Deactivated successfully.
Apr 30 03:30:56.676871 containerd[1987]: time="2025-04-30T03:30:56.676803550Z" level=info msg="CreateContainer within sandbox \"b919c4b8238dc7f2d66f8d9b5e2ddf05c2c7d95e3aceb764a683c0db21d11a73\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"aec5c7e7796291b8be9727850ce446ba3b6d952b60d89fa2b177fa3468e9d0cc\""
Apr 30 03:30:56.678524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1691788637.mount: Deactivated successfully.
Apr 30 03:30:56.685624 containerd[1987]: time="2025-04-30T03:30:56.685434274Z" level=info msg="StartContainer for \"aec5c7e7796291b8be9727850ce446ba3b6d952b60d89fa2b177fa3468e9d0cc\""
Apr 30 03:30:56.686893 containerd[1987]: time="2025-04-30T03:30:56.686859939Z" level=info msg="CreateContainer within sandbox \"51eaa2f39c19fdf004fd7df112b4b6d61ea03978b06e8569112a47d98f5f9606\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"2fb65a8273de41654bf9caa9f53dfb6b1cac716af687fe3eb7f5c74fb99560fa\""
Apr 30 03:30:56.688196 containerd[1987]: time="2025-04-30T03:30:56.687384150Z" level=info msg="StartContainer for \"2fb65a8273de41654bf9caa9f53dfb6b1cac716af687fe3eb7f5c74fb99560fa\""
Apr 30 03:30:56.754263 systemd[1]: Started cri-containerd-2fb65a8273de41654bf9caa9f53dfb6b1cac716af687fe3eb7f5c74fb99560fa.scope - libcontainer container 2fb65a8273de41654bf9caa9f53dfb6b1cac716af687fe3eb7f5c74fb99560fa.
Apr 30 03:30:56.760242 systemd[1]: Started cri-containerd-aec5c7e7796291b8be9727850ce446ba3b6d952b60d89fa2b177fa3468e9d0cc.scope - libcontainer container aec5c7e7796291b8be9727850ce446ba3b6d952b60d89fa2b177fa3468e9d0cc.
Apr 30 03:30:56.856967 containerd[1987]: time="2025-04-30T03:30:56.856765679Z" level=info msg="StartContainer for \"aec5c7e7796291b8be9727850ce446ba3b6d952b60d89fa2b177fa3468e9d0cc\" returns successfully"
Apr 30 03:30:56.865767 containerd[1987]: time="2025-04-30T03:30:56.865734592Z" level=info msg="StartContainer for \"2fb65a8273de41654bf9caa9f53dfb6b1cac716af687fe3eb7f5c74fb99560fa\" returns successfully"
Apr 30 03:31:01.012382 systemd[1]: cri-containerd-671482f102454f582483da29a6e10d7965a0ace21469d1f526523ec5ed729da9.scope: Deactivated successfully.
Apr 30 03:31:01.013170 systemd[1]: cri-containerd-671482f102454f582483da29a6e10d7965a0ace21469d1f526523ec5ed729da9.scope: Consumed 1.781s CPU time, 18.9M memory peak, 0B memory swap peak.
Apr 30 03:31:01.040848 containerd[1987]: time="2025-04-30T03:31:01.040777669Z" level=info msg="shim disconnected" id=671482f102454f582483da29a6e10d7965a0ace21469d1f526523ec5ed729da9 namespace=k8s.io
Apr 30 03:31:01.040848 containerd[1987]: time="2025-04-30T03:31:01.040843182Z" level=warning msg="cleaning up after shim disconnected" id=671482f102454f582483da29a6e10d7965a0ace21469d1f526523ec5ed729da9 namespace=k8s.io
Apr 30 03:31:01.041355 containerd[1987]: time="2025-04-30T03:31:01.040858937Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:31:01.043909 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-671482f102454f582483da29a6e10d7965a0ace21469d1f526523ec5ed729da9-rootfs.mount: Deactivated successfully.
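The sequence above is containerd reporting dead tasks ("shim disconnected", "cleaning up dead shim"), systemd collecting the cri-containerd-*.scope units, and the kubelet reacting with RemoveContainer followed by CreateContainer in the still-running pod sandbox with Attempt incremented to 1. Anything that needs to react to such exits can watch containerd's event stream; a minimal sketch follows, with the socket path and the /tasks/exit filter string as assumptions.

    package main

    import (
        "context"
        "log"

        containerd "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock") // assumed socket path
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        // Subscribe to task-exit events; each "shim disconnected" above has a
        // corresponding /tasks/exit event a supervisor can react to.
        ch, errs := client.Subscribe(ctx, `topic=="/tasks/exit"`)
        for {
            select {
            case e := <-ch:
                // A kubelet-like supervisor would now remove the dead container
                // record and create a replacement in the same sandbox (Attempt+1).
                log.Printf("task exit event: topic=%s namespace=%s", e.Topic, e.Namespace)
            case err := <-errs:
                log.Fatal(err)
            }
        }
    }

Note the division of labor visible in the log: the sandboxes (51eaa2..., b919c4..., 4b333d...) survive the crash, so only the failed containers are recreated, not the pods' network namespaces.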
Apr 30 03:31:01.543875 kubelet[3229]: I0430 03:31:01.543838 3229 scope.go:117] "RemoveContainer" containerID="671482f102454f582483da29a6e10d7965a0ace21469d1f526523ec5ed729da9"
Apr 30 03:31:01.578770 containerd[1987]: time="2025-04-30T03:31:01.578718821Z" level=info msg="CreateContainer within sandbox \"4b333d445da86d0298823b637ed97402c27fcb819c2ade3522351354097317f6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Apr 30 03:31:01.636107 containerd[1987]: time="2025-04-30T03:31:01.635669978Z" level=info msg="CreateContainer within sandbox \"4b333d445da86d0298823b637ed97402c27fcb819c2ade3522351354097317f6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"ca99edf1222a2df4f5b3b1897e28d47a15fe39956c5338dd6b331b8fc3345b40\""
Apr 30 03:31:01.637920 containerd[1987]: time="2025-04-30T03:31:01.637182084Z" level=info msg="StartContainer for \"ca99edf1222a2df4f5b3b1897e28d47a15fe39956c5338dd6b331b8fc3345b40\""
Apr 30 03:31:01.702386 systemd[1]: Started cri-containerd-ca99edf1222a2df4f5b3b1897e28d47a15fe39956c5338dd6b331b8fc3345b40.scope - libcontainer container ca99edf1222a2df4f5b3b1897e28d47a15fe39956c5338dd6b331b8fc3345b40.
Apr 30 03:31:01.869503 containerd[1987]: time="2025-04-30T03:31:01.869133878Z" level=info msg="StartContainer for \"ca99edf1222a2df4f5b3b1897e28d47a15fe39956c5338dd6b331b8fc3345b40\" returns successfully"
Apr 30 03:31:03.198414 kubelet[3229]: E0430 03:31:03.195914 3229 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ip-172-31-16-5)"
Apr 30 03:31:13.207030 kubelet[3229]: E0430 03:31:13.206872 3229 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-5?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 30 03:31:16.126350 systemd[1]: run-containerd-runc-k8s.io-1bd34870056618c4572d1c7c14cfc1bb42c8dc380c1220f3777a48cd6c5bb11f-runc.z6IgKO.mount: Deactivated successfully.
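The two "Failed to update lease" errors are the kubelet's node-lease heartbeat (controller.go) timing out against the API server on this same node, first server-side ("unable to return a response in the time allotted") and then client-side (the ?timeout=10s Put is canceled), consistent with the control-plane containers having just been restarted. A hedged sketch of what one such renewal amounts to in client-go terms is below; the node name and kubeconfig path are assumptions for illustration, and the real kubelet retries on a fixed interval rather than running once.

    package main

    import (
        "context"
        "log"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // renewLease updates the node's Lease in kube-node-lease under the same
    // 10-second request timeout that appears in the failing Put above.
    func renewLease(ctx context.Context, cs *kubernetes.Clientset, node string) error {
        ctx, cancel := context.WithTimeout(ctx, 10*time.Second)
        defer cancel()
        leases := cs.CoordinationV1().Leases("kube-node-lease")
        lease, err := leases.Get(ctx, node, metav1.GetOptions{})
        if err != nil {
            return err
        }
        now := metav1.NewMicroTime(time.Now())
        lease.Spec.RenewTime = &now
        _, err = leases.Update(ctx, lease, metav1.UpdateOptions{})
        return err // a timeout here is what the kubelet logs as "Failed to update lease"
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf") // assumed path
        if err != nil {
            log.Fatal(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        if err := renewLease(context.Background(), cs, "ip-172-31-16-5"); err != nil {
            log.Printf("Failed to update lease: %v", err)
        }
    }

Missed renewals only become serious if they persist past the node-monitor grace period, at which point the node is marked NotReady; here the log shows only two failures while the API server recovers.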