Aug 13 00:17:56.259945 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Aug 13 00:17:56.259990 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Aug 12 22:21:53 -00 2025
Aug 13 00:17:56.260014 kernel: KASLR disabled due to lack of seed
Aug 13 00:17:56.260031 kernel: efi: EFI v2.7 by EDK II
Aug 13 00:17:56.260047 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7affea98 MEMRESERVE=0x7852ee18
Aug 13 00:17:56.260063 kernel: ACPI: Early table checksum verification disabled
Aug 13 00:17:56.260080 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Aug 13 00:17:56.260096 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Aug 13 00:17:56.260112 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Aug 13 00:17:56.260128 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Aug 13 00:17:56.260148 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Aug 13 00:17:56.260164 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Aug 13 00:17:56.260180 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Aug 13 00:17:56.260196 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Aug 13 00:17:56.260215 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Aug 13 00:17:56.260236 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Aug 13 00:17:56.260253 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Aug 13 00:17:56.260270 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Aug 13 00:17:56.260287 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Aug 13 00:17:56.260303 kernel: printk: bootconsole [uart0] enabled
Aug 13 00:17:56.260320 kernel: NUMA: Failed to initialise from firmware
Aug 13 00:17:56.260336 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Aug 13 00:17:56.260353 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Aug 13 00:17:56.260369 kernel: Zone ranges:
Aug 13 00:17:56.260386 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Aug 13 00:17:56.260402 kernel: DMA32 empty
Aug 13 00:17:56.260423 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Aug 13 00:17:56.260440 kernel: Movable zone start for each node
Aug 13 00:17:56.260477 kernel: Early memory node ranges
Aug 13 00:17:56.260499 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Aug 13 00:17:56.260517 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Aug 13 00:17:56.260534 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Aug 13 00:17:56.260551 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Aug 13 00:17:56.260568 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Aug 13 00:17:56.260584 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Aug 13 00:17:56.260600 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Aug 13 00:17:56.260617 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Aug 13 00:17:56.260633 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Aug 13 00:17:56.260655 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Aug 13 00:17:56.260673 kernel: psci: probing for conduit method from ACPI.
Aug 13 00:17:56.260697 kernel: psci: PSCIv1.0 detected in firmware.
Aug 13 00:17:56.260715 kernel: psci: Using standard PSCI v0.2 function IDs
Aug 13 00:17:56.260733 kernel: psci: Trusted OS migration not required
Aug 13 00:17:56.260754 kernel: psci: SMC Calling Convention v1.1
Aug 13 00:17:56.260772 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Aug 13 00:17:56.260790 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Aug 13 00:17:56.260808 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Aug 13 00:17:56.260826 kernel: pcpu-alloc: [0] 0 [0] 1
Aug 13 00:17:56.260843 kernel: Detected PIPT I-cache on CPU0
Aug 13 00:17:56.260861 kernel: CPU features: detected: GIC system register CPU interface
Aug 13 00:17:56.260878 kernel: CPU features: detected: Spectre-v2
Aug 13 00:17:56.260896 kernel: CPU features: detected: Spectre-v3a
Aug 13 00:17:56.260913 kernel: CPU features: detected: Spectre-BHB
Aug 13 00:17:56.260931 kernel: CPU features: detected: ARM erratum 1742098
Aug 13 00:17:56.260953 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Aug 13 00:17:56.260971 kernel: alternatives: applying boot alternatives
Aug 13 00:17:56.260991 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2f9df6e9e6c671c457040a64675390bbff42294b08c628cd2dc472ed8120146a
Aug 13 00:17:56.261009 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 00:17:56.261027 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 00:17:56.261045 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 00:17:56.261062 kernel: Fallback order for Node 0: 0
Aug 13 00:17:56.261080 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Aug 13 00:17:56.261097 kernel: Policy zone: Normal
Aug 13 00:17:56.261115 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 00:17:56.261133 kernel: software IO TLB: area num 2.
Aug 13 00:17:56.261154 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Aug 13 00:17:56.261173 kernel: Memory: 3820088K/4030464K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 210376K reserved, 0K cma-reserved)
Aug 13 00:17:56.261191 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 13 00:17:56.261208 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 00:17:56.261226 kernel: rcu: RCU event tracing is enabled.
Aug 13 00:17:56.261245 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 13 00:17:56.261263 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 00:17:56.261281 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 00:17:56.261298 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 00:17:56.261316 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 13 00:17:56.261333 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Aug 13 00:17:56.261355 kernel: GICv3: 96 SPIs implemented
Aug 13 00:17:56.261372 kernel: GICv3: 0 Extended SPIs implemented
Aug 13 00:17:56.261390 kernel: Root IRQ handler: gic_handle_irq
Aug 13 00:17:56.261407 kernel: GICv3: GICv3 features: 16 PPIs
Aug 13 00:17:56.261424 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Aug 13 00:17:56.261442 kernel: ITS [mem 0x10080000-0x1009ffff]
Aug 13 00:17:56.261478 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Aug 13 00:17:56.261499 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Aug 13 00:17:56.261517 kernel: GICv3: using LPI property table @0x00000004000d0000
Aug 13 00:17:56.261535 kernel: ITS: Using hypervisor restricted LPI range [128]
Aug 13 00:17:56.261553 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Aug 13 00:17:56.261570 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 00:17:56.261594 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Aug 13 00:17:56.261613 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Aug 13 00:17:56.261632 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Aug 13 00:17:56.261652 kernel: Console: colour dummy device 80x25
Aug 13 00:17:56.261671 kernel: printk: console [tty1] enabled
Aug 13 00:17:56.261691 kernel: ACPI: Core revision 20230628
Aug 13 00:17:56.261710 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Aug 13 00:17:56.261729 kernel: pid_max: default: 32768 minimum: 301
Aug 13 00:17:56.261748 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Aug 13 00:17:56.261771 kernel: landlock: Up and running.
Aug 13 00:17:56.261790 kernel: SELinux: Initializing.
Aug 13 00:17:56.261809 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 00:17:56.261827 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 00:17:56.261846 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 00:17:56.261864 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 00:17:56.261883 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 00:17:56.261903 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 00:17:56.261921 kernel: Platform MSI: ITS@0x10080000 domain created
Aug 13 00:17:56.261943 kernel: PCI/MSI: ITS@0x10080000 domain created
Aug 13 00:17:56.261963 kernel: Remapping and enabling EFI services.
Aug 13 00:17:56.261981 kernel: smp: Bringing up secondary CPUs ...
Aug 13 00:17:56.261999 kernel: Detected PIPT I-cache on CPU1
Aug 13 00:17:56.262018 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Aug 13 00:17:56.262036 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Aug 13 00:17:56.262055 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Aug 13 00:17:56.262074 kernel: smp: Brought up 1 node, 2 CPUs
Aug 13 00:17:56.262092 kernel: SMP: Total of 2 processors activated.
Aug 13 00:17:56.262111 kernel: CPU features: detected: 32-bit EL0 Support
Aug 13 00:17:56.262134 kernel: CPU features: detected: 32-bit EL1 Support
Aug 13 00:17:56.262152 kernel: CPU features: detected: CRC32 instructions
Aug 13 00:17:56.262181 kernel: CPU: All CPU(s) started at EL1
Aug 13 00:17:56.262204 kernel: alternatives: applying system-wide alternatives
Aug 13 00:17:56.262222 kernel: devtmpfs: initialized
Aug 13 00:17:56.262241 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 00:17:56.262260 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 13 00:17:56.262278 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 00:17:56.262297 kernel: SMBIOS 3.0.0 present.
Aug 13 00:17:56.262320 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Aug 13 00:17:56.262339 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 00:17:56.262358 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Aug 13 00:17:56.262376 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Aug 13 00:17:56.262395 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Aug 13 00:17:56.262414 kernel: audit: initializing netlink subsys (disabled)
Aug 13 00:17:56.262433 kernel: audit: type=2000 audit(0.286:1): state=initialized audit_enabled=0 res=1
Aug 13 00:17:56.262982 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 00:17:56.263010 kernel: cpuidle: using governor menu
Aug 13 00:17:56.263030 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Aug 13 00:17:56.263049 kernel: ASID allocator initialised with 65536 entries
Aug 13 00:17:56.263067 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 00:17:56.263086 kernel: Serial: AMBA PL011 UART driver
Aug 13 00:17:56.263104 kernel: Modules: 17488 pages in range for non-PLT usage
Aug 13 00:17:56.263123 kernel: Modules: 509008 pages in range for PLT usage
Aug 13 00:17:56.263141 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 00:17:56.263186 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Aug 13 00:17:56.263207 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Aug 13 00:17:56.263226 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Aug 13 00:17:56.263245 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 00:17:56.263263 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Aug 13 00:17:56.263282 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Aug 13 00:17:56.263301 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Aug 13 00:17:56.263320 kernel: ACPI: Added _OSI(Module Device)
Aug 13 00:17:56.263338 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 00:17:56.263362 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 00:17:56.263381 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 00:17:56.263399 kernel: ACPI: Interpreter enabled
Aug 13 00:17:56.263418 kernel: ACPI: Using GIC for interrupt routing
Aug 13 00:17:56.263436 kernel: ACPI: MCFG table detected, 1 entries
Aug 13 00:17:56.263470 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Aug 13 00:17:56.263804 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 00:17:56.264028 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Aug 13 00:17:56.264382 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Aug 13 00:17:56.267581 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Aug 13 00:17:56.267801 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Aug 13 00:17:56.267828 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Aug 13 00:17:56.267848 kernel: acpiphp: Slot [1] registered
Aug 13 00:17:56.267868 kernel: acpiphp: Slot [2] registered
Aug 13 00:17:56.267887 kernel: acpiphp: Slot [3] registered
Aug 13 00:17:56.267906 kernel: acpiphp: Slot [4] registered
Aug 13 00:17:56.267934 kernel: acpiphp: Slot [5] registered
Aug 13 00:17:56.267954 kernel: acpiphp: Slot [6] registered
Aug 13 00:17:56.267973 kernel: acpiphp: Slot [7] registered
Aug 13 00:17:56.267992 kernel: acpiphp: Slot [8] registered
Aug 13 00:17:56.268011 kernel: acpiphp: Slot [9] registered
Aug 13 00:17:56.268030 kernel: acpiphp: Slot [10] registered
Aug 13 00:17:56.268049 kernel: acpiphp: Slot [11] registered
Aug 13 00:17:56.268068 kernel: acpiphp: Slot [12] registered
Aug 13 00:17:56.268087 kernel: acpiphp: Slot [13] registered
Aug 13 00:17:56.268106 kernel: acpiphp: Slot [14] registered
Aug 13 00:17:56.268130 kernel: acpiphp: Slot [15] registered
Aug 13 00:17:56.268149 kernel: acpiphp: Slot [16] registered
Aug 13 00:17:56.268167 kernel: acpiphp: Slot [17] registered
Aug 13 00:17:56.268186 kernel: acpiphp: Slot [18] registered
Aug 13 00:17:56.268205 kernel: acpiphp: Slot [19] registered
Aug 13 00:17:56.268223 kernel: acpiphp: Slot [20] registered
Aug 13 00:17:56.268242 kernel: acpiphp: Slot [21] registered
Aug 13 00:17:56.268260 kernel: acpiphp: Slot [22] registered
Aug 13 00:17:56.268279 kernel: acpiphp: Slot [23] registered
Aug 13 00:17:56.268302 kernel: acpiphp: Slot [24] registered
Aug 13 00:17:56.268322 kernel: acpiphp: Slot [25] registered
Aug 13 00:17:56.268340 kernel: acpiphp: Slot [26] registered
Aug 13 00:17:56.268359 kernel: acpiphp: Slot [27] registered
Aug 13 00:17:56.268377 kernel: acpiphp: Slot [28] registered
Aug 13 00:17:56.268396 kernel: acpiphp: Slot [29] registered
Aug 13 00:17:56.268415 kernel: acpiphp: Slot [30] registered
Aug 13 00:17:56.268433 kernel: acpiphp: Slot [31] registered
Aug 13 00:17:56.268473 kernel: PCI host bridge to bus 0000:00
Aug 13 00:17:56.268725 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Aug 13 00:17:56.268929 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Aug 13 00:17:56.269123 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Aug 13 00:17:56.269316 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Aug 13 00:17:56.270594 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Aug 13 00:17:56.270848 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Aug 13 00:17:56.271059 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Aug 13 00:17:56.271314 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Aug 13 00:17:56.273875 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Aug 13 00:17:56.274123 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Aug 13 00:17:56.274346 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Aug 13 00:17:56.274618 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Aug 13 00:17:56.274851 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Aug 13 00:17:56.275100 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Aug 13 00:17:56.275359 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Aug 13 00:17:56.280043 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Aug 13 00:17:56.280280 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Aug 13 00:17:56.282124 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Aug 13 00:17:56.282375 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Aug 13 00:17:56.282625 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Aug 13 00:17:56.282827 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Aug 13 00:17:56.283026 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Aug 13 00:17:56.283239 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Aug 13 00:17:56.283266 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Aug 13 00:17:56.283286 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Aug 13 00:17:56.283306 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Aug 13 00:17:56.283325 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Aug 13 00:17:56.283344 kernel: iommu: Default domain type: Translated
Aug 13 00:17:56.283363 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Aug 13 00:17:56.283388 kernel: efivars: Registered efivars operations
Aug 13 00:17:56.283407 kernel: vgaarb: loaded
Aug 13 00:17:56.283425 kernel: clocksource: Switched to clocksource arch_sys_counter
Aug 13 00:17:56.283444 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 00:17:56.291593 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 00:17:56.292514 kernel: pnp: PnP ACPI init
Aug 13 00:17:56.292768 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Aug 13 00:17:56.292797 kernel: pnp: PnP ACPI: found 1 devices
Aug 13 00:17:56.292827 kernel: NET: Registered PF_INET protocol family
Aug 13 00:17:56.292847 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 00:17:56.292867 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 13 00:17:56.292886 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 00:17:56.292905 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 00:17:56.292924 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 13 00:17:56.292943 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 13 00:17:56.292962 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 00:17:56.292980 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 00:17:56.293004 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 00:17:56.293023 kernel: PCI: CLS 0 bytes, default 64
Aug 13 00:17:56.293041 kernel: kvm [1]: HYP mode not available
Aug 13 00:17:56.293060 kernel: Initialise system trusted keyrings
Aug 13 00:17:56.293079 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 13 00:17:56.293097 kernel: Key type asymmetric registered
Aug 13 00:17:56.293116 kernel: Asymmetric key parser 'x509' registered
Aug 13 00:17:56.293134 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Aug 13 00:17:56.293153 kernel: io scheduler mq-deadline registered
Aug 13 00:17:56.293176 kernel: io scheduler kyber registered
Aug 13 00:17:56.293194 kernel: io scheduler bfq registered
Aug 13 00:17:56.293406 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Aug 13 00:17:56.293434 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Aug 13 00:17:56.293471 kernel: ACPI: button: Power Button [PWRB]
Aug 13 00:17:56.293496 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Aug 13 00:17:56.293516 kernel: ACPI: button: Sleep Button [SLPB]
Aug 13 00:17:56.293535 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 00:17:56.293561 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Aug 13 00:17:56.293779 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Aug 13 00:17:56.293806 kernel: printk: console [ttyS0] disabled
Aug 13 00:17:56.293825 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Aug 13 00:17:56.293844 kernel: printk: console [ttyS0] enabled
Aug 13 00:17:56.293863 kernel: printk: bootconsole [uart0] disabled
Aug 13 00:17:56.293882 kernel: thunder_xcv, ver 1.0
Aug 13 00:17:56.293900 kernel: thunder_bgx, ver 1.0
Aug 13 00:17:56.293918 kernel: nicpf, ver 1.0
Aug 13 00:17:56.293943 kernel: nicvf, ver 1.0
Aug 13 00:17:56.294153 kernel: rtc-efi rtc-efi.0: registered as rtc0
Aug 13 00:17:56.294348 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-08-13T00:17:55 UTC (1755044275)
Aug 13 00:17:56.294374 kernel: hid: raw HID events driver (C) Jiri Kosina
Aug 13 00:17:56.294394 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Aug 13 00:17:56.294413 kernel: watchdog: Delayed init of the lockup detector failed: -19
Aug 13 00:17:56.294432 kernel: watchdog: Hard watchdog permanently disabled
Aug 13 00:17:56.294451 kernel: NET: Registered PF_INET6 protocol family
Aug 13 00:17:56.294556 kernel: Segment Routing with IPv6
Aug 13 00:17:56.294577 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 00:17:56.294597 kernel: NET: Registered PF_PACKET protocol family
Aug 13 00:17:56.294615 kernel: Key type dns_resolver registered
Aug 13 00:17:56.294634 kernel: registered taskstats version 1
Aug 13 00:17:56.294653 kernel: Loading compiled-in X.509 certificates
Aug 13 00:17:56.294672 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: 7263800c6d21650660e2b030c1023dce09b1e8b6'
Aug 13 00:17:56.294691 kernel: Key type .fscrypt registered
Aug 13 00:17:56.294709 kernel: Key type fscrypt-provisioning registered
Aug 13 00:17:56.294734 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 00:17:56.294753 kernel: ima: Allocated hash algorithm: sha1
Aug 13 00:17:56.294773 kernel: ima: No architecture policies found
Aug 13 00:17:56.294792 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Aug 13 00:17:56.294811 kernel: clk: Disabling unused clocks
Aug 13 00:17:56.294830 kernel: Freeing unused kernel memory: 39424K
Aug 13 00:17:56.294849 kernel: Run /init as init process
Aug 13 00:17:56.294868 kernel: with arguments:
Aug 13 00:17:56.294886 kernel: /init
Aug 13 00:17:56.294905 kernel: with environment:
Aug 13 00:17:56.294928 kernel: HOME=/
Aug 13 00:17:56.294947 kernel: TERM=linux
Aug 13 00:17:56.294965 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 00:17:56.294988 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 13 00:17:56.295013 systemd[1]: Detected virtualization amazon.
Aug 13 00:17:56.295035 systemd[1]: Detected architecture arm64.
Aug 13 00:17:56.295054 systemd[1]: Running in initrd.
Aug 13 00:17:56.295079 systemd[1]: No hostname configured, using default hostname.
Aug 13 00:17:56.295100 systemd[1]: Hostname set to .
Aug 13 00:17:56.295121 systemd[1]: Initializing machine ID from VM UUID.
Aug 13 00:17:56.295142 systemd[1]: Queued start job for default target initrd.target.
Aug 13 00:17:56.295184 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 00:17:56.295210 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 00:17:56.295232 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 13 00:17:56.295254 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 00:17:56.295280 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 13 00:17:56.295302 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 13 00:17:56.295326 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 13 00:17:56.295347 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 13 00:17:56.295368 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 00:17:56.295388 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 00:17:56.295409 systemd[1]: Reached target paths.target - Path Units.
Aug 13 00:17:56.295433 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 00:17:56.295484 systemd[1]: Reached target swap.target - Swaps.
Aug 13 00:17:56.295511 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 00:17:56.295532 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 00:17:56.295552 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 00:17:56.295573 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 13 00:17:56.295593 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug 13 00:17:56.295613 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 00:17:56.295634 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 00:17:56.295661 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 00:17:56.295682 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 00:17:56.295702 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 13 00:17:56.295722 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 00:17:56.295743 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 13 00:17:56.295763 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 00:17:56.295783 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 00:17:56.295804 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 00:17:56.295828 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:17:56.295849 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 13 00:17:56.295870 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 00:17:56.295890 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 00:17:56.295912 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 00:17:56.295997 systemd-journald[251]: Collecting audit messages is disabled.
Aug 13 00:17:56.296042 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 00:17:56.296064 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:17:56.296089 kernel: Bridge firewalling registered
Aug 13 00:17:56.296110 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 00:17:56.296131 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 00:17:56.296151 systemd-journald[251]: Journal started
Aug 13 00:17:56.296189 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2661e03fbe16e782aec2abd18b183b) is 8.0M, max 75.3M, 67.3M free.
Aug 13 00:17:56.235566 systemd-modules-load[252]: Inserted module 'overlay'
Aug 13 00:17:56.277111 systemd-modules-load[252]: Inserted module 'br_netfilter'
Aug 13 00:17:56.306772 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 00:17:56.322505 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 00:17:56.342132 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 00:17:56.342205 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 00:17:56.352744 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 00:17:56.362555 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 00:17:56.367104 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:17:56.374708 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 13 00:17:56.395816 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 00:17:56.417586 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 00:17:56.429154 dracut-cmdline[284]: dracut-dracut-053
Aug 13 00:17:56.431794 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 00:17:56.447478 dracut-cmdline[284]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2f9df6e9e6c671c457040a64675390bbff42294b08c628cd2dc472ed8120146a
Aug 13 00:17:56.506961 systemd-resolved[294]: Positive Trust Anchors:
Aug 13 00:17:56.507903 systemd-resolved[294]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 00:17:56.507967 systemd-resolved[294]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 00:17:56.614710 kernel: SCSI subsystem initialized
Aug 13 00:17:56.622583 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 00:17:56.634569 kernel: iscsi: registered transport (tcp)
Aug 13 00:17:56.657291 kernel: iscsi: registered transport (qla4xxx)
Aug 13 00:17:56.657364 kernel: QLogic iSCSI HBA Driver
Aug 13 00:17:56.731497 kernel: random: crng init done
Aug 13 00:17:56.731973 systemd-resolved[294]: Defaulting to hostname 'linux'.
Aug 13 00:17:56.736136 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 00:17:56.738695 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 00:17:56.765785 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 13 00:17:56.773993 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 13 00:17:56.808588 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 00:17:56.808677 kernel: device-mapper: uevent: version 1.0.3
Aug 13 00:17:56.808719 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 13 00:17:56.876503 kernel: raid6: neonx8 gen() 6711 MB/s
Aug 13 00:17:56.893490 kernel: raid6: neonx4 gen() 6464 MB/s
Aug 13 00:17:56.910489 kernel: raid6: neonx2 gen() 5397 MB/s
Aug 13 00:17:56.927489 kernel: raid6: neonx1 gen() 3925 MB/s
Aug 13 00:17:56.944489 kernel: raid6: int64x8 gen() 3801 MB/s
Aug 13 00:17:56.961489 kernel: raid6: int64x4 gen() 3689 MB/s
Aug 13 00:17:56.978491 kernel: raid6: int64x2 gen() 3571 MB/s
Aug 13 00:17:56.996525 kernel: raid6: int64x1 gen() 2772 MB/s
Aug 13 00:17:56.996557 kernel: raid6: using algorithm neonx8 gen() 6711 MB/s
Aug 13 00:17:57.015486 kernel: raid6: .... xor() 4872 MB/s, rmw enabled
Aug 13 00:17:57.015526 kernel: raid6: using neon recovery algorithm
Aug 13 00:17:57.023501 kernel: xor: measuring software checksum speed
Aug 13 00:17:57.025732 kernel: 8regs : 10261 MB/sec
Aug 13 00:17:57.025770 kernel: 32regs : 11904 MB/sec
Aug 13 00:17:57.027040 kernel: arm64_neon : 9506 MB/sec
Aug 13 00:17:57.027073 kernel: xor: using function: 32regs (11904 MB/sec)
Aug 13 00:17:57.111509 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 13 00:17:57.131233 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 00:17:57.144768 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 00:17:57.190827 systemd-udevd[470]: Using default interface naming scheme 'v255'.
Aug 13 00:17:57.198974 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 00:17:57.211850 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 13 00:17:57.250654 dracut-pre-trigger[471]: rd.md=0: removing MD RAID activation
Aug 13 00:17:57.307987 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 00:17:57.323897 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 00:17:57.437532 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 00:17:57.451799 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 13 00:17:57.490561 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 13 00:17:57.493429 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 00:17:57.497402 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 00:17:57.506285 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 00:17:57.519750 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 13 00:17:57.564827 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 00:17:57.650753 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Aug 13 00:17:57.651834 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Aug 13 00:17:57.649897 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 00:17:57.650165 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:17:57.667785 kernel: ena 0000:00:05.0: ENA device version: 0.10
Aug 13 00:17:57.668101 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Aug 13 00:17:57.653149 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 00:17:57.655649 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 00:17:57.655914 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:17:57.663257 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:17:57.685347 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:17:57.694574 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:17:57:11:2b:23
Aug 13 00:17:57.700288 (udev-worker)[540]: Network interface NamePolicy= disabled on kernel command line.
Aug 13 00:17:57.712008 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Aug 13 00:17:57.712074 kernel: nvme nvme0: pci function 0000:00:04.0
Aug 13 00:17:57.724083 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Aug 13 00:17:57.733995 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:17:57.749580 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 13 00:17:57.749619 kernel: GPT:9289727 != 16777215
Aug 13 00:17:57.749655 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 13 00:17:57.749681 kernel: GPT:9289727 != 16777215
Aug 13 00:17:57.749706 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 13 00:17:57.749729 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Aug 13 00:17:57.754395 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 00:17:57.788086 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:17:57.873509 kernel: BTRFS: device fsid 03408483-5051-409a-aab4-4e6d5027e982 devid 1 transid 41 /dev/nvme0n1p3 scanned by (udev-worker) (534)
Aug 13 00:17:57.888487 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (528)
Aug 13 00:17:57.930347 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Aug 13 00:17:57.971304 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Aug 13 00:17:58.003096 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Aug 13 00:17:58.009706 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Aug 13 00:17:58.025945 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Aug 13 00:17:58.043702 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 13 00:17:58.057250 disk-uuid[659]: Primary Header is updated.
Aug 13 00:17:58.057250 disk-uuid[659]: Secondary Entries is updated.
Aug 13 00:17:58.057250 disk-uuid[659]: Secondary Header is updated.
Aug 13 00:17:58.069520 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Aug 13 00:17:58.078508 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Aug 13 00:17:58.087508 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Aug 13 00:17:59.091822 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Aug 13 00:17:59.091900 disk-uuid[660]: The operation has completed successfully.
Aug 13 00:17:59.274639 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 13 00:17:59.274859 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 13 00:17:59.356775 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 13 00:17:59.367565 sh[1004]: Success
Aug 13 00:17:59.383675 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Aug 13 00:17:59.484827 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 13 00:17:59.501692 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 13 00:17:59.517311 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 13 00:17:59.544624 kernel: BTRFS info (device dm-0): first mount of filesystem 03408483-5051-409a-aab4-4e6d5027e982
Aug 13 00:17:59.544687 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Aug 13 00:17:59.547490 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 13 00:17:59.547543 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 13 00:17:59.548002 kernel: BTRFS info (device dm-0): using free space tree
Aug 13 00:17:59.697509 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Aug 13 00:17:59.733285 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 13 00:17:59.741339 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 13 00:17:59.750820 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 13 00:17:59.761777 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 13 00:17:59.795273 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:17:59.795362 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Aug 13 00:17:59.795400 kernel: BTRFS info (device nvme0n1p6): using free space tree
Aug 13 00:17:59.812504 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Aug 13 00:17:59.830856 systemd[1]: mnt-oem.mount: Deactivated successfully.
Aug 13 00:17:59.842518 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:17:59.853540 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 13 00:17:59.866881 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 13 00:17:59.972575 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 00:17:59.982801 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 00:18:00.059315 systemd-networkd[1196]: lo: Link UP
Aug 13 00:18:00.059333 systemd-networkd[1196]: lo: Gained carrier
Aug 13 00:18:00.064330 systemd-networkd[1196]: Enumeration completed
Aug 13 00:18:00.065538 systemd-networkd[1196]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 00:18:00.065546 systemd-networkd[1196]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 00:18:00.067360 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 00:18:00.071717 systemd[1]: Reached target network.target - Network.
Aug 13 00:18:00.072255 systemd-networkd[1196]: eth0: Link UP
Aug 13 00:18:00.072262 systemd-networkd[1196]: eth0: Gained carrier
Aug 13 00:18:00.072280 systemd-networkd[1196]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 00:18:00.105608 systemd-networkd[1196]: eth0: DHCPv4 address 172.31.31.36/20, gateway 172.31.16.1 acquired from 172.31.16.1
Aug 13 00:18:00.389848 ignition[1123]: Ignition 2.19.0
Aug 13 00:18:00.389879 ignition[1123]: Stage: fetch-offline
Aug 13 00:18:00.394396 ignition[1123]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:18:00.394445 ignition[1123]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 13 00:18:00.399585 ignition[1123]: Ignition finished successfully
Aug 13 00:18:00.404206 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 00:18:00.414768 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Aug 13 00:18:00.459792 ignition[1206]: Ignition 2.19.0
Aug 13 00:18:00.459815 ignition[1206]: Stage: fetch
Aug 13 00:18:00.460522 ignition[1206]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:18:00.460559 ignition[1206]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 13 00:18:00.460732 ignition[1206]: PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 13 00:18:00.480655 ignition[1206]: PUT result: OK
Aug 13 00:18:00.488527 ignition[1206]: parsed url from cmdline: ""
Aug 13 00:18:00.488547 ignition[1206]: no config URL provided
Aug 13 00:18:00.488565 ignition[1206]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 00:18:00.488594 ignition[1206]: no config at "/usr/lib/ignition/user.ign"
Aug 13 00:18:00.488634 ignition[1206]: PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 13 00:18:00.497684 ignition[1206]: PUT result: OK
Aug 13 00:18:00.497790 ignition[1206]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Aug 13 00:18:00.503393 ignition[1206]: GET result: OK
Aug 13 00:18:00.503712 ignition[1206]: parsing config with SHA512: 0fcf6fe8b60790843e84151f752f6f29231a37799493786a234e97c9268958621f4a88a5706b5e2c22b47635d05bfb164890a2be9936f65259de15b5fd9feaef
Aug 13 00:18:00.513166 unknown[1206]: fetched base config from "system"
Aug 13 00:18:00.513207 unknown[1206]: fetched base config from "system"
Aug 13 00:18:00.513222 unknown[1206]: fetched user config from "aws"
Aug 13 00:18:00.519187 ignition[1206]: fetch: fetch complete
Aug 13 00:18:00.525590 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Aug 13 00:18:00.519202 ignition[1206]: fetch: fetch passed
Aug 13 00:18:00.519331 ignition[1206]: Ignition finished successfully
Aug 13 00:18:00.546590 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 13 00:18:00.586808 ignition[1212]: Ignition 2.19.0
Aug 13 00:18:00.586849 ignition[1212]: Stage: kargs
Aug 13 00:18:00.588870 ignition[1212]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:18:00.588902 ignition[1212]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 13 00:18:00.589557 ignition[1212]: PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 13 00:18:00.592509 ignition[1212]: PUT result: OK
Aug 13 00:18:00.605476 ignition[1212]: kargs: kargs passed
Aug 13 00:18:00.605620 ignition[1212]: Ignition finished successfully
Aug 13 00:18:00.615682 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 13 00:18:00.632843 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 13 00:18:00.661639 ignition[1219]: Ignition 2.19.0
Aug 13 00:18:00.661696 ignition[1219]: Stage: disks
Aug 13 00:18:00.662421 ignition[1219]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:18:00.662451 ignition[1219]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 13 00:18:00.662688 ignition[1219]: PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 13 00:18:00.670522 ignition[1219]: PUT result: OK
Aug 13 00:18:00.680988 ignition[1219]: disks: disks passed
Aug 13 00:18:00.681183 ignition[1219]: Ignition finished successfully
Aug 13 00:18:00.685410 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 13 00:18:00.689494 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 13 00:18:00.694629 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 13 00:18:00.697602 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 00:18:00.702265 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 00:18:00.707131 systemd[1]: Reached target basic.target - Basic System.
Aug 13 00:18:00.721767 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 13 00:18:00.773903 systemd-fsck[1228]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Aug 13 00:18:00.781987 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 13 00:18:00.793775 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 13 00:18:00.891513 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 128aec8b-f05d-48ed-8996-c9e8b21a7810 r/w with ordered data mode. Quota mode: none.
Aug 13 00:18:00.892914 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 13 00:18:00.895732 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 13 00:18:00.913664 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 00:18:00.926686 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 13 00:18:00.936063 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 13 00:18:00.936190 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 00:18:00.936246 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 00:18:00.959507 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1247)
Aug 13 00:18:00.966584 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:18:00.966658 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Aug 13 00:18:00.968095 kernel: BTRFS info (device nvme0n1p6): using free space tree
Aug 13 00:18:00.967065 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 13 00:18:00.979763 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 13 00:18:00.990528 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Aug 13 00:18:00.993093 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 00:18:01.335948 initrd-setup-root[1271]: cut: /sysroot/etc/passwd: No such file or directory
Aug 13 00:18:01.347500 initrd-setup-root[1278]: cut: /sysroot/etc/group: No such file or directory
Aug 13 00:18:01.357431 initrd-setup-root[1285]: cut: /sysroot/etc/shadow: No such file or directory
Aug 13 00:18:01.366956 initrd-setup-root[1292]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 13 00:18:01.598673 systemd-networkd[1196]: eth0: Gained IPv6LL
Aug 13 00:18:01.606952 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 13 00:18:01.619794 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 13 00:18:01.626788 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 13 00:18:01.648855 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 13 00:18:01.651563 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:18:01.693105 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 13 00:18:01.697906 ignition[1360]: INFO : Ignition 2.19.0
Aug 13 00:18:01.697906 ignition[1360]: INFO : Stage: mount
Aug 13 00:18:01.701470 ignition[1360]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:18:01.701470 ignition[1360]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 13 00:18:01.706442 ignition[1360]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 13 00:18:01.711504 ignition[1360]: INFO : PUT result: OK
Aug 13 00:18:01.716534 ignition[1360]: INFO : mount: mount passed
Aug 13 00:18:01.718635 ignition[1360]: INFO : Ignition finished successfully
Aug 13 00:18:01.723238 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 13 00:18:01.733744 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 13 00:18:01.910768 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 00:18:01.932503 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1371)
Aug 13 00:18:01.936755 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:18:01.936821 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Aug 13 00:18:01.936861 kernel: BTRFS info (device nvme0n1p6): using free space tree
Aug 13 00:18:01.943512 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Aug 13 00:18:01.946176 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 00:18:01.979648 ignition[1389]: INFO : Ignition 2.19.0
Aug 13 00:18:01.979648 ignition[1389]: INFO : Stage: files
Aug 13 00:18:01.984274 ignition[1389]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:18:01.984274 ignition[1389]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 13 00:18:01.984274 ignition[1389]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 13 00:18:01.991861 ignition[1389]: INFO : PUT result: OK
Aug 13 00:18:02.001113 ignition[1389]: DEBUG : files: compiled without relabeling support, skipping
Aug 13 00:18:02.004694 ignition[1389]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 13 00:18:02.004694 ignition[1389]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 13 00:18:02.032245 ignition[1389]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 13 00:18:02.035137 ignition[1389]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 13 00:18:02.035137 ignition[1389]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 13 00:18:02.034545 unknown[1389]: wrote ssh authorized keys file for user: core
Aug 13 00:18:02.043972 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Aug 13 00:18:02.048181 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Aug 13 00:18:02.160565 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 13 00:18:02.605345 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Aug 13 00:18:02.609895 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Aug 13 00:18:02.609895 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 00:18:02.609895 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 00:18:02.609895 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 00:18:02.609895 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 00:18:02.609895 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 00:18:02.609895 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 00:18:02.609895 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 00:18:02.609895 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 00:18:02.609895 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 00:18:02.609895 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Aug 13 00:18:02.609895 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Aug 13 00:18:02.609895 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Aug 13 00:18:02.609895 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Aug 13 00:18:02.951812 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Aug 13 00:18:03.363180 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Aug 13 00:18:03.363180 ignition[1389]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Aug 13 00:18:03.371782 ignition[1389]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 00:18:03.371782 ignition[1389]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 00:18:03.371782 ignition[1389]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Aug 13 00:18:03.371782 ignition[1389]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Aug 13 00:18:03.371782 ignition[1389]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Aug 13 00:18:03.371782 ignition[1389]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 00:18:03.371782 ignition[1389]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 00:18:03.371782 ignition[1389]: INFO : files: files passed
Aug 13 00:18:03.371782 ignition[1389]: INFO : Ignition finished successfully
Aug 13 00:18:03.389448 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 13 00:18:03.410828 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 13 00:18:03.421825 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 13 00:18:03.433947 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 13 00:18:03.436155 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 13 00:18:03.453274 initrd-setup-root-after-ignition[1416]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:18:03.453274 initrd-setup-root-after-ignition[1416]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:18:03.461632 initrd-setup-root-after-ignition[1420]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:18:03.466527 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 00:18:03.472480 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 13 00:18:03.480858 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 13 00:18:03.532108 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 13 00:18:03.532510 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 13 00:18:03.540043 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 13 00:18:03.542843 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 13 00:18:03.549850 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 13 00:18:03.561701 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 13 00:18:03.592725 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 00:18:03.601966 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 13 00:18:03.631834 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 13 00:18:03.634579 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 00:18:03.638229 systemd[1]: Stopped target timers.target - Timer Units.
Aug 13 00:18:03.643108 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 13 00:18:03.643366 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 00:18:03.653796 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 13 00:18:03.656333 systemd[1]: Stopped target basic.target - Basic System.
Aug 13 00:18:03.658589 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 13 00:18:03.666978 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 00:18:03.669693 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 13 00:18:03.673026 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 13 00:18:03.677982 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 00:18:03.686805 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 13 00:18:03.689943 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 13 00:18:03.696364 systemd[1]: Stopped target swap.target - Swaps.
Aug 13 00:18:03.698401 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 13 00:18:03.698669 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 00:18:03.707625 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 13 00:18:03.710249 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 00:18:03.715168 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 13 00:18:03.715387 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 00:18:03.720606 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 13 00:18:03.722758 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 13 00:18:03.727027 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 13 00:18:03.727263 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 00:18:03.728057 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 13 00:18:03.728249 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 13 00:18:03.756753 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 13 00:18:03.761538 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 13 00:18:03.764100 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 00:18:03.775961 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 13 00:18:03.781186 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 13 00:18:03.781661 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 00:18:03.789890 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 13 00:18:03.790259 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 00:18:03.812121 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 13 00:18:03.814489 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 13 00:18:03.824729 ignition[1440]: INFO : Ignition 2.19.0
Aug 13 00:18:03.824729 ignition[1440]: INFO : Stage: umount
Aug 13 00:18:03.828643 ignition[1440]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:18:03.828643 ignition[1440]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 13 00:18:03.833356 ignition[1440]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 13 00:18:03.840043 ignition[1440]: INFO : PUT result: OK
Aug 13 00:18:03.844650 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 13 00:18:03.847128 ignition[1440]: INFO : umount: umount passed
Aug 13 00:18:03.851329 ignition[1440]: INFO : Ignition finished successfully
Aug 13 00:18:03.856005 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 13 00:18:03.858064 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 13 00:18:03.861143 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 13 00:18:03.861310 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 13 00:18:03.863941 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 13 00:18:03.864098 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 13 00:18:03.868270 systemd[1]: ignition-fetch.service: Deactivated successfully.
Aug 13 00:18:03.868356 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Aug 13 00:18:03.870598 systemd[1]: Stopped target network.target - Network.
Aug 13 00:18:03.874545 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 13 00:18:03.874992 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 00:18:03.876654 systemd[1]: Stopped target paths.target - Path Units.
Aug 13 00:18:03.881042 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 13 00:18:03.885191 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 00:18:03.885316 systemd[1]: Stopped target slices.target - Slice Units.
Aug 13 00:18:03.889682 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 13 00:18:03.895962 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 13 00:18:03.896048 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 00:18:03.899533 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 13 00:18:03.899606 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 00:18:03.901843 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 13 00:18:03.901932 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 13 00:18:03.904115 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 13 00:18:03.904196 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 13 00:18:03.906692 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 13 00:18:03.910544 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 13 00:18:03.913290 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 13 00:18:03.913504 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 13 00:18:03.918540 systemd-networkd[1196]: eth0: DHCPv6 lease lost
Aug 13 00:18:03.923060 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 13 00:18:03.923239 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 13 00:18:03.938737 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 13 00:18:03.940997 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 13 00:18:03.950677 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 13 00:18:03.950987 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 13 00:18:03.963039 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 13 00:18:03.964933 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 00:18:04.006752 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 13 00:18:04.008736 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 13 00:18:04.008855 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 00:18:04.011888 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 00:18:04.011990 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 13 00:18:04.025640 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 13 00:18:04.025747 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 13 00:18:04.032142 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 13 00:18:04.032237 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 00:18:04.035098 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 00:18:04.063261 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 13 00:18:04.065732 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 00:18:04.071935 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 13 00:18:04.072273 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 13 00:18:04.077698 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 13 00:18:04.077840 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 13 00:18:04.085685 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 13 00:18:04.085770 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 00:18:04.088658 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 13 00:18:04.088755 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 00:18:04.092840 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 13 00:18:04.092929 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 13 00:18:04.105717 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 00:18:04.105825 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:18:04.116728 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 13 00:18:04.119145 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 13 00:18:04.119255 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 00:18:04.122002 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 00:18:04.122085 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:18:04.155912 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 13 00:18:04.157952 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 13 00:18:04.163671 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 13 00:18:04.176698 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 13 00:18:04.196279 systemd[1]: Switching root.
Aug 13 00:18:04.240501 systemd-journald[251]: Journal stopped
Aug 13 00:18:06.121614 systemd-journald[251]: Received SIGTERM from PID 1 (systemd).
Aug 13 00:18:06.121751 kernel: SELinux: policy capability network_peer_controls=1
Aug 13 00:18:06.121795 kernel: SELinux: policy capability open_perms=1
Aug 13 00:18:06.121826 kernel: SELinux: policy capability extended_socket_class=1
Aug 13 00:18:06.121856 kernel: SELinux: policy capability always_check_network=0
Aug 13 00:18:06.121893 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 13 00:18:06.121922 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 13 00:18:06.121951 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 13 00:18:06.121981 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 13 00:18:06.122019 kernel: audit: type=1403 audit(1755044284.508:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 13 00:18:06.122056 systemd[1]: Successfully loaded SELinux policy in 50.408ms.
Aug 13 00:18:06.122101 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.142ms.
Aug 13 00:18:06.122135 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 13 00:18:06.122169 systemd[1]: Detected virtualization amazon.
Aug 13 00:18:06.122203 systemd[1]: Detected architecture arm64.
Aug 13 00:18:06.122235 systemd[1]: Detected first boot.
Aug 13 00:18:06.122267 systemd[1]: Initializing machine ID from VM UUID.
Aug 13 00:18:06.122299 zram_generator::config[1483]: No configuration found.
Aug 13 00:18:06.122334 systemd[1]: Populated /etc with preset unit settings.
Aug 13 00:18:06.122363 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 13 00:18:06.122395 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug 13 00:18:06.122425 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 13 00:18:06.124682 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 13 00:18:06.124732 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 13 00:18:06.124766 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 13 00:18:06.124796 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 13 00:18:06.124829 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 13 00:18:06.124861 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 13 00:18:06.124893 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 13 00:18:06.126519 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 13 00:18:06.126577 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 00:18:06.126612 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 00:18:06.126642 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 13 00:18:06.126675 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 13 00:18:06.126705 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 13 00:18:06.126739 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 00:18:06.126768 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Aug 13 00:18:06.126799 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 00:18:06.126832 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug 13 00:18:06.126868 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug 13 00:18:06.126900 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug 13 00:18:06.126930 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 13 00:18:06.126959 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 00:18:06.126993 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 00:18:06.127022 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 00:18:06.127108 systemd[1]: Reached target swap.target - Swaps.
Aug 13 00:18:06.127143 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 13 00:18:06.127181 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 13 00:18:06.127213 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 00:18:06.127245 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 00:18:06.127279 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 00:18:06.127309 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 13 00:18:06.127341 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 13 00:18:06.127374 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 13 00:18:06.127408 systemd[1]: Mounting media.mount - External Media Directory...
Aug 13 00:18:06.127438 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 13 00:18:06.127525 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 13 00:18:06.127563 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 13 00:18:06.127607 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 13 00:18:06.127641 systemd[1]: Reached target machines.target - Containers.
Aug 13 00:18:06.127673 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 13 00:18:06.127706 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 00:18:06.127739 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 00:18:06.127768 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 13 00:18:06.127798 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 00:18:06.127832 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 00:18:06.127862 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 00:18:06.127894 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 13 00:18:06.127926 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 00:18:06.127957 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 13 00:18:06.127989 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 13 00:18:06.128018 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Aug 13 00:18:06.128047 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 13 00:18:06.128080 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 13 00:18:06.128110 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 00:18:06.128140 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 00:18:06.128169 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 13 00:18:06.128198 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 13 00:18:06.128228 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 00:18:06.128259 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 13 00:18:06.128291 systemd[1]: Stopped verity-setup.service.
Aug 13 00:18:06.128321 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 13 00:18:06.128370 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 13 00:18:06.128408 systemd[1]: Mounted media.mount - External Media Directory.
Aug 13 00:18:06.128440 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 13 00:18:06.136358 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 13 00:18:06.136403 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 13 00:18:06.136443 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 00:18:06.136494 kernel: fuse: init (API version 7.39)
Aug 13 00:18:06.136527 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 13 00:18:06.136556 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 13 00:18:06.136587 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 00:18:06.136618 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 00:18:06.136646 kernel: loop: module loaded
Aug 13 00:18:06.136675 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 00:18:06.136707 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 00:18:06.136744 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 13 00:18:06.136775 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 13 00:18:06.136808 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 00:18:06.136842 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 00:18:06.136873 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 00:18:06.136908 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 13 00:18:06.136947 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 13 00:18:06.136977 kernel: ACPI: bus type drm_connector registered
Aug 13 00:18:06.137060 systemd-journald[1561]: Collecting audit messages is disabled.
Aug 13 00:18:06.137115 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 13 00:18:06.137150 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 13 00:18:06.137183 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 00:18:06.137224 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 00:18:06.137258 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 00:18:06.137293 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 00:18:06.137323 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 13 00:18:06.137356 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 13 00:18:06.137387 systemd-journald[1561]: Journal started
Aug 13 00:18:06.137438 systemd-journald[1561]: Runtime Journal (/run/log/journal/ec2661e03fbe16e782aec2abd18b183b) is 8.0M, max 75.3M, 67.3M free.
Aug 13 00:18:06.149944 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 13 00:18:06.150033 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 13 00:18:05.486911 systemd[1]: Queued start job for default target multi-user.target.
Aug 13 00:18:05.508341 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Aug 13 00:18:05.509125 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 13 00:18:06.157067 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 00:18:06.164695 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Aug 13 00:18:06.178982 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Aug 13 00:18:06.196842 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 13 00:18:06.196945 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 00:18:06.220492 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 13 00:18:06.220595 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 00:18:06.247482 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 13 00:18:06.247572 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 13 00:18:06.255133 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 00:18:06.257289 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 00:18:06.260215 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Aug 13 00:18:06.276128 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 13 00:18:06.339139 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 13 00:18:06.349662 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 13 00:18:06.355537 kernel: loop0: detected capacity change from 0 to 52536
Aug 13 00:18:06.362827 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 13 00:18:06.380336 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Aug 13 00:18:06.392756 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 13 00:18:06.434562 systemd-journald[1561]: Time spent on flushing to /var/log/journal/ec2661e03fbe16e782aec2abd18b183b is 53.450ms for 911 entries.
Aug 13 00:18:06.434562 systemd-journald[1561]: System Journal (/var/log/journal/ec2661e03fbe16e782aec2abd18b183b) is 8.0M, max 195.6M, 187.6M free.
Aug 13 00:18:06.509091 systemd-journald[1561]: Received client request to flush runtime journal.
Aug 13 00:18:06.509181 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 13 00:18:06.445927 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 13 00:18:06.451572 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Aug 13 00:18:06.512340 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 13 00:18:06.517705 kernel: loop1: detected capacity change from 0 to 114432
Aug 13 00:18:06.551386 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 00:18:06.566805 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Aug 13 00:18:06.571829 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 13 00:18:06.582596 kernel: loop2: detected capacity change from 0 to 207008
Aug 13 00:18:06.587732 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 00:18:06.624142 udevadm[1632]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Aug 13 00:18:06.683206 systemd-tmpfiles[1634]: ACLs are not supported, ignoring.
Aug 13 00:18:06.683243 systemd-tmpfiles[1634]: ACLs are not supported, ignoring.
Aug 13 00:18:06.703801 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
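
The journal-flush lines above record the runtime journal in /run being copied into the persistent journal under /var/log/journal. As a back-of-envelope sketch using only the figures quoted in those lines, the flush cost per entry is tiny, and journalctl can report what ultimately landed on disk:

    # Back-of-envelope check of the journald figures quoted above.
    flush_ms, entries = 53.450, 911
    print(f"{flush_ms / entries * 1000:.1f} us per entry")  # ~58.7 us

    # journald's own accounting of on-disk usage (requires journalctl in
    # PATH and, typically, a privileged or systemd-journal group user).
    import subprocess
    result = subprocess.run(["journalctl", "--disk-usage"],
                            capture_output=True, text=True)
    print(result.stdout.strip())
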
Aug 13 00:18:06.728681 kernel: loop3: detected capacity change from 0 to 114328
Aug 13 00:18:06.787607 kernel: loop4: detected capacity change from 0 to 52536
Aug 13 00:18:06.813561 kernel: loop5: detected capacity change from 0 to 114432
Aug 13 00:18:06.834081 kernel: loop6: detected capacity change from 0 to 207008
Aug 13 00:18:06.874506 kernel: loop7: detected capacity change from 0 to 114328
Aug 13 00:18:06.898840 (sd-merge)[1639]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Aug 13 00:18:06.900715 (sd-merge)[1639]: Merged extensions into '/usr'.
Aug 13 00:18:06.919255 systemd[1]: Reloading requested from client PID 1596 ('systemd-sysext') (unit systemd-sysext.service)...
Aug 13 00:18:06.919291 systemd[1]: Reloading...
Aug 13 00:18:07.137084 ldconfig[1589]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 13 00:18:07.160488 zram_generator::config[1666]: No configuration found.
Aug 13 00:18:07.413931 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 00:18:07.540260 systemd[1]: Reloading finished in 617 ms.
Aug 13 00:18:07.582520 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug 13 00:18:07.587131 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 13 00:18:07.590371 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Aug 13 00:18:07.605910 systemd[1]: Starting ensure-sysext.service...
Aug 13 00:18:07.611820 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 00:18:07.619810 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 00:18:07.639622 systemd[1]: Reloading requested from client PID 1719 ('systemctl') (unit ensure-sysext.service)...
Aug 13 00:18:07.639653 systemd[1]: Reloading...
Aug 13 00:18:07.664603 systemd-tmpfiles[1720]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 13 00:18:07.665268 systemd-tmpfiles[1720]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 13 00:18:07.667871 systemd-tmpfiles[1720]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 13 00:18:07.668603 systemd-tmpfiles[1720]: ACLs are not supported, ignoring.
Aug 13 00:18:07.668835 systemd-tmpfiles[1720]: ACLs are not supported, ignoring.
Aug 13 00:18:07.675842 systemd-tmpfiles[1720]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 00:18:07.675863 systemd-tmpfiles[1720]: Skipping /boot
Aug 13 00:18:07.694749 systemd-tmpfiles[1720]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 00:18:07.694914 systemd-tmpfiles[1720]: Skipping /boot
Aug 13 00:18:07.734631 systemd-udevd[1721]: Using default interface naming scheme 'v255'.
Aug 13 00:18:07.853576 zram_generator::config[1748]: No configuration found.
Aug 13 00:18:07.955578 (udev-worker)[1769]: Network interface NamePolicy= disabled on kernel command line.
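
The (sd-merge) lines above are systemd-sysext overlaying the listed extension images onto /usr; the kubernetes extension is the one Ignition staged earlier via the /etc/extensions/kubernetes.raw symlink. A small sketch of how those images can be enumerated from the paths this log names (systemd-sysext status is the canonical tool; this just resolves the symlinks):

    # List extension images the way the (sd-merge) lines report them.
    # /etc/extensions is one of the directories systemd-sysext scans; the
    # kubernetes.raw symlink there was written by Ignition earlier in this log.
    import os

    ext_dir = "/etc/extensions"
    for name in sorted(os.listdir(ext_dir)):
        path = os.path.join(ext_dir, name)
        # resolves e.g. kubernetes.raw -> /opt/extensions/kubernetes/...
        print(f"{name} -> {os.path.realpath(path)}")
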
Aug 13 00:18:08.137502 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1782)
Aug 13 00:18:08.273372 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 00:18:08.444750 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Aug 13 00:18:08.448425 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Aug 13 00:18:08.449167 systemd[1]: Reloading finished in 808 ms.
Aug 13 00:18:08.486367 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 00:18:08.491912 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 00:18:08.533802 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Aug 13 00:18:08.576786 systemd[1]: Finished ensure-sysext.service.
Aug 13 00:18:08.603782 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 13 00:18:08.615794 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Aug 13 00:18:08.635727 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 00:18:08.642135 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Aug 13 00:18:08.651506 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 00:18:08.659993 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 00:18:08.666415 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 00:18:08.676982 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 00:18:08.680428 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 00:18:08.685924 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 13 00:18:08.691749 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Aug 13 00:18:08.702220 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 00:18:08.713742 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 00:18:08.718795 systemd[1]: Reached target time-set.target - System Time Set.
Aug 13 00:18:08.730941 lvm[1925]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 00:18:08.728780 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Aug 13 00:18:08.738770 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:18:08.761768 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 13 00:18:08.782004 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 00:18:08.782348 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 00:18:08.805072 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Aug 13 00:18:08.809339 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 00:18:08.811898 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 00:18:08.819833 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 00:18:08.839859 augenrules[1948]: No rules
Aug 13 00:18:08.837599 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Aug 13 00:18:08.849175 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 13 00:18:08.855299 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 00:18:08.857894 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 00:18:08.867202 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 00:18:08.881474 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 00:18:08.882899 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 00:18:08.884354 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 00:18:08.898611 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Aug 13 00:18:08.928998 lvm[1950]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 00:18:08.939600 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Aug 13 00:18:08.956907 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Aug 13 00:18:08.969260 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug 13 00:18:08.991386 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Aug 13 00:18:08.992431 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 13 00:18:09.012717 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Aug 13 00:18:09.037805 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Aug 13 00:18:09.049801 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:18:09.058186 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug 13 00:18:09.171201 systemd-networkd[1936]: lo: Link UP
Aug 13 00:18:09.171223 systemd-networkd[1936]: lo: Gained carrier
Aug 13 00:18:09.174154 systemd-networkd[1936]: Enumeration completed
Aug 13 00:18:09.174389 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 00:18:09.180033 systemd-networkd[1936]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 00:18:09.180060 systemd-networkd[1936]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 00:18:09.189851 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 13 00:18:09.194686 systemd-resolved[1937]: Positive Trust Anchors:
Aug 13 00:18:09.194717 systemd-resolved[1937]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 00:18:09.194781 systemd-resolved[1937]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 00:18:09.195051 systemd-networkd[1936]: eth0: Link UP
Aug 13 00:18:09.195349 systemd-networkd[1936]: eth0: Gained carrier
Aug 13 00:18:09.195385 systemd-networkd[1936]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 00:18:09.205599 systemd-networkd[1936]: eth0: DHCPv4 address 172.31.31.36/20, gateway 172.31.16.1 acquired from 172.31.16.1
Aug 13 00:18:09.216800 systemd-resolved[1937]: Defaulting to hostname 'linux'.
Aug 13 00:18:09.221844 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 00:18:09.224627 systemd[1]: Reached target network.target - Network.
Aug 13 00:18:09.226598 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 00:18:09.229178 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 00:18:09.231768 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Aug 13 00:18:09.234630 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Aug 13 00:18:09.237874 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Aug 13 00:18:09.240594 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Aug 13 00:18:09.243424 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Aug 13 00:18:09.246293 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 13 00:18:09.246350 systemd[1]: Reached target paths.target - Path Units.
Aug 13 00:18:09.248405 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 00:18:09.251832 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Aug 13 00:18:09.256888 systemd[1]: Starting docker.socket - Docker Socket for the API...
Aug 13 00:18:09.271803 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Aug 13 00:18:09.275113 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Aug 13 00:18:09.277719 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 00:18:09.279840 systemd[1]: Reached target basic.target - Basic System.
Aug 13 00:18:09.281890 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Aug 13 00:18:09.281944 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Aug 13 00:18:09.284058 systemd[1]: Starting containerd.service - containerd container runtime...
Aug 13 00:18:09.299821 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Aug 13 00:18:09.305781 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
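
The DHCPv4 line above is internally consistent: with a /20 prefix, 172.31.31.36 and the gateway 172.31.16.1 both fall in the 172.31.16.0/20 block, so the gateway is on-link. This can be verified with the Python standard library:

    # Verify the DHCPv4 lease arithmetic from the systemd-networkd line above.
    import ipaddress

    iface = ipaddress.ip_interface("172.31.31.36/20")
    gateway = ipaddress.ip_address("172.31.16.1")

    print(iface.network)             # 172.31.16.0/20
    print(gateway in iface.network)  # True: gateway is on-link
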
Aug 13 00:18:09.315759 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Aug 13 00:18:09.322823 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Aug 13 00:18:09.331816 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Aug 13 00:18:09.340006 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Aug 13 00:18:09.353427 systemd[1]: Started ntpd.service - Network Time Service.
Aug 13 00:18:09.367958 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Aug 13 00:18:09.375801 systemd[1]: Starting setup-oem.service - Setup OEM...
Aug 13 00:18:09.388048 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Aug 13 00:18:09.398799 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Aug 13 00:18:09.409973 systemd[1]: Starting systemd-logind.service - User Login Management...
Aug 13 00:18:09.413920 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Aug 13 00:18:09.415284 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Aug 13 00:18:09.438302 jq[1983]: false
Aug 13 00:18:09.438951 systemd[1]: Starting update-engine.service - Update Engine...
Aug 13 00:18:09.450653 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Aug 13 00:18:09.461394 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug 13 00:18:09.461809 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Aug 13 00:18:09.506226 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Aug 13 00:18:09.505906 dbus-daemon[1982]: [system] SELinux support is enabled
Aug 13 00:18:09.512317 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Aug 13 00:18:09.512364 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Aug 13 00:18:09.516709 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Aug 13 00:18:09.516750 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Aug 13 00:18:09.541550 dbus-daemon[1982]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1936 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Aug 13 00:18:09.544809 jq[1996]: true
Aug 13 00:18:09.546665 dbus-daemon[1982]: [system] Successfully activated service 'org.freedesktop.systemd1'
Aug 13 00:18:09.563040 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Aug 13 00:18:09.587287 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 13 00:18:09.587691 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
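
The dbus-daemon lines above show bus activation: systemd-networkd asked for org.freedesktop.hostname1, and dbus-daemon handed the request to systemd, which starts systemd-hostnamed on demand. Merely calling the well-known name is enough to trigger that hand-off; a sketch using busctl (shipped with systemd) wrapped in Python, with the printed hostname purely illustrative:

    # Calling a well-known bus name triggers the activation hand-off that
    # dbus-daemon logs above; busctl ships with systemd.
    import subprocess

    out = subprocess.run(
        ["busctl", "call",
         "org.freedesktop.hostname1", "/org/freedesktop/hostname1",
         "org.freedesktop.DBus.Properties", "Get",
         "ss", "org.freedesktop.hostname1", "Hostname"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out)  # variant reply, e.g. something like: v s "ip-172-31-31-36"
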
Aug 13 00:18:09.609911 extend-filesystems[1984]: Found loop4
Aug 13 00:18:09.639604 extend-filesystems[1984]: Found loop5
Aug 13 00:18:09.639604 extend-filesystems[1984]: Found loop6
Aug 13 00:18:09.639604 extend-filesystems[1984]: Found loop7
Aug 13 00:18:09.639604 extend-filesystems[1984]: Found nvme0n1
Aug 13 00:18:09.639604 extend-filesystems[1984]: Found nvme0n1p1
Aug 13 00:18:09.639604 extend-filesystems[1984]: Found nvme0n1p2
Aug 13 00:18:09.639604 extend-filesystems[1984]: Found nvme0n1p3
Aug 13 00:18:09.639604 extend-filesystems[1984]: Found usr
Aug 13 00:18:09.639604 extend-filesystems[1984]: Found nvme0n1p4
Aug 13 00:18:09.639604 extend-filesystems[1984]: Found nvme0n1p6
Aug 13 00:18:09.639604 extend-filesystems[1984]: Found nvme0n1p7
Aug 13 00:18:09.639604 extend-filesystems[1984]: Found nvme0n1p9
Aug 13 00:18:09.639604 extend-filesystems[1984]: Checking size of /dev/nvme0n1p9
Aug 13 00:18:09.613992 ntpd[1986]: ntpd 4.2.8p17@1.4004-o Tue Aug 12 21:30:33 UTC 2025 (1): Starting
Aug 13 00:18:09.625451 systemd[1]: motdgen.service: Deactivated successfully.
Aug 13 00:18:09.706280 ntpd[1986]: 13 Aug 00:18:09 ntpd[1986]: ntpd 4.2.8p17@1.4004-o Tue Aug 12 21:30:33 UTC 2025 (1): Starting
Aug 13 00:18:09.706280 ntpd[1986]: 13 Aug 00:18:09 ntpd[1986]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Aug 13 00:18:09.706280 ntpd[1986]: 13 Aug 00:18:09 ntpd[1986]: ----------------------------------------------------
Aug 13 00:18:09.706280 ntpd[1986]: 13 Aug 00:18:09 ntpd[1986]: ntp-4 is maintained by Network Time Foundation,
Aug 13 00:18:09.706280 ntpd[1986]: 13 Aug 00:18:09 ntpd[1986]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Aug 13 00:18:09.706280 ntpd[1986]: 13 Aug 00:18:09 ntpd[1986]: corporation. Support and training for ntp-4 are
Aug 13 00:18:09.706280 ntpd[1986]: 13 Aug 00:18:09 ntpd[1986]: available at https://www.nwtime.org/support
Aug 13 00:18:09.706280 ntpd[1986]: 13 Aug 00:18:09 ntpd[1986]: ----------------------------------------------------
Aug 13 00:18:09.706280 ntpd[1986]: 13 Aug 00:18:09 ntpd[1986]: proto: precision = 0.108 usec (-23)
Aug 13 00:18:09.706280 ntpd[1986]: 13 Aug 00:18:09 ntpd[1986]: basedate set to 2025-07-31
Aug 13 00:18:09.706280 ntpd[1986]: 13 Aug 00:18:09 ntpd[1986]: gps base set to 2025-08-03 (week 2378)
Aug 13 00:18:09.706280 ntpd[1986]: 13 Aug 00:18:09 ntpd[1986]: Listen and drop on 0 v6wildcard [::]:123
Aug 13 00:18:09.706280 ntpd[1986]: 13 Aug 00:18:09 ntpd[1986]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Aug 13 00:18:09.706280 ntpd[1986]: 13 Aug 00:18:09 ntpd[1986]: Listen normally on 2 lo 127.0.0.1:123
Aug 13 00:18:09.706280 ntpd[1986]: 13 Aug 00:18:09 ntpd[1986]: Listen normally on 3 eth0 172.31.31.36:123
Aug 13 00:18:09.706280 ntpd[1986]: 13 Aug 00:18:09 ntpd[1986]: Listen normally on 4 lo [::1]:123
Aug 13 00:18:09.706280 ntpd[1986]: 13 Aug 00:18:09 ntpd[1986]: bind(21) AF_INET6 fe80::417:57ff:fe11:2b23%2#123 flags 0x11 failed: Cannot assign requested address
Aug 13 00:18:09.706280 ntpd[1986]: 13 Aug 00:18:09 ntpd[1986]: unable to create socket on eth0 (5) for fe80::417:57ff:fe11:2b23%2#123
Aug 13 00:18:09.706280 ntpd[1986]: 13 Aug 00:18:09 ntpd[1986]: failed to init interface for address fe80::417:57ff:fe11:2b23%2
Aug 13 00:18:09.706280 ntpd[1986]: 13 Aug 00:18:09 ntpd[1986]: Listening on routing socket on fd #21 for interface updates
Aug 13 00:18:09.706280 ntpd[1986]: 13 Aug 00:18:09 ntpd[1986]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Aug 13 00:18:09.706280 ntpd[1986]: 13 Aug 00:18:09 ntpd[1986]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Aug 13 00:18:09.711403 tar[2004]: linux-arm64/LICENSE
Aug 13 00:18:09.711403 tar[2004]: linux-arm64/helm
Aug 13 00:18:09.614040 ntpd[1986]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Aug 13 00:18:09.628105 (ntainerd)[2020]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Aug 13 00:18:09.718781 jq[2016]: true
Aug 13 00:18:09.614061 ntpd[1986]: ----------------------------------------------------
Aug 13 00:18:09.628162 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Aug 13 00:18:09.614079 ntpd[1986]: ntp-4 is maintained by Network Time Foundation,
Aug 13 00:18:09.614098 ntpd[1986]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Aug 13 00:18:09.614117 ntpd[1986]: corporation. Support and training for ntp-4 are
Aug 13 00:18:09.614135 ntpd[1986]: available at https://www.nwtime.org/support
Aug 13 00:18:09.614153 ntpd[1986]: ----------------------------------------------------
Aug 13 00:18:09.627859 ntpd[1986]: proto: precision = 0.108 usec (-23)
Aug 13 00:18:09.631130 ntpd[1986]: basedate set to 2025-07-31
Aug 13 00:18:09.631164 ntpd[1986]: gps base set to 2025-08-03 (week 2378)
Aug 13 00:18:09.634973 ntpd[1986]: Listen and drop on 0 v6wildcard [::]:123
Aug 13 00:18:09.635078 ntpd[1986]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Aug 13 00:18:09.635337 ntpd[1986]: Listen normally on 2 lo 127.0.0.1:123
Aug 13 00:18:09.635399 ntpd[1986]: Listen normally on 3 eth0 172.31.31.36:123
Aug 13 00:18:09.635489 ntpd[1986]: Listen normally on 4 lo [::1]:123
Aug 13 00:18:09.635564 ntpd[1986]: bind(21) AF_INET6 fe80::417:57ff:fe11:2b23%2#123 flags 0x11 failed: Cannot assign requested address
Aug 13 00:18:09.635603 ntpd[1986]: unable to create socket on eth0 (5) for fe80::417:57ff:fe11:2b23%2#123
Aug 13 00:18:09.635653 ntpd[1986]: failed to init interface for address fe80::417:57ff:fe11:2b23%2
Aug 13 00:18:09.635714 ntpd[1986]: Listening on routing socket on fd #21 for interface updates
Aug 13 00:18:09.653029 ntpd[1986]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Aug 13 00:18:09.653081 ntpd[1986]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Aug 13 00:18:09.747512 systemd[1]: Finished setup-oem.service - Setup OEM.
Aug 13 00:18:09.765739 extend-filesystems[1984]: Resized partition /dev/nvme0n1p9
Aug 13 00:18:09.774018 update_engine[1995]: I20250813 00:18:09.744086 1995 main.cc:92] Flatcar Update Engine starting
Aug 13 00:18:09.781712 extend-filesystems[2036]: resize2fs 1.47.1 (20-May-2024)
Aug 13 00:18:09.798795 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Aug 13 00:18:09.798849 update_engine[1995]: I20250813 00:18:09.795421 1995 update_check_scheduler.cc:74] Next update check in 5m37s
Aug 13 00:18:09.784384 systemd[1]: Started update-engine.service - Update Engine.
Aug 13 00:18:09.832838 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Aug 13 00:18:09.869921 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Aug 13 00:18:09.891315 extend-filesystems[2036]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Aug 13 00:18:09.891315 extend-filesystems[2036]: old_desc_blocks = 1, new_desc_blocks = 1
Aug 13 00:18:09.891315 extend-filesystems[2036]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Aug 13 00:18:09.915425 extend-filesystems[1984]: Resized filesystem in /dev/nvme0n1p9
Aug 13 00:18:09.900834 systemd[1]: extend-filesystems.service: Deactivated successfully.
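
The online-resize figures above are easy to sanity-check: at the 4 KiB block size resize2fs reports, growing from 553472 to 1489915 blocks takes the root filesystem from roughly 2.1 GiB to 5.7 GiB.

    # Sanity-check the EXT4 online-resize figures reported above.
    BLOCK = 4096  # "(4k) blocks" per the resize2fs output
    old_blocks, new_blocks = 553472, 1489915

    for label, blocks in (("before", old_blocks), ("after", new_blocks)):
        print(f"{label}: {blocks * BLOCK / 2**30:.2f} GiB")
    # before: 2.11 GiB
    # after:  5.68 GiB
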
Aug 13 00:18:09.901212 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Aug 13 00:18:09.926102 coreos-metadata[1981]: Aug 13 00:18:09.922 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Aug 13 00:18:09.927606 coreos-metadata[1981]: Aug 13 00:18:09.927 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Aug 13 00:18:09.929229 coreos-metadata[1981]: Aug 13 00:18:09.929 INFO Fetch successful
Aug 13 00:18:09.933695 coreos-metadata[1981]: Aug 13 00:18:09.929 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Aug 13 00:18:09.936293 coreos-metadata[1981]: Aug 13 00:18:09.936 INFO Fetch successful
Aug 13 00:18:09.936293 coreos-metadata[1981]: Aug 13 00:18:09.936 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Aug 13 00:18:09.937938 coreos-metadata[1981]: Aug 13 00:18:09.937 INFO Fetch successful
Aug 13 00:18:09.937938 coreos-metadata[1981]: Aug 13 00:18:09.937 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Aug 13 00:18:09.938967 coreos-metadata[1981]: Aug 13 00:18:09.938 INFO Fetch successful
Aug 13 00:18:09.938967 coreos-metadata[1981]: Aug 13 00:18:09.938 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Aug 13 00:18:09.939759 coreos-metadata[1981]: Aug 13 00:18:09.939 INFO Fetch failed with 404: resource not found
Aug 13 00:18:09.947786 coreos-metadata[1981]: Aug 13 00:18:09.939 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Aug 13 00:18:09.948092 coreos-metadata[1981]: Aug 13 00:18:09.947 INFO Fetch successful
Aug 13 00:18:09.948306 coreos-metadata[1981]: Aug 13 00:18:09.948 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Aug 13 00:18:09.950563 coreos-metadata[1981]: Aug 13 00:18:09.950 INFO Fetch successful
Aug 13 00:18:09.950765 coreos-metadata[1981]: Aug 13 00:18:09.950 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Aug 13 00:18:09.951945 coreos-metadata[1981]: Aug 13 00:18:09.951 INFO Fetch successful
Aug 13 00:18:09.952191 coreos-metadata[1981]: Aug 13 00:18:09.952 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Aug 13 00:18:09.959118 coreos-metadata[1981]: Aug 13 00:18:09.957 INFO Fetch successful
Aug 13 00:18:09.959118 coreos-metadata[1981]: Aug 13 00:18:09.957 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Aug 13 00:18:09.960654 coreos-metadata[1981]: Aug 13 00:18:09.960 INFO Fetch successful
Aug 13 00:18:10.079284 bash[2065]: Updated "/home/core/.ssh/authorized_keys"
Aug 13 00:18:10.086200 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Aug 13 00:18:10.114258 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1782)
Aug 13 00:18:10.143009 systemd[1]: Starting sshkeys.service...
Aug 13 00:18:10.151219 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Aug 13 00:18:10.155090 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
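
As the coreos-metadata lines above show, the agent walks a fixed list of IMDS paths and treats a 404 (here, the ipv6 attribute) as "not present for this instance" rather than a failure. A minimal Python sketch of that fetch-or-skip pattern, with token handling as in the earlier IMDSv2 example:

    # Fetch-or-skip pattern from the coreos-metadata lines above: a 404
    # means the attribute simply does not exist (e.g. no IPv6 here).
    import urllib.request
    from urllib.error import HTTPError

    def fetch_attr(path, token):
        req = urllib.request.Request(
            f"http://169.254.169.254{path}",
            headers={"X-aws-ec2-metadata-token": token},
        )
        try:
            with urllib.request.urlopen(req, timeout=2) as resp:
                return resp.read().decode()
        except HTTPError as err:
            if err.code == 404:  # "Fetch failed with 404: resource not found"
                return None
            raise

    # e.g. fetch_attr("/2021-01-03/meta-data/ipv6", token) -> None on this instance
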
Aug 13 00:18:10.157484 systemd-logind[1993]: Watching system buttons on /dev/input/event0 (Power Button)
Aug 13 00:18:10.160597 systemd-logind[1993]: Watching system buttons on /dev/input/event1 (Sleep Button)
Aug 13 00:18:10.163965 systemd-logind[1993]: New seat seat0.
Aug 13 00:18:10.175023 systemd[1]: Started systemd-logind.service - User Login Management.
Aug 13 00:18:10.197410 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Aug 13 00:18:10.298843 dbus-daemon[1982]: [system] Successfully activated service 'org.freedesktop.hostname1'
Aug 13 00:18:10.302902 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Aug 13 00:18:10.306375 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Aug 13 00:18:10.320721 dbus-daemon[1982]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2012 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Aug 13 00:18:10.334729 systemd[1]: Starting polkit.service - Authorization Manager...
Aug 13 00:18:10.383547 containerd[2020]: time="2025-08-13T00:18:10.382507269Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Aug 13 00:18:10.466808 polkitd[2078]: Started polkitd version 121
Aug 13 00:18:10.503104 polkitd[2078]: Loading rules from directory /etc/polkit-1/rules.d
Aug 13 00:18:10.503655 polkitd[2078]: Loading rules from directory /usr/share/polkit-1/rules.d
Aug 13 00:18:10.514883 polkitd[2078]: Finished loading, compiling and executing 2 rules
Aug 13 00:18:10.519250 dbus-daemon[1982]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Aug 13 00:18:10.519547 systemd[1]: Started polkit.service - Authorization Manager.
Aug 13 00:18:10.525470 containerd[2020]: time="2025-08-13T00:18:10.525133162Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Aug 13 00:18:10.527927 polkitd[2078]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Aug 13 00:18:10.546291 containerd[2020]: time="2025-08-13T00:18:10.546217234Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Aug 13 00:18:10.548231 containerd[2020]: time="2025-08-13T00:18:10.546436546Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug 13 00:18:10.548231 containerd[2020]: time="2025-08-13T00:18:10.546507766Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Aug 13 00:18:10.548231 containerd[2020]: time="2025-08-13T00:18:10.546828322Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Aug 13 00:18:10.548231 containerd[2020]: time="2025-08-13T00:18:10.546863722Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Aug 13 00:18:10.548231 containerd[2020]: time="2025-08-13T00:18:10.547014034Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 00:18:10.548231 containerd[2020]: time="2025-08-13T00:18:10.547051114Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Aug 13 00:18:10.548231 containerd[2020]: time="2025-08-13T00:18:10.547400866Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 00:18:10.548231 containerd[2020]: time="2025-08-13T00:18:10.547440274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Aug 13 00:18:10.548231 containerd[2020]: time="2025-08-13T00:18:10.547497802Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 00:18:10.548231 containerd[2020]: time="2025-08-13T00:18:10.547524850Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 13 00:18:10.548231 containerd[2020]: time="2025-08-13T00:18:10.547694974Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug 13 00:18:10.548231 containerd[2020]: time="2025-08-13T00:18:10.548155030Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 13 00:18:10.551563 containerd[2020]: time="2025-08-13T00:18:10.551503294Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 00:18:10.552265 containerd[2020]: time="2025-08-13T00:18:10.551704054Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug 13 00:18:10.552265 containerd[2020]: time="2025-08-13T00:18:10.551931550Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug 13 00:18:10.552265 containerd[2020]: time="2025-08-13T00:18:10.552042430Z" level=info msg="metadata content store policy set" policy=shared
Aug 13 00:18:10.559526 containerd[2020]: time="2025-08-13T00:18:10.559344298Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 13 00:18:10.559814 containerd[2020]: time="2025-08-13T00:18:10.559687702Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 13 00:18:10.560166 containerd[2020]: time="2025-08-13T00:18:10.559929202Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Aug 13 00:18:10.560166 containerd[2020]: time="2025-08-13T00:18:10.559998946Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Aug 13 00:18:10.560166 containerd[2020]: time="2025-08-13T00:18:10.560036038Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Aug 13 00:18:10.561476 containerd[2020]: time="2025-08-13T00:18:10.560670562Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Aug 13 00:18:10.561609 containerd[2020]: time="2025-08-13T00:18:10.561446482Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 13 00:18:10.562051 containerd[2020]: time="2025-08-13T00:18:10.562019626Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Aug 13 00:18:10.562189 containerd[2020]: time="2025-08-13T00:18:10.562161202Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Aug 13 00:18:10.563433 containerd[2020]: time="2025-08-13T00:18:10.563368354Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Aug 13 00:18:10.564258 containerd[2020]: time="2025-08-13T00:18:10.563586994Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug 13 00:18:10.564258 containerd[2020]: time="2025-08-13T00:18:10.563658646Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug 13 00:18:10.564258 containerd[2020]: time="2025-08-13T00:18:10.563696674Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug 13 00:18:10.564258 containerd[2020]: time="2025-08-13T00:18:10.563756326Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug 13 00:18:10.564258 containerd[2020]: time="2025-08-13T00:18:10.563834890Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 13 00:18:10.564258 containerd[2020]: time="2025-08-13T00:18:10.563870722Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Aug 13 00:18:10.564258 containerd[2020]: time="2025-08-13T00:18:10.563926198Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Aug 13 00:18:10.564258 containerd[2020]: time="2025-08-13T00:18:10.563958358Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Aug 13 00:18:10.564258 containerd[2020]: time="2025-08-13T00:18:10.564027106Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Aug 13 00:18:10.564258 containerd[2020]: time="2025-08-13T00:18:10.564062746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Aug 13 00:18:10.564258 containerd[2020]: time="2025-08-13T00:18:10.564118138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Aug 13 00:18:10.564258 containerd[2020]: time="2025-08-13T00:18:10.564153598Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Aug 13 00:18:10.564258 containerd[2020]: time="2025-08-13T00:18:10.564183154Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Aug 13 00:18:10.564258 containerd[2020]: time="2025-08-13T00:18:10.564214426Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Aug 13 00:18:10.565156 containerd[2020]: time="2025-08-13T00:18:10.564943366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..."
type=io.containerd.grpc.v1 Aug 13 00:18:10.565156 containerd[2020]: time="2025-08-13T00:18:10.564993190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 13 00:18:10.565156 containerd[2020]: time="2025-08-13T00:18:10.565025470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Aug 13 00:18:10.565156 containerd[2020]: time="2025-08-13T00:18:10.565060594Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Aug 13 00:18:10.565156 containerd[2020]: time="2025-08-13T00:18:10.565091710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 13 00:18:10.565422 containerd[2020]: time="2025-08-13T00:18:10.565125850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Aug 13 00:18:10.565904 containerd[2020]: time="2025-08-13T00:18:10.565549450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 13 00:18:10.565904 containerd[2020]: time="2025-08-13T00:18:10.565612882Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Aug 13 00:18:10.565904 containerd[2020]: time="2025-08-13T00:18:10.565662262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 13 00:18:10.565904 containerd[2020]: time="2025-08-13T00:18:10.565691554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 13 00:18:10.565904 containerd[2020]: time="2025-08-13T00:18:10.565718014Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 13 00:18:10.566266 containerd[2020]: time="2025-08-13T00:18:10.566210350Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 13 00:18:10.566628 containerd[2020]: time="2025-08-13T00:18:10.566593618Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 13 00:18:10.566764 containerd[2020]: time="2025-08-13T00:18:10.566734066Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 13 00:18:10.566897 containerd[2020]: time="2025-08-13T00:18:10.566867902Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 13 00:18:10.567913 containerd[2020]: time="2025-08-13T00:18:10.567866470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 13 00:18:10.568084 containerd[2020]: time="2025-08-13T00:18:10.568056586Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Aug 13 00:18:10.572508 containerd[2020]: time="2025-08-13T00:18:10.570581722Z" level=info msg="NRI interface is disabled by configuration." Aug 13 00:18:10.572508 containerd[2020]: time="2025-08-13T00:18:10.570639190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Aug 13 00:18:10.574292 containerd[2020]: time="2025-08-13T00:18:10.573125734Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 00:18:10.574292 containerd[2020]: time="2025-08-13T00:18:10.573260602Z" level=info msg="Connect containerd service" Aug 13 00:18:10.574292 containerd[2020]: time="2025-08-13T00:18:10.573363058Z" level=info msg="using legacy CRI server" Aug 13 00:18:10.574292 containerd[2020]: time="2025-08-13T00:18:10.573386098Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 00:18:10.574292 containerd[2020]: time="2025-08-13T00:18:10.573596230Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 00:18:10.577500 containerd[2020]: time="2025-08-13T00:18:10.577054390Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:18:10.580482 
containerd[2020]: time="2025-08-13T00:18:10.578442022Z" level=info msg="Start subscribing containerd event" Aug 13 00:18:10.580482 containerd[2020]: time="2025-08-13T00:18:10.579925858Z" level=info msg="Start recovering state" Aug 13 00:18:10.580482 containerd[2020]: time="2025-08-13T00:18:10.580066714Z" level=info msg="Start event monitor" Aug 13 00:18:10.580482 containerd[2020]: time="2025-08-13T00:18:10.580091122Z" level=info msg="Start snapshots syncer" Aug 13 00:18:10.580482 containerd[2020]: time="2025-08-13T00:18:10.580114498Z" level=info msg="Start cni network conf syncer for default" Aug 13 00:18:10.580482 containerd[2020]: time="2025-08-13T00:18:10.580133158Z" level=info msg="Start streaming server" Aug 13 00:18:10.589208 containerd[2020]: time="2025-08-13T00:18:10.589152550Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 00:18:10.590504 containerd[2020]: time="2025-08-13T00:18:10.590447830Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 00:18:10.592687 containerd[2020]: time="2025-08-13T00:18:10.592642102Z" level=info msg="containerd successfully booted in 0.211987s" Aug 13 00:18:10.592772 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 00:18:10.614077 coreos-metadata[2076]: Aug 13 00:18:10.611 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Aug 13 00:18:10.617869 coreos-metadata[2076]: Aug 13 00:18:10.614 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Aug 13 00:18:10.617869 coreos-metadata[2076]: Aug 13 00:18:10.616 INFO Fetch successful Aug 13 00:18:10.617869 coreos-metadata[2076]: Aug 13 00:18:10.616 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Aug 13 00:18:10.618121 ntpd[1986]: 13 Aug 00:18:10 ntpd[1986]: bind(24) AF_INET6 fe80::417:57ff:fe11:2b23%2#123 flags 0x11 failed: Cannot assign requested address Aug 13 00:18:10.618121 ntpd[1986]: 13 Aug 00:18:10 ntpd[1986]: unable to create socket on eth0 (6) for fe80::417:57ff:fe11:2b23%2#123 Aug 13 00:18:10.618121 ntpd[1986]: 13 Aug 00:18:10 ntpd[1986]: failed to init interface for address fe80::417:57ff:fe11:2b23%2 Aug 13 00:18:10.616805 ntpd[1986]: bind(24) AF_INET6 fe80::417:57ff:fe11:2b23%2#123 flags 0x11 failed: Cannot assign requested address Aug 13 00:18:10.618627 coreos-metadata[2076]: Aug 13 00:18:10.618 INFO Fetch successful Aug 13 00:18:10.616861 ntpd[1986]: unable to create socket on eth0 (6) for fe80::417:57ff:fe11:2b23%2#123 Aug 13 00:18:10.616890 ntpd[1986]: failed to init interface for address fe80::417:57ff:fe11:2b23%2 Aug 13 00:18:10.621946 systemd-hostnamed[2012]: Hostname set to (transient) Aug 13 00:18:10.621947 systemd-resolved[1937]: System hostname changed to 'ip-172-31-31-36'. Aug 13 00:18:10.623780 unknown[2076]: wrote ssh authorized keys file for user: core Aug 13 00:18:10.698005 update-ssh-keys[2146]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:18:10.700233 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Aug 13 00:18:10.712632 systemd[1]: Finished sshkeys.service. Aug 13 00:18:10.714546 locksmithd[2037]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 00:18:11.164052 sshd_keygen[2027]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 00:18:11.191157 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
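containerd announces two serving endpoints above, ttrpc and grpc on /run/containerd/containerd.sock, before systemd marks the unit started. A minimal sketch of verifying the grpc endpoint with the containerd Go client (module path github.com/containerd/containerd, matching the v1.7 series logged here; needs permission to read the socket):

    package main

    import (
        "context"
        "fmt"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Dial the grpc address the daemon announced above.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            panic(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "default")
        v, err := client.Version(ctx)
        if err != nil {
            panic(err)
        }
        // Expected to match the startup banner: version=v1.7.21,
        // revision=174e0d1785eeda18dc2beba45e1d5a188771636b.
        fmt.Println("containerd", v.Version, "revision", v.Revision)
    }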
Aug 13 00:18:11.200322 systemd-networkd[1936]: eth0: Gained IPv6LL Aug 13 00:18:11.209677 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 13 00:18:11.213286 systemd[1]: Reached target network-online.target - Network is Online. Aug 13 00:18:11.226758 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Aug 13 00:18:11.242971 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:18:11.248993 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 13 00:18:11.290307 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 13 00:18:11.303060 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 13 00:18:11.311999 systemd[1]: Started sshd@0-172.31.31.36:22-139.178.89.65:40176.service - OpenSSH per-connection server daemon (139.178.89.65:40176). Aug 13 00:18:11.367115 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 00:18:11.367571 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 13 00:18:11.380005 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 13 00:18:11.385550 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 13 00:18:11.439576 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 13 00:18:11.454051 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 13 00:18:11.465339 amazon-ssm-agent[2194]: Initializing new seelog logger Aug 13 00:18:11.465339 amazon-ssm-agent[2194]: New Seelog Logger Creation Complete Aug 13 00:18:11.465339 amazon-ssm-agent[2194]: 2025/08/13 00:18:11 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 13 00:18:11.465339 amazon-ssm-agent[2194]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Aug 13 00:18:11.465339 amazon-ssm-agent[2194]: 2025/08/13 00:18:11 processing appconfig overrides Aug 13 00:18:11.462129 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 13 00:18:11.468866 amazon-ssm-agent[2194]: 2025/08/13 00:18:11 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 13 00:18:11.468866 amazon-ssm-agent[2194]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Aug 13 00:18:11.468866 amazon-ssm-agent[2194]: 2025/08/13 00:18:11 processing appconfig overrides Aug 13 00:18:11.465127 systemd[1]: Reached target getty.target - Login Prompts. Aug 13 00:18:11.469068 amazon-ssm-agent[2194]: 2025/08/13 00:18:11 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 13 00:18:11.469068 amazon-ssm-agent[2194]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Aug 13 00:18:11.469148 amazon-ssm-agent[2194]: 2025/08/13 00:18:11 processing appconfig overrides Aug 13 00:18:11.474515 amazon-ssm-agent[2194]: 2025-08-13 00:18:11 INFO Proxy environment variables: Aug 13 00:18:11.478525 amazon-ssm-agent[2194]: 2025/08/13 00:18:11 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 13 00:18:11.478525 amazon-ssm-agent[2194]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Aug 13 00:18:11.478525 amazon-ssm-agent[2194]: 2025/08/13 00:18:11 processing appconfig overrides Aug 13 00:18:11.526768 tar[2004]: linux-arm64/README.md Aug 13 00:18:11.562133 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
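Among the units above, sshd-keygen generates fresh RSA, ECDSA and ED25519 host keys on first boot. Purely as an illustration of the ED25519 case, not of how the sshd-keygen unit itself is implemented, a Go sketch that creates a keypair and prints the public half plus a SHA256 fingerprint in the same style sshd later logs for accepted client keys:

    package main

    import (
        "crypto/ed25519"
        "crypto/rand"
        "fmt"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        pub, _, err := ed25519.GenerateKey(rand.Reader)
        if err != nil {
            panic(err)
        }
        sshPub, err := ssh.NewPublicKey(pub)
        if err != nil {
            panic(err)
        }
        // One "ssh-ed25519 AAAA..." line, plus a "SHA256:..." fingerprint
        // comparable to the ones in the "Accepted publickey" entries below.
        fmt.Print(string(ssh.MarshalAuthorizedKey(sshPub)))
        fmt.Println("fingerprint:", ssh.FingerprintSHA256(sshPub))
    }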
Aug 13 00:18:11.575621 amazon-ssm-agent[2194]: 2025-08-13 00:18:11 INFO no_proxy: Aug 13 00:18:11.608863 sshd[2203]: Accepted publickey for core from 139.178.89.65 port 40176 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:18:11.613656 sshd[2203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:18:11.637101 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 00:18:11.652964 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 00:18:11.661430 systemd-logind[1993]: New session 1 of user core. Aug 13 00:18:11.677660 amazon-ssm-agent[2194]: 2025-08-13 00:18:11 INFO https_proxy: Aug 13 00:18:11.699671 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 13 00:18:11.720973 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 13 00:18:11.749567 (systemd)[2227]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:18:11.775660 amazon-ssm-agent[2194]: 2025-08-13 00:18:11 INFO http_proxy: Aug 13 00:18:11.874541 amazon-ssm-agent[2194]: 2025-08-13 00:18:11 INFO Checking if agent identity type OnPrem can be assumed Aug 13 00:18:11.973954 amazon-ssm-agent[2194]: 2025-08-13 00:18:11 INFO Checking if agent identity type EC2 can be assumed Aug 13 00:18:12.033408 systemd[2227]: Queued start job for default target default.target. Aug 13 00:18:12.041809 systemd[2227]: Created slice app.slice - User Application Slice. Aug 13 00:18:12.041987 systemd[2227]: Reached target paths.target - Paths. Aug 13 00:18:12.042141 systemd[2227]: Reached target timers.target - Timers. Aug 13 00:18:12.050757 systemd[2227]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 00:18:12.071151 systemd[2227]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 00:18:12.071629 systemd[2227]: Reached target sockets.target - Sockets. Aug 13 00:18:12.071774 systemd[2227]: Reached target basic.target - Basic System. Aug 13 00:18:12.071959 systemd[2227]: Reached target default.target - Main User Target. Aug 13 00:18:12.072026 systemd[2227]: Startup finished in 300ms. Aug 13 00:18:12.072216 amazon-ssm-agent[2194]: 2025-08-13 00:18:11 INFO Agent will take identity from EC2 Aug 13 00:18:12.072409 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 00:18:12.088981 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 13 00:18:12.171475 amazon-ssm-agent[2194]: 2025-08-13 00:18:11 INFO [amazon-ssm-agent] using named pipe channel for IPC Aug 13 00:18:12.254680 systemd[1]: Started sshd@1-172.31.31.36:22-139.178.89.65:40192.service - OpenSSH per-connection server daemon (139.178.89.65:40192). Aug 13 00:18:12.261716 amazon-ssm-agent[2194]: 2025-08-13 00:18:11 INFO [amazon-ssm-agent] using named pipe channel for IPC Aug 13 00:18:12.261716 amazon-ssm-agent[2194]: 2025-08-13 00:18:11 INFO [amazon-ssm-agent] using named pipe channel for IPC Aug 13 00:18:12.261716 amazon-ssm-agent[2194]: 2025-08-13 00:18:11 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Aug 13 00:18:12.261716 amazon-ssm-agent[2194]: 2025-08-13 00:18:11 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Aug 13 00:18:12.261716 amazon-ssm-agent[2194]: 2025-08-13 00:18:11 INFO [amazon-ssm-agent] Starting Core Agent Aug 13 00:18:12.261716 amazon-ssm-agent[2194]: 2025-08-13 00:18:11 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Aug 13 00:18:12.261716 amazon-ssm-agent[2194]: 2025-08-13 00:18:11 INFO [Registrar] Starting registrar module Aug 13 00:18:12.261716 amazon-ssm-agent[2194]: 2025-08-13 00:18:11 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Aug 13 00:18:12.262142 amazon-ssm-agent[2194]: 2025-08-13 00:18:12 INFO [EC2Identity] EC2 registration was successful. Aug 13 00:18:12.262257 amazon-ssm-agent[2194]: 2025-08-13 00:18:12 INFO [CredentialRefresher] credentialRefresher has started Aug 13 00:18:12.262371 amazon-ssm-agent[2194]: 2025-08-13 00:18:12 INFO [CredentialRefresher] Starting credentials refresher loop Aug 13 00:18:12.262503 amazon-ssm-agent[2194]: 2025-08-13 00:18:12 INFO EC2RoleProvider Successfully connected with instance profile role credentials Aug 13 00:18:12.270316 amazon-ssm-agent[2194]: 2025-08-13 00:18:12 INFO [CredentialRefresher] Next credential rotation will be in 30.84156681036667 minutes Aug 13 00:18:12.438843 sshd[2241]: Accepted publickey for core from 139.178.89.65 port 40192 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:18:12.441678 sshd[2241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:18:12.451010 systemd-logind[1993]: New session 2 of user core. Aug 13 00:18:12.456745 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 13 00:18:12.588874 sshd[2241]: pam_unix(sshd:session): session closed for user core Aug 13 00:18:12.595147 systemd[1]: sshd@1-172.31.31.36:22-139.178.89.65:40192.service: Deactivated successfully. Aug 13 00:18:12.598056 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 00:18:12.600935 systemd-logind[1993]: Session 2 logged out. Waiting for processes to exit. Aug 13 00:18:12.603967 systemd-logind[1993]: Removed session 2. Aug 13 00:18:12.626991 systemd[1]: Started sshd@2-172.31.31.36:22-139.178.89.65:40196.service - OpenSSH per-connection server daemon (139.178.89.65:40196). Aug 13 00:18:12.808625 sshd[2248]: Accepted publickey for core from 139.178.89.65 port 40196 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:18:12.811221 sshd[2248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:18:12.820049 systemd-logind[1993]: New session 3 of user core. Aug 13 00:18:12.829786 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 13 00:18:12.961172 sshd[2248]: pam_unix(sshd:session): session closed for user core Aug 13 00:18:12.968017 systemd[1]: sshd@2-172.31.31.36:22-139.178.89.65:40196.service: Deactivated successfully. Aug 13 00:18:12.972275 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 00:18:12.976036 systemd-logind[1993]: Session 3 logged out. Waiting for processes to exit. Aug 13 00:18:12.978502 systemd-logind[1993]: Removed session 3. 
Aug 13 00:18:13.292009 amazon-ssm-agent[2194]: 2025-08-13 00:18:13 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Aug 13 00:18:13.393209 amazon-ssm-agent[2194]: 2025-08-13 00:18:13 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2255) started Aug 13 00:18:13.494243 amazon-ssm-agent[2194]: 2025-08-13 00:18:13 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Aug 13 00:18:13.615688 ntpd[1986]: Listen normally on 7 eth0 [fe80::417:57ff:fe11:2b23%2]:123 Aug 13 00:18:13.616239 ntpd[1986]: 13 Aug 00:18:13 ntpd[1986]: Listen normally on 7 eth0 [fe80::417:57ff:fe11:2b23%2]:123 Aug 13 00:18:13.732793 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:18:13.736235 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 13 00:18:13.739368 systemd[1]: Startup finished in 1.168s (kernel) + 8.698s (initrd) + 9.281s (userspace) = 19.147s. Aug 13 00:18:13.744563 (kubelet)[2270]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:18:14.922396 kubelet[2270]: E0813 00:18:14.922332 2270 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:18:14.927217 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:18:14.927618 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:18:14.928642 systemd[1]: kubelet.service: Consumed 1.386s CPU time. Aug 13 00:18:23.000661 systemd[1]: Started sshd@3-172.31.31.36:22-139.178.89.65:44296.service - OpenSSH per-connection server daemon (139.178.89.65:44296). Aug 13 00:18:23.177848 sshd[2282]: Accepted publickey for core from 139.178.89.65 port 44296 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:18:23.180541 sshd[2282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:18:23.189821 systemd-logind[1993]: New session 4 of user core. Aug 13 00:18:23.198753 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 13 00:18:23.324671 sshd[2282]: pam_unix(sshd:session): session closed for user core Aug 13 00:18:23.331166 systemd[1]: sshd@3-172.31.31.36:22-139.178.89.65:44296.service: Deactivated successfully. Aug 13 00:18:23.334334 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 00:18:23.335621 systemd-logind[1993]: Session 4 logged out. Waiting for processes to exit. Aug 13 00:18:23.337334 systemd-logind[1993]: Removed session 4. Aug 13 00:18:23.362881 systemd[1]: Started sshd@4-172.31.31.36:22-139.178.89.65:44306.service - OpenSSH per-connection server daemon (139.178.89.65:44306). Aug 13 00:18:23.543176 sshd[2289]: Accepted publickey for core from 139.178.89.65 port 44306 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:18:23.545781 sshd[2289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:18:23.553543 systemd-logind[1993]: New session 5 of user core. Aug 13 00:18:23.562754 systemd[1]: Started session-5.scope - Session 5 of User core. 
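The ntpd thread also closes out in this stretch: the earlier bind(24) "Cannot assign requested address" failures happened before eth0 had its IPv6 link-local address, and once systemd-networkd logged "Gained IPv6LL" the daemon can "Listen normally" on fe80::417:57ff:fe11:2b23 (the %2 is the interface index, i.e. eth0 per the networkd line). A Go sketch of the same zone-scoped bind; illustrative only, since port 123 requires privileges:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Link-local IPv6 needs a zone (the interface) in the literal; before
        // the address is assigned, binding fails with EADDRNOTAVAIL, i.e.
        // "Cannot assign requested address" as in the earlier ntpd lines.
        addr, err := net.ResolveUDPAddr("udp6", "[fe80::417:57ff:fe11:2b23%eth0]:123")
        if err != nil {
            panic(err)
        }
        conn, err := net.ListenUDP("udp6", addr)
        if err != nil {
            fmt.Println("bind failed:", err)
            return
        }
        defer conn.Close()
        fmt.Println("listening on", conn.LocalAddr())
    }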
Aug 13 00:18:23.680678 sshd[2289]: pam_unix(sshd:session): session closed for user core Aug 13 00:18:23.686873 systemd[1]: sshd@4-172.31.31.36:22-139.178.89.65:44306.service: Deactivated successfully. Aug 13 00:18:23.690183 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 00:18:23.693056 systemd-logind[1993]: Session 5 logged out. Waiting for processes to exit. Aug 13 00:18:23.694793 systemd-logind[1993]: Removed session 5. Aug 13 00:18:23.722002 systemd[1]: Started sshd@5-172.31.31.36:22-139.178.89.65:44320.service - OpenSSH per-connection server daemon (139.178.89.65:44320). Aug 13 00:18:23.890358 sshd[2296]: Accepted publickey for core from 139.178.89.65 port 44320 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:18:23.892957 sshd[2296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:18:23.901502 systemd-logind[1993]: New session 6 of user core. Aug 13 00:18:23.910731 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 13 00:18:24.036895 sshd[2296]: pam_unix(sshd:session): session closed for user core Aug 13 00:18:24.043858 systemd[1]: sshd@5-172.31.31.36:22-139.178.89.65:44320.service: Deactivated successfully. Aug 13 00:18:24.048049 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 00:18:24.049362 systemd-logind[1993]: Session 6 logged out. Waiting for processes to exit. Aug 13 00:18:24.051204 systemd-logind[1993]: Removed session 6. Aug 13 00:18:24.080944 systemd[1]: Started sshd@6-172.31.31.36:22-139.178.89.65:44334.service - OpenSSH per-connection server daemon (139.178.89.65:44334). Aug 13 00:18:24.243800 sshd[2303]: Accepted publickey for core from 139.178.89.65 port 44334 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:18:24.246529 sshd[2303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:18:24.253985 systemd-logind[1993]: New session 7 of user core. Aug 13 00:18:24.263730 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 13 00:18:24.382331 sudo[2306]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 00:18:24.383038 sudo[2306]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:18:24.401037 sudo[2306]: pam_unix(sudo:session): session closed for user root Aug 13 00:18:24.424436 sshd[2303]: pam_unix(sshd:session): session closed for user core Aug 13 00:18:24.431550 systemd[1]: sshd@6-172.31.31.36:22-139.178.89.65:44334.service: Deactivated successfully. Aug 13 00:18:24.435650 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 00:18:24.437206 systemd-logind[1993]: Session 7 logged out. Waiting for processes to exit. Aug 13 00:18:24.439138 systemd-logind[1993]: Removed session 7. Aug 13 00:18:24.460983 systemd[1]: Started sshd@7-172.31.31.36:22-139.178.89.65:44342.service - OpenSSH per-connection server daemon (139.178.89.65:44342). Aug 13 00:18:24.632166 sshd[2311]: Accepted publickey for core from 139.178.89.65 port 44342 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:18:24.634881 sshd[2311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:18:24.642862 systemd-logind[1993]: New session 8 of user core. Aug 13 00:18:24.650753 systemd[1]: Started session-8.scope - Session 8 of User core. 
Aug 13 00:18:24.754942 sudo[2315]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 00:18:24.755646 sudo[2315]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:18:24.761834 sudo[2315]: pam_unix(sudo:session): session closed for user root Aug 13 00:18:24.772426 sudo[2314]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 13 00:18:24.773784 sudo[2314]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:18:24.795075 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Aug 13 00:18:24.813354 auditctl[2318]: No rules Aug 13 00:18:24.814193 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 00:18:24.816522 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Aug 13 00:18:24.826200 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 13 00:18:24.876776 augenrules[2336]: No rules Aug 13 00:18:24.880557 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 13 00:18:24.882661 sudo[2314]: pam_unix(sudo:session): session closed for user root Aug 13 00:18:24.905865 sshd[2311]: pam_unix(sshd:session): session closed for user core Aug 13 00:18:24.911304 systemd[1]: sshd@7-172.31.31.36:22-139.178.89.65:44342.service: Deactivated successfully. Aug 13 00:18:24.914314 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 00:18:24.918042 systemd-logind[1993]: Session 8 logged out. Waiting for processes to exit. Aug 13 00:18:24.920347 systemd-logind[1993]: Removed session 8. Aug 13 00:18:24.940349 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 00:18:24.949873 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:18:24.953105 systemd[1]: Started sshd@8-172.31.31.36:22-139.178.89.65:44356.service - OpenSSH per-connection server daemon (139.178.89.65:44356). Aug 13 00:18:25.133723 sshd[2345]: Accepted publickey for core from 139.178.89.65 port 44356 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:18:25.137154 sshd[2345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:18:25.148655 systemd-logind[1993]: New session 9 of user core. Aug 13 00:18:25.159800 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 13 00:18:25.269051 sudo[2352]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 00:18:25.269837 sudo[2352]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:18:25.288816 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:18:25.299046 (kubelet)[2360]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:18:25.386907 kubelet[2360]: E0813 00:18:25.386821 2360 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:18:25.395923 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:18:25.396260 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
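The kubelet failures above, and the crash-loop continuing below (restart counters 2 and 3 follow), all trace to a single condition: /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-style node that file is written during kubeadm init/join, so the unit is expected to fail and be restarted by systemd until provisioning reaches that step. The gate is no more than a file check, sketched here:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        const path = "/var/lib/kubelet/config.yaml"
        if _, err := os.Stat(path); err != nil {
            // Mirrors the run.go error above: open ...: no such file or directory.
            fmt.Printf("failed to load kubelet config file, path: %s, error: %v\n", path, err)
            os.Exit(1) // systemd records status=1/FAILURE and schedules a restart
        }
        fmt.Println("kubelet config present at", path)
    }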
Aug 13 00:18:25.804886 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 13 00:18:25.805056 (dockerd)[2377]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 13 00:18:26.223499 dockerd[2377]: time="2025-08-13T00:18:26.223067778Z" level=info msg="Starting up" Aug 13 00:18:26.347957 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport415893209-merged.mount: Deactivated successfully. Aug 13 00:18:26.396131 dockerd[2377]: time="2025-08-13T00:18:26.395509273Z" level=info msg="Loading containers: start." Aug 13 00:18:26.558542 kernel: Initializing XFRM netlink socket Aug 13 00:18:26.594222 (udev-worker)[2401]: Network interface NamePolicy= disabled on kernel command line. Aug 13 00:18:26.686682 systemd-networkd[1936]: docker0: Link UP Aug 13 00:18:26.713230 dockerd[2377]: time="2025-08-13T00:18:26.713067219Z" level=info msg="Loading containers: done." Aug 13 00:18:26.751740 dockerd[2377]: time="2025-08-13T00:18:26.751436330Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 00:18:26.751740 dockerd[2377]: time="2025-08-13T00:18:26.751629975Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Aug 13 00:18:26.752623 dockerd[2377]: time="2025-08-13T00:18:26.752192708Z" level=info msg="Daemon has completed initialization" Aug 13 00:18:26.820101 dockerd[2377]: time="2025-08-13T00:18:26.819823275Z" level=info msg="API listen on /run/docker.sock" Aug 13 00:18:26.820724 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 13 00:18:27.342589 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck558268067-merged.mount: Deactivated successfully. Aug 13 00:18:27.918280 containerd[2020]: time="2025-08-13T00:18:27.918200765Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\"" Aug 13 00:18:28.583497 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount597720752.mount: Deactivated successfully. 
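dockerd completes initialization above and announces "API listen on /run/docker.sock". A minimal liveness probe against that socket using the Docker Engine Go SDK (github.com/docker/docker/client), as a sketch; FromEnv falls back to the default unix socket when DOCKER_HOST is unset:

    package main

    import (
        "context"
        "fmt"

        "github.com/docker/docker/client"
    )

    func main() {
        // Negotiate the API version rather than pinning one, since the
        // daemon above reports engine version 26.1.0.
        cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
        if err != nil {
            panic(err)
        }
        defer cli.Close()

        ping, err := cli.Ping(context.Background())
        if err != nil {
            panic(err)
        }
        fmt.Println("docker API version:", ping.APIVersion)
    }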
Aug 13 00:18:30.063845 containerd[2020]: time="2025-08-13T00:18:30.063755545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:30.066037 containerd[2020]: time="2025-08-13T00:18:30.065975142Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.7: active requests=0, bytes read=26327781" Aug 13 00:18:30.068513 containerd[2020]: time="2025-08-13T00:18:30.068373089Z" level=info msg="ImageCreate event name:\"sha256:edd0d4592f9097d398a2366cf9c2a86f488742a75ee0a73ebbee00f654b8bb3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:30.086611 containerd[2020]: time="2025-08-13T00:18:30.085976874Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:30.087889 containerd[2020]: time="2025-08-13T00:18:30.087523403Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.7\" with image id \"sha256:edd0d4592f9097d398a2366cf9c2a86f488742a75ee0a73ebbee00f654b8bb3b\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\", size \"26324581\" in 2.169248826s" Aug 13 00:18:30.087889 containerd[2020]: time="2025-08-13T00:18:30.087591453Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\" returns image reference \"sha256:edd0d4592f9097d398a2366cf9c2a86f488742a75ee0a73ebbee00f654b8bb3b\"" Aug 13 00:18:30.088880 containerd[2020]: time="2025-08-13T00:18:30.088838769Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\"" Aug 13 00:18:31.473159 containerd[2020]: time="2025-08-13T00:18:31.473076275Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:31.475999 containerd[2020]: time="2025-08-13T00:18:31.475907242Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.7: active requests=0, bytes read=22529696" Aug 13 00:18:31.477654 containerd[2020]: time="2025-08-13T00:18:31.477569821Z" level=info msg="ImageCreate event name:\"sha256:d53e0248330cfa27e6cbb5684905015074d9e59688c339b16207055c6d07a103\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:31.485560 containerd[2020]: time="2025-08-13T00:18:31.485492436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:31.487763 containerd[2020]: time="2025-08-13T00:18:31.487318897Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.7\" with image id \"sha256:d53e0248330cfa27e6cbb5684905015074d9e59688c339b16207055c6d07a103\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\", size \"24065486\" in 1.398260706s" Aug 13 00:18:31.487763 containerd[2020]: time="2025-08-13T00:18:31.487382841Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\" returns image reference \"sha256:d53e0248330cfa27e6cbb5684905015074d9e59688c339b16207055c6d07a103\"" Aug 13 00:18:31.488298 
containerd[2020]: time="2025-08-13T00:18:31.488232842Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\"" Aug 13 00:18:32.689122 containerd[2020]: time="2025-08-13T00:18:32.688684240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:32.690831 containerd[2020]: time="2025-08-13T00:18:32.690776441Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.7: active requests=0, bytes read=17484138" Aug 13 00:18:32.691505 containerd[2020]: time="2025-08-13T00:18:32.691268976Z" level=info msg="ImageCreate event name:\"sha256:15a3296b1f1ad53bca0584492c05a9be73d836d12ccacb182daab897cbe9ac1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:32.696942 containerd[2020]: time="2025-08-13T00:18:32.696889548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:32.699888 containerd[2020]: time="2025-08-13T00:18:32.699247947Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.7\" with image id \"sha256:15a3296b1f1ad53bca0584492c05a9be73d836d12ccacb182daab897cbe9ac1e\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\", size \"19019946\" in 1.210953899s" Aug 13 00:18:32.699888 containerd[2020]: time="2025-08-13T00:18:32.699308565Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\" returns image reference \"sha256:15a3296b1f1ad53bca0584492c05a9be73d836d12ccacb182daab897cbe9ac1e\"" Aug 13 00:18:32.700641 containerd[2020]: time="2025-08-13T00:18:32.700355741Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\"" Aug 13 00:18:33.961222 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount328568666.mount: Deactivated successfully. 
Aug 13 00:18:34.536236 containerd[2020]: time="2025-08-13T00:18:34.534836740Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:34.537116 containerd[2020]: time="2025-08-13T00:18:34.537063096Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.7: active requests=0, bytes read=27378405" Aug 13 00:18:34.538837 containerd[2020]: time="2025-08-13T00:18:34.538759400Z" level=info msg="ImageCreate event name:\"sha256:176e5fd5af03be683be55601db94020ad4cc275f4cca27999608d3cf65c9fb11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:34.542513 containerd[2020]: time="2025-08-13T00:18:34.542380217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:34.544226 containerd[2020]: time="2025-08-13T00:18:34.544013826Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.7\" with image id \"sha256:176e5fd5af03be683be55601db94020ad4cc275f4cca27999608d3cf65c9fb11\", repo tag \"registry.k8s.io/kube-proxy:v1.32.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\", size \"27377424\" in 1.843600745s" Aug 13 00:18:34.544226 containerd[2020]: time="2025-08-13T00:18:34.544074612Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\" returns image reference \"sha256:176e5fd5af03be683be55601db94020ad4cc275f4cca27999608d3cf65c9fb11\"" Aug 13 00:18:34.546013 containerd[2020]: time="2025-08-13T00:18:34.545944739Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 00:18:35.048884 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2801208012.mount: Deactivated successfully. Aug 13 00:18:35.594817 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 13 00:18:35.609888 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:18:35.975871 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:18:35.983632 (kubelet)[2647]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:18:36.080084 kubelet[2647]: E0813 00:18:36.079952 2647 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:18:36.085556 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:18:36.085952 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Aug 13 00:18:36.458697 containerd[2020]: time="2025-08-13T00:18:36.458634071Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:36.461671 containerd[2020]: time="2025-08-13T00:18:36.461613744Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Aug 13 00:18:36.463502 containerd[2020]: time="2025-08-13T00:18:36.463421884Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:36.471764 containerd[2020]: time="2025-08-13T00:18:36.471683753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:36.473926 containerd[2020]: time="2025-08-13T00:18:36.473696330Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.927680372s" Aug 13 00:18:36.473926 containerd[2020]: time="2025-08-13T00:18:36.473760671Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Aug 13 00:18:36.474484 containerd[2020]: time="2025-08-13T00:18:36.474395380Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 00:18:36.962414 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1530065536.mount: Deactivated successfully. 
Aug 13 00:18:36.976424 containerd[2020]: time="2025-08-13T00:18:36.976336817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:36.978244 containerd[2020]: time="2025-08-13T00:18:36.978175379Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Aug 13 00:18:36.980854 containerd[2020]: time="2025-08-13T00:18:36.980781186Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:36.985777 containerd[2020]: time="2025-08-13T00:18:36.985682635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:36.988070 containerd[2020]: time="2025-08-13T00:18:36.987378723Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 512.918331ms" Aug 13 00:18:36.988070 containerd[2020]: time="2025-08-13T00:18:36.987443687Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Aug 13 00:18:36.989121 containerd[2020]: time="2025-08-13T00:18:36.988796139Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Aug 13 00:18:37.563572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4112616681.mount: Deactivated successfully. Aug 13 00:18:39.901836 containerd[2020]: time="2025-08-13T00:18:39.901771059Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:39.931363 containerd[2020]: time="2025-08-13T00:18:39.930902141Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812469" Aug 13 00:18:39.978320 containerd[2020]: time="2025-08-13T00:18:39.978228233Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:40.039064 containerd[2020]: time="2025-08-13T00:18:40.038950691Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:40.041976 containerd[2020]: time="2025-08-13T00:18:40.041881152Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.053030746s" Aug 13 00:18:40.042387 containerd[2020]: time="2025-08-13T00:18:40.042177664Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Aug 13 00:18:40.658025 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
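The pull timings in this stretch give a rough bandwidth picture: etcd's 67,812,469 bytes arrive in about 3.05 s, roughly 22 MB/s, while the 268,703-byte pause image still takes ~0.51 s (about 0.5 MB/s effective), so small images are dominated by per-pull registry round-trips rather than transfer rate.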
Aug 13 00:18:46.095423 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Aug 13 00:18:46.101854 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:18:46.471959 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:18:46.487002 (kubelet)[2744]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:18:46.563492 kubelet[2744]: E0813 00:18:46.561107 2744 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:18:46.565782 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:18:46.566109 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:18:48.986551 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:18:49.003986 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:18:49.060654 systemd[1]: Reloading requested from client PID 2758 ('systemctl') (unit session-9.scope)... Aug 13 00:18:49.060681 systemd[1]: Reloading... Aug 13 00:18:49.326509 zram_generator::config[2810]: No configuration found. Aug 13 00:18:49.548931 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:18:49.727848 systemd[1]: Reloading finished in 666 ms. Aug 13 00:18:49.824744 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 00:18:49.824937 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 13 00:18:49.825523 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:18:49.835075 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:18:50.165811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:18:50.170403 (kubelet)[2862]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:18:50.251544 kubelet[2862]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:18:50.251544 kubelet[2862]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 00:18:50.251544 kubelet[2862]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
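All three deprecation warnings above point at the same migration: these flags should move into the KubeletConfiguration file named by --config. As a hedged sketch only (the field name is from the kubelet.config.k8s.io/v1beta1 schema; the endpoint value is an assumption matching the containerd socket on this host), the runtime-endpoint flag would become:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # replaces the deprecated --container-runtime-endpoint flag
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock

(--pod-infra-container-image has no equivalent field to migrate to; per the message above, from 1.35 the image garbage collector takes the sandbox image from the CRI instead.)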
Aug 13 00:18:50.252106 kubelet[2862]: I0813 00:18:50.251665 2862 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:18:51.146907 kubelet[2862]: I0813 00:18:51.146858 2862 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 13 00:18:51.147125 kubelet[2862]: I0813 00:18:51.147103 2862 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:18:51.147711 kubelet[2862]: I0813 00:18:51.147685 2862 server.go:954] "Client rotation is on, will bootstrap in background" Aug 13 00:18:51.190074 kubelet[2862]: E0813 00:18:51.190005 2862 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.31.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.31.36:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:18:51.192209 kubelet[2862]: I0813 00:18:51.192171 2862 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:18:51.207242 kubelet[2862]: E0813 00:18:51.207169 2862 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:18:51.207242 kubelet[2862]: I0813 00:18:51.207241 2862 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:18:51.212675 kubelet[2862]: I0813 00:18:51.212622 2862 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 00:18:51.215681 kubelet[2862]: I0813 00:18:51.215598 2862 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:18:51.216010 kubelet[2862]: I0813 00:18:51.215668 2862 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-31-36","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 00:18:51.216199 kubelet[2862]: I0813 00:18:51.216155 2862 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:18:51.216199 kubelet[2862]: I0813 00:18:51.216179 2862 container_manager_linux.go:304] "Creating device plugin manager" Aug 13 00:18:51.216582 kubelet[2862]: I0813 00:18:51.216537 2862 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:18:51.222627 kubelet[2862]: I0813 00:18:51.222444 2862 kubelet.go:446] "Attempting to sync node with API server" Aug 13 00:18:51.222627 kubelet[2862]: I0813 00:18:51.222507 2862 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:18:51.222627 kubelet[2862]: I0813 00:18:51.222544 2862 kubelet.go:352] "Adding apiserver pod source" Aug 13 00:18:51.222627 kubelet[2862]: I0813 00:18:51.222565 2862 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:18:51.224858 kubelet[2862]: W0813 00:18:51.224628 2862 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.31.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-36&limit=500&resourceVersion=0": dial tcp 172.31.31.36:6443: connect: connection refused Aug 13 00:18:51.225289 kubelet[2862]: E0813 00:18:51.225237 2862 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.31.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-36&limit=500&resourceVersion=0\": dial tcp 172.31.31.36:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:18:51.229513 kubelet[2862]: I0813 
00:18:51.227961 2862 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 00:18:51.229513 kubelet[2862]: I0813 00:18:51.229011 2862 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:18:51.229513 kubelet[2862]: W0813 00:18:51.229236 2862 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 00:18:51.231090 kubelet[2862]: I0813 00:18:51.231037 2862 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 00:18:51.231225 kubelet[2862]: I0813 00:18:51.231101 2862 server.go:1287] "Started kubelet" Aug 13 00:18:51.231349 kubelet[2862]: W0813 00:18:51.231303 2862 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.31.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.31.36:6443: connect: connection refused Aug 13 00:18:51.231418 kubelet[2862]: E0813 00:18:51.231366 2862 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.31.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.36:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:18:51.245428 kubelet[2862]: I0813 00:18:51.245365 2862 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:18:51.248711 kubelet[2862]: E0813 00:18:51.248236 2862 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.31.36:6443/api/v1/namespaces/default/events\": dial tcp 172.31.31.36:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-31-36.185b2b8548174a39 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-36,UID:ip-172-31-31-36,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-31-36,},FirstTimestamp:2025-08-13 00:18:51.231070777 +0000 UTC m=+1.054283739,LastTimestamp:2025-08-13 00:18:51.231070777 +0000 UTC m=+1.054283739,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-36,}" Aug 13 00:18:51.251909 kubelet[2862]: I0813 00:18:51.251838 2862 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:18:51.254157 kubelet[2862]: I0813 00:18:51.254112 2862 server.go:479] "Adding debug handlers to kubelet server" Aug 13 00:18:51.257108 kubelet[2862]: I0813 00:18:51.257007 2862 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:18:51.257665 kubelet[2862]: I0813 00:18:51.257635 2862 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:18:51.258331 kubelet[2862]: I0813 00:18:51.258296 2862 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:18:51.258629 kubelet[2862]: I0813 00:18:51.258327 2862 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 00:18:51.258797 kubelet[2862]: E0813 00:18:51.258755 2862 kubelet_node_status.go:466] "Error getting the current node 
from lister" err="node \"ip-172-31-31-36\" not found" Aug 13 00:18:51.259888 kubelet[2862]: I0813 00:18:51.258358 2862 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 00:18:51.260137 kubelet[2862]: I0813 00:18:51.260116 2862 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:18:51.261595 kubelet[2862]: W0813 00:18:51.261523 2862 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.31.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.36:6443: connect: connection refused Aug 13 00:18:51.261838 kubelet[2862]: E0813 00:18:51.261803 2862 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.31.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.31.36:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:18:51.262531 kubelet[2862]: E0813 00:18:51.262363 2862 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-36?timeout=10s\": dial tcp 172.31.31.36:6443: connect: connection refused" interval="200ms" Aug 13 00:18:51.264775 kubelet[2862]: I0813 00:18:51.264121 2862 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:18:51.266440 kubelet[2862]: E0813 00:18:51.266175 2862 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:18:51.268537 kubelet[2862]: I0813 00:18:51.267235 2862 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:18:51.268537 kubelet[2862]: I0813 00:18:51.267270 2862 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:18:51.294833 kubelet[2862]: I0813 00:18:51.294735 2862 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:18:51.303228 kubelet[2862]: I0813 00:18:51.303163 2862 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 00:18:51.303228 kubelet[2862]: I0813 00:18:51.303220 2862 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 13 00:18:51.303422 kubelet[2862]: I0813 00:18:51.303254 2862 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Aug 13 00:18:51.303422 kubelet[2862]: I0813 00:18:51.303269 2862 kubelet.go:2382] "Starting kubelet main sync loop" Aug 13 00:18:51.303422 kubelet[2862]: E0813 00:18:51.303340 2862 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:18:51.306353 kubelet[2862]: W0813 00:18:51.306255 2862 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.31.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.36:6443: connect: connection refused Aug 13 00:18:51.306710 kubelet[2862]: E0813 00:18:51.306643 2862 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.31.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.31.36:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:18:51.316287 kubelet[2862]: I0813 00:18:51.316141 2862 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 00:18:51.316515 kubelet[2862]: I0813 00:18:51.316473 2862 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 00:18:51.316622 kubelet[2862]: I0813 00:18:51.316605 2862 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:18:51.323535 kubelet[2862]: I0813 00:18:51.323498 2862 policy_none.go:49] "None policy: Start" Aug 13 00:18:51.323743 kubelet[2862]: I0813 00:18:51.323721 2862 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 00:18:51.323861 kubelet[2862]: I0813 00:18:51.323842 2862 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:18:51.335782 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 13 00:18:51.352335 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 13 00:18:51.358794 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 13 00:18:51.359242 kubelet[2862]: E0813 00:18:51.359018 2862 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-31-36\" not found" Aug 13 00:18:51.373846 kubelet[2862]: I0813 00:18:51.373526 2862 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:18:51.374015 kubelet[2862]: I0813 00:18:51.373994 2862 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:18:51.375495 kubelet[2862]: I0813 00:18:51.374242 2862 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:18:51.375495 kubelet[2862]: I0813 00:18:51.375253 2862 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:18:51.378234 kubelet[2862]: E0813 00:18:51.378197 2862 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 13 00:18:51.378673 kubelet[2862]: E0813 00:18:51.378521 2862 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-31-36\" not found" Aug 13 00:18:51.427383 systemd[1]: Created slice kubepods-burstable-poda6c1a1b4fcf6eae0b9a8cd054776bb8f.slice - libcontainer container kubepods-burstable-poda6c1a1b4fcf6eae0b9a8cd054776bb8f.slice. 
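[Note: the "Created slice" lines are the systemd cgroup driver at work. With "CgroupDriver":"systemd" and "CgroupVersion":2 from the node config above, kubelet delegates the pod hierarchy to systemd as kubepods.slice with burstable/besteffort children, and each static pod gets its own kubepods-burstable-pod<uid>.slice. A sketch of how to walk the tree once the node is up, assuming a standard systemd install:

    systemd-cgls /kubepods.slice                               # pod cgroup hierarchy
    systemctl show kubepods-burstable.slice -p CPUWeight -p MemoryMax]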
Aug 13 00:18:51.454205 kubelet[2862]: E0813 00:18:51.453778 2862 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-36\" not found" node="ip-172-31-31-36" Aug 13 00:18:51.455315 systemd[1]: Created slice kubepods-burstable-pod8135257ac4037cde7970ab2c446eecba.slice - libcontainer container kubepods-burstable-pod8135257ac4037cde7970ab2c446eecba.slice. Aug 13 00:18:51.460034 kubelet[2862]: E0813 00:18:51.459658 2862 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-36\" not found" node="ip-172-31-31-36" Aug 13 00:18:51.463198 kubelet[2862]: E0813 00:18:51.463130 2862 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-36?timeout=10s\": dial tcp 172.31.31.36:6443: connect: connection refused" interval="400ms" Aug 13 00:18:51.464518 systemd[1]: Created slice kubepods-burstable-podb5c93fcaded1a881d5ad06dcece2f1b4.slice - libcontainer container kubepods-burstable-podb5c93fcaded1a881d5ad06dcece2f1b4.slice. Aug 13 00:18:51.468535 kubelet[2862]: E0813 00:18:51.468432 2862 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-36\" not found" node="ip-172-31-31-36" Aug 13 00:18:51.478367 kubelet[2862]: I0813 00:18:51.478276 2862 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-36" Aug 13 00:18:51.479267 kubelet[2862]: E0813 00:18:51.479215 2862 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.31.36:6443/api/v1/nodes\": dial tcp 172.31.31.36:6443: connect: connection refused" node="ip-172-31-31-36" Aug 13 00:18:51.560816 kubelet[2862]: I0813 00:18:51.560706 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b5c93fcaded1a881d5ad06dcece2f1b4-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-36\" (UID: \"b5c93fcaded1a881d5ad06dcece2f1b4\") " pod="kube-system/kube-controller-manager-ip-172-31-31-36" Aug 13 00:18:51.560816 kubelet[2862]: I0813 00:18:51.560769 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b5c93fcaded1a881d5ad06dcece2f1b4-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-36\" (UID: \"b5c93fcaded1a881d5ad06dcece2f1b4\") " pod="kube-system/kube-controller-manager-ip-172-31-31-36" Aug 13 00:18:51.560816 kubelet[2862]: I0813 00:18:51.560840 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b5c93fcaded1a881d5ad06dcece2f1b4-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-36\" (UID: \"b5c93fcaded1a881d5ad06dcece2f1b4\") " pod="kube-system/kube-controller-manager-ip-172-31-31-36" Aug 13 00:18:51.561073 kubelet[2862]: I0813 00:18:51.560879 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b5c93fcaded1a881d5ad06dcece2f1b4-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-36\" (UID: \"b5c93fcaded1a881d5ad06dcece2f1b4\") " pod="kube-system/kube-controller-manager-ip-172-31-31-36" Aug 13 00:18:51.561073 kubelet[2862]: I0813 
00:18:51.560924 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8135257ac4037cde7970ab2c446eecba-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-36\" (UID: \"8135257ac4037cde7970ab2c446eecba\") " pod="kube-system/kube-scheduler-ip-172-31-31-36" Aug 13 00:18:51.561073 kubelet[2862]: I0813 00:18:51.560960 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a6c1a1b4fcf6eae0b9a8cd054776bb8f-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-36\" (UID: \"a6c1a1b4fcf6eae0b9a8cd054776bb8f\") " pod="kube-system/kube-apiserver-ip-172-31-31-36" Aug 13 00:18:51.561073 kubelet[2862]: I0813 00:18:51.560995 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a6c1a1b4fcf6eae0b9a8cd054776bb8f-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-36\" (UID: \"a6c1a1b4fcf6eae0b9a8cd054776bb8f\") " pod="kube-system/kube-apiserver-ip-172-31-31-36" Aug 13 00:18:51.561073 kubelet[2862]: I0813 00:18:51.561031 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b5c93fcaded1a881d5ad06dcece2f1b4-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-36\" (UID: \"b5c93fcaded1a881d5ad06dcece2f1b4\") " pod="kube-system/kube-controller-manager-ip-172-31-31-36" Aug 13 00:18:51.561340 kubelet[2862]: I0813 00:18:51.561065 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a6c1a1b4fcf6eae0b9a8cd054776bb8f-ca-certs\") pod \"kube-apiserver-ip-172-31-31-36\" (UID: \"a6c1a1b4fcf6eae0b9a8cd054776bb8f\") " pod="kube-system/kube-apiserver-ip-172-31-31-36" Aug 13 00:18:51.681953 kubelet[2862]: I0813 00:18:51.681716 2862 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-36" Aug 13 00:18:51.682543 kubelet[2862]: E0813 00:18:51.682372 2862 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.31.36:6443/api/v1/nodes\": dial tcp 172.31.31.36:6443: connect: connection refused" node="ip-172-31-31-36" Aug 13 00:18:51.755977 containerd[2020]: time="2025-08-13T00:18:51.755919962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-36,Uid:a6c1a1b4fcf6eae0b9a8cd054776bb8f,Namespace:kube-system,Attempt:0,}" Aug 13 00:18:51.761786 containerd[2020]: time="2025-08-13T00:18:51.761702304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-36,Uid:8135257ac4037cde7970ab2c446eecba,Namespace:kube-system,Attempt:0,}" Aug 13 00:18:51.770819 containerd[2020]: time="2025-08-13T00:18:51.770597129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-36,Uid:b5c93fcaded1a881d5ad06dcece2f1b4,Namespace:kube-system,Attempt:0,}" Aug 13 00:18:51.863903 kubelet[2862]: E0813 00:18:51.863819 2862 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-36?timeout=10s\": dial tcp 172.31.31.36:6443: connect: connection refused" interval="800ms" Aug 13 00:18:52.085767 kubelet[2862]: I0813 00:18:52.085207 2862 kubelet_node_status.go:75] 
"Attempting to register node" node="ip-172-31-31-36" Aug 13 00:18:52.085767 kubelet[2862]: E0813 00:18:52.085718 2862 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.31.36:6443/api/v1/nodes\": dial tcp 172.31.31.36:6443: connect: connection refused" node="ip-172-31-31-36" Aug 13 00:18:52.213071 kubelet[2862]: W0813 00:18:52.213019 2862 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.31.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.36:6443: connect: connection refused Aug 13 00:18:52.213250 kubelet[2862]: E0813 00:18:52.213095 2862 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.31.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.31.36:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:18:52.262396 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1171339509.mount: Deactivated successfully. Aug 13 00:18:52.277779 containerd[2020]: time="2025-08-13T00:18:52.277694386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:18:52.279941 containerd[2020]: time="2025-08-13T00:18:52.279867340Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:18:52.281931 containerd[2020]: time="2025-08-13T00:18:52.281834752Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Aug 13 00:18:52.283918 containerd[2020]: time="2025-08-13T00:18:52.283865879Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 00:18:52.286102 containerd[2020]: time="2025-08-13T00:18:52.286050779Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:18:52.289103 containerd[2020]: time="2025-08-13T00:18:52.288886860Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:18:52.290722 containerd[2020]: time="2025-08-13T00:18:52.290621919Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 00:18:52.294911 containerd[2020]: time="2025-08-13T00:18:52.294823407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:18:52.299908 containerd[2020]: time="2025-08-13T00:18:52.299158209Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 528.434141ms" Aug 13 
00:18:52.304846 containerd[2020]: time="2025-08-13T00:18:52.304382704Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 548.346776ms" Aug 13 00:18:52.311256 containerd[2020]: time="2025-08-13T00:18:52.311174318Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 549.162968ms" Aug 13 00:18:52.464634 kubelet[2862]: W0813 00:18:52.464426 2862 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.31.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.36:6443: connect: connection refused Aug 13 00:18:52.464634 kubelet[2862]: E0813 00:18:52.464568 2862 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.31.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.31.36:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:18:52.497814 kubelet[2862]: W0813 00:18:52.497295 2862 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.31.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.31.36:6443: connect: connection refused Aug 13 00:18:52.497814 kubelet[2862]: E0813 00:18:52.497707 2862 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.31.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.36:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:18:52.530540 containerd[2020]: time="2025-08-13T00:18:52.529957821Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:18:52.530540 containerd[2020]: time="2025-08-13T00:18:52.530066151Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:18:52.530540 containerd[2020]: time="2025-08-13T00:18:52.530093309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:18:52.530540 containerd[2020]: time="2025-08-13T00:18:52.530283964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:18:52.533561 containerd[2020]: time="2025-08-13T00:18:52.532536638Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:18:52.533561 containerd[2020]: time="2025-08-13T00:18:52.532631821Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:18:52.533561 containerd[2020]: time="2025-08-13T00:18:52.532657274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:18:52.533561 containerd[2020]: time="2025-08-13T00:18:52.532799737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:18:52.536622 containerd[2020]: time="2025-08-13T00:18:52.535835010Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:18:52.539516 containerd[2020]: time="2025-08-13T00:18:52.539341398Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:18:52.539516 containerd[2020]: time="2025-08-13T00:18:52.539407443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:18:52.541097 containerd[2020]: time="2025-08-13T00:18:52.540704512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:18:52.584822 systemd[1]: Started cri-containerd-4c88937279f2761889470d0c695544068c4f0a23fe5d27425630fe5a792f8e2b.scope - libcontainer container 4c88937279f2761889470d0c695544068c4f0a23fe5d27425630fe5a792f8e2b. Aug 13 00:18:52.588864 systemd[1]: Started cri-containerd-6e49144cc71a649638660f72915de845f481ff16542cfc88820ec321d1058c3b.scope - libcontainer container 6e49144cc71a649638660f72915de845f481ff16542cfc88820ec321d1058c3b. Aug 13 00:18:52.607502 systemd[1]: Started cri-containerd-a40ff0fb44bd3e4e1db7648838c5d364068cec127a3e57956dbee3558b5b84f0.scope - libcontainer container a40ff0fb44bd3e4e1db7648838c5d364068cec127a3e57956dbee3558b5b84f0. 
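[Note: each RunPodSandbox spawns one containerd-shim-runc-v2, and each shim logs the same four "loading plugin" lines (event publisher, shutdown, ttrpc.task, ttrpc.pause), which is why that group appears three times, once per control-plane sandbox. The cri-containerd-<id>.scope names embed the sandbox IDs, so scopes can be mapped back to pods later, e.g. (assuming crictl is configured for containerd):

    crictl pods --name kube-apiserver-ip-172-31-31-36
    crictl inspectp 4c88937279f2      # a prefix of the sandbox id is usually enough]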
Aug 13 00:18:52.634929 kubelet[2862]: W0813 00:18:52.634253 2862 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.31.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-36&limit=500&resourceVersion=0": dial tcp 172.31.31.36:6443: connect: connection refused Aug 13 00:18:52.634929 kubelet[2862]: E0813 00:18:52.634349 2862 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.31.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-36&limit=500&resourceVersion=0\": dial tcp 172.31.31.36:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:18:52.672271 kubelet[2862]: E0813 00:18:52.671361 2862 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-36?timeout=10s\": dial tcp 172.31.31.36:6443: connect: connection refused" interval="1.6s" Aug 13 00:18:52.704227 containerd[2020]: time="2025-08-13T00:18:52.703964514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-36,Uid:a6c1a1b4fcf6eae0b9a8cd054776bb8f,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c88937279f2761889470d0c695544068c4f0a23fe5d27425630fe5a792f8e2b\"" Aug 13 00:18:52.718965 containerd[2020]: time="2025-08-13T00:18:52.718542835Z" level=info msg="CreateContainer within sandbox \"4c88937279f2761889470d0c695544068c4f0a23fe5d27425630fe5a792f8e2b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 00:18:52.737441 containerd[2020]: time="2025-08-13T00:18:52.737369804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-36,Uid:b5c93fcaded1a881d5ad06dcece2f1b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"a40ff0fb44bd3e4e1db7648838c5d364068cec127a3e57956dbee3558b5b84f0\"" Aug 13 00:18:52.748569 containerd[2020]: time="2025-08-13T00:18:52.748516511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-36,Uid:8135257ac4037cde7970ab2c446eecba,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e49144cc71a649638660f72915de845f481ff16542cfc88820ec321d1058c3b\"" Aug 13 00:18:52.749161 containerd[2020]: time="2025-08-13T00:18:52.748922122Z" level=info msg="CreateContainer within sandbox \"a40ff0fb44bd3e4e1db7648838c5d364068cec127a3e57956dbee3558b5b84f0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 00:18:52.757188 containerd[2020]: time="2025-08-13T00:18:52.757122256Z" level=info msg="CreateContainer within sandbox \"6e49144cc71a649638660f72915de845f481ff16542cfc88820ec321d1058c3b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 00:18:52.763211 containerd[2020]: time="2025-08-13T00:18:52.763113562Z" level=info msg="CreateContainer within sandbox \"4c88937279f2761889470d0c695544068c4f0a23fe5d27425630fe5a792f8e2b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"71736b57032498a569ae8420a52be485bd03b1c6e38d2ce3cefca448575ff6fa\"" Aug 13 00:18:52.765523 containerd[2020]: time="2025-08-13T00:18:52.764137134Z" level=info msg="StartContainer for \"71736b57032498a569ae8420a52be485bd03b1c6e38d2ce3cefca448575ff6fa\"" Aug 13 00:18:52.797104 containerd[2020]: time="2025-08-13T00:18:52.797003283Z" level=info msg="CreateContainer within sandbox 
\"6e49144cc71a649638660f72915de845f481ff16542cfc88820ec321d1058c3b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"355ee3411dc2f2ad86d025e85d1dd6ad192a2a53c1b362b63e4b525463e3a99e\"" Aug 13 00:18:52.797941 containerd[2020]: time="2025-08-13T00:18:52.797875471Z" level=info msg="StartContainer for \"355ee3411dc2f2ad86d025e85d1dd6ad192a2a53c1b362b63e4b525463e3a99e\"" Aug 13 00:18:52.801336 containerd[2020]: time="2025-08-13T00:18:52.801256408Z" level=info msg="CreateContainer within sandbox \"a40ff0fb44bd3e4e1db7648838c5d364068cec127a3e57956dbee3558b5b84f0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1b90c1c9247a120895fed52a05a0b9e128c5ae386e0a60c93fd3a94d1f1d2356\"" Aug 13 00:18:52.802187 containerd[2020]: time="2025-08-13T00:18:52.802121657Z" level=info msg="StartContainer for \"1b90c1c9247a120895fed52a05a0b9e128c5ae386e0a60c93fd3a94d1f1d2356\"" Aug 13 00:18:52.816811 systemd[1]: Started cri-containerd-71736b57032498a569ae8420a52be485bd03b1c6e38d2ce3cefca448575ff6fa.scope - libcontainer container 71736b57032498a569ae8420a52be485bd03b1c6e38d2ce3cefca448575ff6fa. Aug 13 00:18:52.890396 systemd[1]: Started cri-containerd-1b90c1c9247a120895fed52a05a0b9e128c5ae386e0a60c93fd3a94d1f1d2356.scope - libcontainer container 1b90c1c9247a120895fed52a05a0b9e128c5ae386e0a60c93fd3a94d1f1d2356. Aug 13 00:18:52.891583 kubelet[2862]: I0813 00:18:52.891533 2862 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-36" Aug 13 00:18:52.894701 kubelet[2862]: E0813 00:18:52.893674 2862 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.31.36:6443/api/v1/nodes\": dial tcp 172.31.31.36:6443: connect: connection refused" node="ip-172-31-31-36" Aug 13 00:18:52.903906 systemd[1]: Started cri-containerd-355ee3411dc2f2ad86d025e85d1dd6ad192a2a53c1b362b63e4b525463e3a99e.scope - libcontainer container 355ee3411dc2f2ad86d025e85d1dd6ad192a2a53c1b362b63e4b525463e3a99e. 
Aug 13 00:18:52.953126 containerd[2020]: time="2025-08-13T00:18:52.953029430Z" level=info msg="StartContainer for \"71736b57032498a569ae8420a52be485bd03b1c6e38d2ce3cefca448575ff6fa\" returns successfully" Aug 13 00:18:53.011528 containerd[2020]: time="2025-08-13T00:18:53.010628876Z" level=info msg="StartContainer for \"1b90c1c9247a120895fed52a05a0b9e128c5ae386e0a60c93fd3a94d1f1d2356\" returns successfully" Aug 13 00:18:53.058998 containerd[2020]: time="2025-08-13T00:18:53.058843298Z" level=info msg="StartContainer for \"355ee3411dc2f2ad86d025e85d1dd6ad192a2a53c1b362b63e4b525463e3a99e\" returns successfully" Aug 13 00:18:53.330553 kubelet[2862]: E0813 00:18:53.330170 2862 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-36\" not found" node="ip-172-31-31-36" Aug 13 00:18:53.338218 kubelet[2862]: E0813 00:18:53.337867 2862 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-36\" not found" node="ip-172-31-31-36" Aug 13 00:18:53.345386 kubelet[2862]: E0813 00:18:53.345184 2862 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-36\" not found" node="ip-172-31-31-36" Aug 13 00:18:54.347477 kubelet[2862]: E0813 00:18:54.346221 2862 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-36\" not found" node="ip-172-31-31-36" Aug 13 00:18:54.350591 kubelet[2862]: E0813 00:18:54.348924 2862 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-36\" not found" node="ip-172-31-31-36" Aug 13 00:18:54.350591 kubelet[2862]: E0813 00:18:54.349481 2862 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-36\" not found" node="ip-172-31-31-36" Aug 13 00:18:54.500003 kubelet[2862]: I0813 00:18:54.498746 2862 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-36" Aug 13 00:18:54.593598 update_engine[1995]: I20250813 00:18:54.593498 1995 update_attempter.cc:509] Updating boot flags... 
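[Note: the "No need to create a mirror pod" errors are the flip side of static pods. The three control-plane pods run straight from /etc/kubernetes/manifests (the static pod path added earlier), and kubelet only creates their API-side mirror pods once the node object exists. On disk they are plain manifests; the file names below are the kubeadm defaults, assumed rather than shown in this log:

    ls /etc/kubernetes/manifests/
    # kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
    # (plus etcd.yaml on clusters running local etcd; none appears in this log)]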
Aug 13 00:18:54.721562 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3156) Aug 13 00:18:55.348860 kubelet[2862]: E0813 00:18:55.348811 2862 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-36\" not found" node="ip-172-31-31-36" Aug 13 00:18:57.136507 kubelet[2862]: E0813 00:18:57.136417 2862 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-31-36\" not found" node="ip-172-31-31-36" Aug 13 00:18:57.170865 kubelet[2862]: E0813 00:18:57.170480 2862 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-31-36.185b2b8548174a39 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-36,UID:ip-172-31-31-36,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-31-36,},FirstTimestamp:2025-08-13 00:18:51.231070777 +0000 UTC m=+1.054283739,LastTimestamp:2025-08-13 00:18:51.231070777 +0000 UTC m=+1.054283739,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-36,}" Aug 13 00:18:57.198863 kubelet[2862]: I0813 00:18:57.198133 2862 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-31-36" Aug 13 00:18:57.198863 kubelet[2862]: E0813 00:18:57.198192 2862 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-31-36\": node \"ip-172-31-31-36\" not found" Aug 13 00:18:57.230496 kubelet[2862]: I0813 00:18:57.230241 2862 apiserver.go:52] "Watching apiserver" Aug 13 00:18:57.259706 kubelet[2862]: I0813 00:18:57.259199 2862 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-31-36" Aug 13 00:18:57.260375 kubelet[2862]: I0813 00:18:57.260324 2862 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 00:18:57.363897 kubelet[2862]: E0813 00:18:57.363835 2862 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-31-36\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-31-36" Aug 13 00:18:57.363897 kubelet[2862]: I0813 00:18:57.363886 2862 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-31-36" Aug 13 00:18:57.393198 kubelet[2862]: E0813 00:18:57.393048 2862 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-31-36\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-31-36" Aug 13 00:18:57.393198 kubelet[2862]: I0813 00:18:57.393095 2862 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-31-36" Aug 13 00:18:57.432105 kubelet[2862]: E0813 00:18:57.432034 2862 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-31-36\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-31-36" Aug 13 00:18:59.683694 systemd[1]: Reloading requested from client PID 3242 ('systemctl') (unit session-9.scope)... Aug 13 00:18:59.684256 systemd[1]: Reloading... 
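[Note: the "forbidden: no PriorityClass with name system-node-critical" failures are also transient. The built-in priority classes are installed by the apiserver's own bootstrap logic shortly after it starts serving, and the mirror pods get created on a later sync; they clearly exist by the time the restarted kubelet reports "already exists" further down. To confirm the built-ins once the API is reachable:

    kubectl get priorityclass system-node-critical system-cluster-critical]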
Aug 13 00:18:59.984638 zram_generator::config[3285]: No configuration found. Aug 13 00:19:00.266346 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:19:00.501873 systemd[1]: Reloading finished in 816 ms. Aug 13 00:19:00.589704 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:19:00.603448 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:19:00.603951 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:19:00.604019 systemd[1]: kubelet.service: Consumed 1.851s CPU time, 130.2M memory peak, 0B memory swap peak. Aug 13 00:19:00.615934 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:19:00.996789 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:19:01.007074 (kubelet)[3342]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:19:01.106939 kubelet[3342]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:19:01.106939 kubelet[3342]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 00:19:01.106939 kubelet[3342]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:19:01.106939 kubelet[3342]: I0813 00:19:01.106837 3342 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:19:01.136051 kubelet[3342]: I0813 00:19:01.135977 3342 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 13 00:19:01.136051 kubelet[3342]: I0813 00:19:01.136030 3342 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:19:01.138354 kubelet[3342]: I0813 00:19:01.138294 3342 server.go:954] "Client rotation is on, will bootstrap in background" Aug 13 00:19:01.145515 kubelet[3342]: I0813 00:19:01.143857 3342 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 13 00:19:01.154536 kubelet[3342]: I0813 00:19:01.154496 3342 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:19:01.164944 kubelet[3342]: E0813 00:19:01.164891 3342 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:19:01.165207 kubelet[3342]: I0813 00:19:01.165176 3342 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:19:01.182960 kubelet[3342]: I0813 00:19:01.182726 3342 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 00:19:01.183284 kubelet[3342]: I0813 00:19:01.183210 3342 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:19:01.185302 kubelet[3342]: I0813 00:19:01.183279 3342 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-31-36","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 00:19:01.185302 kubelet[3342]: I0813 00:19:01.183802 3342 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:19:01.185302 kubelet[3342]: I0813 00:19:01.183828 3342 container_manager_linux.go:304] "Creating device plugin manager" Aug 13 00:19:01.185302 kubelet[3342]: I0813 00:19:01.183920 3342 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:19:01.185302 kubelet[3342]: I0813 00:19:01.184170 3342 kubelet.go:446] "Attempting to sync node with API server" Aug 13 00:19:01.185795 kubelet[3342]: I0813 00:19:01.184214 3342 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:19:01.185795 kubelet[3342]: I0813 00:19:01.184249 3342 kubelet.go:352] "Adding apiserver pod source" Aug 13 00:19:01.185795 kubelet[3342]: I0813 00:19:01.184269 3342 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:19:01.194501 kubelet[3342]: I0813 00:19:01.192256 3342 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 00:19:01.194740 kubelet[3342]: I0813 00:19:01.194708 3342 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:19:01.195728 kubelet[3342]: I0813 00:19:01.195689 3342 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 00:19:01.195955 kubelet[3342]: I0813 00:19:01.195933 3342 server.go:1287] "Started kubelet" Aug 13 00:19:01.206242 kubelet[3342]: I0813 00:19:01.206198 3342 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:19:01.217780 kubelet[3342]: I0813 00:19:01.217723 3342 server.go:169] "Starting to 
listen" address="0.0.0.0" port=10250 Aug 13 00:19:01.228734 kubelet[3342]: I0813 00:19:01.228686 3342 server.go:479] "Adding debug handlers to kubelet server" Aug 13 00:19:01.241501 kubelet[3342]: I0813 00:19:01.229583 3342 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:19:01.242078 kubelet[3342]: I0813 00:19:01.241804 3342 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:19:01.242078 kubelet[3342]: I0813 00:19:01.230545 3342 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 00:19:01.242078 kubelet[3342]: I0813 00:19:01.230493 3342 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:19:01.242265 kubelet[3342]: I0813 00:19:01.230563 3342 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 00:19:01.242341 kubelet[3342]: I0813 00:19:01.242329 3342 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:19:01.242399 kubelet[3342]: E0813 00:19:01.230735 3342 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-31-36\" not found" Aug 13 00:19:01.270232 kubelet[3342]: I0813 00:19:01.268324 3342 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:19:01.270232 kubelet[3342]: I0813 00:19:01.268527 3342 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:19:01.283769 kubelet[3342]: E0813 00:19:01.283727 3342 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:19:01.291576 kubelet[3342]: I0813 00:19:01.291538 3342 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:19:01.299711 kubelet[3342]: I0813 00:19:01.299495 3342 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:19:01.309859 kubelet[3342]: I0813 00:19:01.309783 3342 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 00:19:01.310013 kubelet[3342]: I0813 00:19:01.309875 3342 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 13 00:19:01.310013 kubelet[3342]: I0813 00:19:01.309948 3342 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Aug 13 00:19:01.310013 kubelet[3342]: I0813 00:19:01.309962 3342 kubelet.go:2382] "Starting kubelet main sync loop" Aug 13 00:19:01.310176 kubelet[3342]: E0813 00:19:01.310031 3342 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:19:01.410172 kubelet[3342]: E0813 00:19:01.410127 3342 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 13 00:19:01.438276 kubelet[3342]: I0813 00:19:01.437444 3342 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 00:19:01.438276 kubelet[3342]: I0813 00:19:01.437540 3342 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 00:19:01.438276 kubelet[3342]: I0813 00:19:01.437575 3342 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:19:01.438276 kubelet[3342]: I0813 00:19:01.437860 3342 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 00:19:01.438276 kubelet[3342]: I0813 00:19:01.437881 3342 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 00:19:01.438276 kubelet[3342]: I0813 00:19:01.437922 3342 policy_none.go:49] "None policy: Start" Aug 13 00:19:01.438276 kubelet[3342]: I0813 00:19:01.437939 3342 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 00:19:01.438276 kubelet[3342]: I0813 00:19:01.437959 3342 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:19:01.438276 kubelet[3342]: I0813 00:19:01.438136 3342 state_mem.go:75] "Updated machine memory state" Aug 13 00:19:01.449014 kubelet[3342]: I0813 00:19:01.448907 3342 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:19:01.449916 kubelet[3342]: I0813 00:19:01.449190 3342 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:19:01.449916 kubelet[3342]: I0813 00:19:01.449224 3342 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:19:01.451100 kubelet[3342]: I0813 00:19:01.451056 3342 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:19:01.462288 kubelet[3342]: E0813 00:19:01.459844 3342 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Aug 13 00:19:01.569988 kubelet[3342]: I0813 00:19:01.569514 3342 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-36" Aug 13 00:19:01.587956 kubelet[3342]: I0813 00:19:01.587134 3342 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-31-36" Aug 13 00:19:01.587956 kubelet[3342]: I0813 00:19:01.587254 3342 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-31-36" Aug 13 00:19:01.611691 kubelet[3342]: I0813 00:19:01.611196 3342 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-31-36" Aug 13 00:19:01.611691 kubelet[3342]: I0813 00:19:01.611314 3342 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-31-36" Aug 13 00:19:01.611691 kubelet[3342]: I0813 00:19:01.611660 3342 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-31-36" Aug 13 00:19:01.644276 kubelet[3342]: I0813 00:19:01.644215 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b5c93fcaded1a881d5ad06dcece2f1b4-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-36\" (UID: \"b5c93fcaded1a881d5ad06dcece2f1b4\") " pod="kube-system/kube-controller-manager-ip-172-31-31-36" Aug 13 00:19:01.644436 kubelet[3342]: I0813 00:19:01.644335 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8135257ac4037cde7970ab2c446eecba-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-36\" (UID: \"8135257ac4037cde7970ab2c446eecba\") " pod="kube-system/kube-scheduler-ip-172-31-31-36" Aug 13 00:19:01.645324 kubelet[3342]: I0813 00:19:01.644451 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a6c1a1b4fcf6eae0b9a8cd054776bb8f-ca-certs\") pod \"kube-apiserver-ip-172-31-31-36\" (UID: \"a6c1a1b4fcf6eae0b9a8cd054776bb8f\") " pod="kube-system/kube-apiserver-ip-172-31-31-36" Aug 13 00:19:01.645324 kubelet[3342]: I0813 00:19:01.644645 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a6c1a1b4fcf6eae0b9a8cd054776bb8f-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-36\" (UID: \"a6c1a1b4fcf6eae0b9a8cd054776bb8f\") " pod="kube-system/kube-apiserver-ip-172-31-31-36" Aug 13 00:19:01.645324 kubelet[3342]: I0813 00:19:01.644748 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a6c1a1b4fcf6eae0b9a8cd054776bb8f-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-36\" (UID: \"a6c1a1b4fcf6eae0b9a8cd054776bb8f\") " pod="kube-system/kube-apiserver-ip-172-31-31-36" Aug 13 00:19:01.645324 kubelet[3342]: I0813 00:19:01.644913 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b5c93fcaded1a881d5ad06dcece2f1b4-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-36\" (UID: \"b5c93fcaded1a881d5ad06dcece2f1b4\") " pod="kube-system/kube-controller-manager-ip-172-31-31-36" Aug 13 00:19:01.645324 kubelet[3342]: I0813 00:19:01.645028 3342 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b5c93fcaded1a881d5ad06dcece2f1b4-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-36\" (UID: \"b5c93fcaded1a881d5ad06dcece2f1b4\") " pod="kube-system/kube-controller-manager-ip-172-31-31-36" Aug 13 00:19:01.645674 kubelet[3342]: I0813 00:19:01.645133 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b5c93fcaded1a881d5ad06dcece2f1b4-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-36\" (UID: \"b5c93fcaded1a881d5ad06dcece2f1b4\") " pod="kube-system/kube-controller-manager-ip-172-31-31-36" Aug 13 00:19:01.645674 kubelet[3342]: I0813 00:19:01.645246 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b5c93fcaded1a881d5ad06dcece2f1b4-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-36\" (UID: \"b5c93fcaded1a881d5ad06dcece2f1b4\") " pod="kube-system/kube-controller-manager-ip-172-31-31-36" Aug 13 00:19:02.201219 kubelet[3342]: I0813 00:19:02.201160 3342 apiserver.go:52] "Watching apiserver" Aug 13 00:19:02.242502 kubelet[3342]: I0813 00:19:02.242428 3342 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 00:19:02.404752 kubelet[3342]: I0813 00:19:02.404700 3342 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-31-36" Aug 13 00:19:02.430082 kubelet[3342]: E0813 00:19:02.429735 3342 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-31-36\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-31-36" Aug 13 00:19:02.463366 kubelet[3342]: I0813 00:19:02.463160 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-31-36" podStartSLOduration=1.46313599 podStartE2EDuration="1.46313599s" podCreationTimestamp="2025-08-13 00:19:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:19:02.445032634 +0000 UTC m=+1.427545640" watchObservedRunningTime="2025-08-13 00:19:02.46313599 +0000 UTC m=+1.445648984" Aug 13 00:19:02.485844 kubelet[3342]: I0813 00:19:02.485473 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-31-36" podStartSLOduration=1.4854037660000001 podStartE2EDuration="1.485403766s" podCreationTimestamp="2025-08-13 00:19:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:19:02.464670934 +0000 UTC m=+1.447183964" watchObservedRunningTime="2025-08-13 00:19:02.485403766 +0000 UTC m=+1.467916772" Aug 13 00:19:02.510411 kubelet[3342]: I0813 00:19:02.510167 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-31-36" podStartSLOduration=1.51014341 podStartE2EDuration="1.51014341s" podCreationTimestamp="2025-08-13 00:19:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:19:02.487542622 +0000 UTC m=+1.470055652" watchObservedRunningTime="2025-08-13 00:19:02.51014341 +0000 UTC 
m=+1.492656416" Aug 13 00:19:06.725761 kubelet[3342]: I0813 00:19:06.725694 3342 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 00:19:06.726634 containerd[2020]: time="2025-08-13T00:19:06.726524415Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 00:19:06.727438 kubelet[3342]: I0813 00:19:06.727140 3342 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 00:19:07.748412 systemd[1]: Created slice kubepods-besteffort-pod46222fb2_b551_440d_b78b_672c18982dc6.slice - libcontainer container kubepods-besteffort-pod46222fb2_b551_440d_b78b_672c18982dc6.slice. Aug 13 00:19:07.786278 kubelet[3342]: I0813 00:19:07.786208 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/46222fb2-b551-440d-b78b-672c18982dc6-xtables-lock\") pod \"kube-proxy-nztdx\" (UID: \"46222fb2-b551-440d-b78b-672c18982dc6\") " pod="kube-system/kube-proxy-nztdx" Aug 13 00:19:07.786880 kubelet[3342]: I0813 00:19:07.786283 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46222fb2-b551-440d-b78b-672c18982dc6-lib-modules\") pod \"kube-proxy-nztdx\" (UID: \"46222fb2-b551-440d-b78b-672c18982dc6\") " pod="kube-system/kube-proxy-nztdx" Aug 13 00:19:07.786880 kubelet[3342]: I0813 00:19:07.786325 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/46222fb2-b551-440d-b78b-672c18982dc6-kube-proxy\") pod \"kube-proxy-nztdx\" (UID: \"46222fb2-b551-440d-b78b-672c18982dc6\") " pod="kube-system/kube-proxy-nztdx" Aug 13 00:19:07.786880 kubelet[3342]: I0813 00:19:07.786359 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnt6l\" (UniqueName: \"kubernetes.io/projected/46222fb2-b551-440d-b78b-672c18982dc6-kube-api-access-vnt6l\") pod \"kube-proxy-nztdx\" (UID: \"46222fb2-b551-440d-b78b-672c18982dc6\") " pod="kube-system/kube-proxy-nztdx" Aug 13 00:19:07.988159 kubelet[3342]: I0813 00:19:07.988103 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grdc7\" (UniqueName: \"kubernetes.io/projected/15c55c70-51de-427f-a71b-7ced83a2b08b-kube-api-access-grdc7\") pod \"tigera-operator-747864d56d-88hvl\" (UID: \"15c55c70-51de-427f-a71b-7ced83a2b08b\") " pod="tigera-operator/tigera-operator-747864d56d-88hvl" Aug 13 00:19:07.988982 kubelet[3342]: I0813 00:19:07.988606 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/15c55c70-51de-427f-a71b-7ced83a2b08b-var-lib-calico\") pod \"tigera-operator-747864d56d-88hvl\" (UID: \"15c55c70-51de-427f-a71b-7ced83a2b08b\") " pod="tigera-operator/tigera-operator-747864d56d-88hvl" Aug 13 00:19:08.000888 systemd[1]: Created slice kubepods-besteffort-pod15c55c70_51de_427f_a71b_7ced83a2b08b.slice - libcontainer container kubepods-besteffort-pod15c55c70_51de_427f_a71b_7ced83a2b08b.slice. 
Aug 13 00:19:08.061650 containerd[2020]: time="2025-08-13T00:19:08.061553294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nztdx,Uid:46222fb2-b551-440d-b78b-672c18982dc6,Namespace:kube-system,Attempt:0,}"
Aug 13 00:19:08.107368 containerd[2020]: time="2025-08-13T00:19:08.106954634Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:19:08.107368 containerd[2020]: time="2025-08-13T00:19:08.107061122Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:19:08.107368 containerd[2020]: time="2025-08-13T00:19:08.107116802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:19:08.107368 containerd[2020]: time="2025-08-13T00:19:08.107295170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:19:08.155788 systemd[1]: Started cri-containerd-b7f3213ceb0fbcf27d6f6189bbe173cf81255a4c6b3fbc137c1445540c9651bc.scope - libcontainer container b7f3213ceb0fbcf27d6f6189bbe173cf81255a4c6b3fbc137c1445540c9651bc.
Aug 13 00:19:08.194889 containerd[2020]: time="2025-08-13T00:19:08.194799819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nztdx,Uid:46222fb2-b551-440d-b78b-672c18982dc6,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7f3213ceb0fbcf27d6f6189bbe173cf81255a4c6b3fbc137c1445540c9651bc\""
Aug 13 00:19:08.204036 containerd[2020]: time="2025-08-13T00:19:08.203830287Z" level=info msg="CreateContainer within sandbox \"b7f3213ceb0fbcf27d6f6189bbe173cf81255a4c6b3fbc137c1445540c9651bc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Aug 13 00:19:08.236179 containerd[2020]: time="2025-08-13T00:19:08.236041839Z" level=info msg="CreateContainer within sandbox \"b7f3213ceb0fbcf27d6f6189bbe173cf81255a4c6b3fbc137c1445540c9651bc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"41235f152e9f967ee1f7cd7cea3fb314783e00576769cd2b187c5e9b0773495c\""
Aug 13 00:19:08.237166 containerd[2020]: time="2025-08-13T00:19:08.237102459Z" level=info msg="StartContainer for \"41235f152e9f967ee1f7cd7cea3fb314783e00576769cd2b187c5e9b0773495c\""
Aug 13 00:19:08.283874 systemd[1]: Started cri-containerd-41235f152e9f967ee1f7cd7cea3fb314783e00576769cd2b187c5e9b0773495c.scope - libcontainer container 41235f152e9f967ee1f7cd7cea3fb314783e00576769cd2b187c5e9b0773495c.
Aug 13 00:19:08.312008 containerd[2020]: time="2025-08-13T00:19:08.311383923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-88hvl,Uid:15c55c70-51de-427f-a71b-7ced83a2b08b,Namespace:tigera-operator,Attempt:0,}"
Aug 13 00:19:08.343443 containerd[2020]: time="2025-08-13T00:19:08.343251219Z" level=info msg="StartContainer for \"41235f152e9f967ee1f7cd7cea3fb314783e00576769cd2b187c5e9b0773495c\" returns successfully"
Aug 13 00:19:08.382581 containerd[2020]: time="2025-08-13T00:19:08.381875728Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:19:08.382581 containerd[2020]: time="2025-08-13T00:19:08.381985744Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:19:08.382581 containerd[2020]: time="2025-08-13T00:19:08.382014976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:19:08.382581 containerd[2020]: time="2025-08-13T00:19:08.382172776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:19:08.454646 kubelet[3342]: I0813 00:19:08.453952 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nztdx" podStartSLOduration=1.453924952 podStartE2EDuration="1.453924952s" podCreationTimestamp="2025-08-13 00:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:19:08.452924692 +0000 UTC m=+7.435437710" watchObservedRunningTime="2025-08-13 00:19:08.453924952 +0000 UTC m=+7.436437946"
Aug 13 00:19:08.469144 systemd[1]: Started cri-containerd-31fac44eaf62ab5d64327501d0e38373a3777805199ca4a12c0368f3ad068400.scope - libcontainer container 31fac44eaf62ab5d64327501d0e38373a3777805199ca4a12c0368f3ad068400.
Aug 13 00:19:08.549269 containerd[2020]: time="2025-08-13T00:19:08.548313316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-88hvl,Uid:15c55c70-51de-427f-a71b-7ced83a2b08b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"31fac44eaf62ab5d64327501d0e38373a3777805199ca4a12c0368f3ad068400\""
Aug 13 00:19:08.557229 containerd[2020]: time="2025-08-13T00:19:08.557162525Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\""
Aug 13 00:19:10.072526 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3586239752.mount: Deactivated successfully.
Aug 13 00:19:11.407151 containerd[2020]: time="2025-08-13T00:19:11.407041951Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:19:11.413704 containerd[2020]: time="2025-08-13T00:19:11.413633155Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610"
Aug 13 00:19:11.416496 containerd[2020]: time="2025-08-13T00:19:11.416374435Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:19:11.431659 containerd[2020]: time="2025-08-13T00:19:11.431544523Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:19:11.432627 containerd[2020]: time="2025-08-13T00:19:11.432559267Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 2.87532689s"
Aug 13 00:19:11.432790 containerd[2020]: time="2025-08-13T00:19:11.432625831Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\""
Aug 13 00:19:11.437887 containerd[2020]: time="2025-08-13T00:19:11.437785027Z" level=info msg="CreateContainer within sandbox \"31fac44eaf62ab5d64327501d0e38373a3777805199ca4a12c0368f3ad068400\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Aug 13 00:19:11.473570 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1935346466.mount: Deactivated successfully.
Aug 13 00:19:11.482501 containerd[2020]: time="2025-08-13T00:19:11.480833731Z" level=info msg="CreateContainer within sandbox \"31fac44eaf62ab5d64327501d0e38373a3777805199ca4a12c0368f3ad068400\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f3e9916a5110d1d348ff295f47a2afac5933998c7aa9410e214aa387349d3cf0\""
Aug 13 00:19:11.482501 containerd[2020]: time="2025-08-13T00:19:11.481767235Z" level=info msg="StartContainer for \"f3e9916a5110d1d348ff295f47a2afac5933998c7aa9410e214aa387349d3cf0\""
Aug 13 00:19:11.551765 systemd[1]: Started cri-containerd-f3e9916a5110d1d348ff295f47a2afac5933998c7aa9410e214aa387349d3cf0.scope - libcontainer container f3e9916a5110d1d348ff295f47a2afac5933998c7aa9410e214aa387349d3cf0.
Aug 13 00:19:11.601059 containerd[2020]: time="2025-08-13T00:19:11.600917396Z" level=info msg="StartContainer for \"f3e9916a5110d1d348ff295f47a2afac5933998c7aa9410e214aa387349d3cf0\" returns successfully"
Aug 13 00:19:19.978362 sudo[2352]: pam_unix(sudo:session): session closed for user root
Aug 13 00:19:20.005003 sshd[2345]: pam_unix(sshd:session): session closed for user core
Aug 13 00:19:20.014129 systemd[1]: sshd@8-172.31.31.36:22-139.178.89.65:44356.service: Deactivated successfully.
Aug 13 00:19:20.023188 systemd[1]: session-9.scope: Deactivated successfully.
Aug 13 00:19:20.023550 systemd[1]: session-9.scope: Consumed 12.447s CPU time, 155.9M memory peak, 0B memory swap peak.
Aug 13 00:19:20.026866 systemd-logind[1993]: Session 9 logged out. Waiting for processes to exit.
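For scale: the operator pull recorded above reports bytes read=22150610 and a size of "22146605" fetched in 2.87532689s, which works out to roughly 22,146,605 B / 2.875 s = 7.7 MB/s. The interval also matches the gap between the PullImage request at 00:19:08.557 and the Pulled event at 00:19:11.432.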
Aug 13 00:19:20.032016 systemd-logind[1993]: Removed session 9.
Aug 13 00:19:31.565996 kubelet[3342]: I0813 00:19:31.565751 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-88hvl" podStartSLOduration=21.684402948 podStartE2EDuration="24.565727979s" podCreationTimestamp="2025-08-13 00:19:07 +0000 UTC" firstStartedPulling="2025-08-13 00:19:08.553409632 +0000 UTC m=+7.535922638" lastFinishedPulling="2025-08-13 00:19:11.434734675 +0000 UTC m=+10.417247669" observedRunningTime="2025-08-13 00:19:12.457140188 +0000 UTC m=+11.439653182" watchObservedRunningTime="2025-08-13 00:19:31.565727979 +0000 UTC m=+30.548240973"
Aug 13 00:19:31.585907 systemd[1]: Created slice kubepods-besteffort-pod3e331953_47bd_4126_9caa_35344997f7b5.slice - libcontainer container kubepods-besteffort-pod3e331953_47bd_4126_9caa_35344997f7b5.slice.
Aug 13 00:19:31.661926 kubelet[3342]: I0813 00:19:31.661687 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e331953-47bd-4126-9caa-35344997f7b5-tigera-ca-bundle\") pod \"calico-typha-88c5dcf69-lxljg\" (UID: \"3e331953-47bd-4126-9caa-35344997f7b5\") " pod="calico-system/calico-typha-88c5dcf69-lxljg"
Aug 13 00:19:31.661926 kubelet[3342]: I0813 00:19:31.661843 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3e331953-47bd-4126-9caa-35344997f7b5-typha-certs\") pod \"calico-typha-88c5dcf69-lxljg\" (UID: \"3e331953-47bd-4126-9caa-35344997f7b5\") " pod="calico-system/calico-typha-88c5dcf69-lxljg"
Aug 13 00:19:31.662281 kubelet[3342]: I0813 00:19:31.662169 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qccbd\" (UniqueName: \"kubernetes.io/projected/3e331953-47bd-4126-9caa-35344997f7b5-kube-api-access-qccbd\") pod \"calico-typha-88c5dcf69-lxljg\" (UID: \"3e331953-47bd-4126-9caa-35344997f7b5\") " pod="calico-system/calico-typha-88c5dcf69-lxljg"
Aug 13 00:19:31.897769 containerd[2020]: time="2025-08-13T00:19:31.897699412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-88c5dcf69-lxljg,Uid:3e331953-47bd-4126-9caa-35344997f7b5,Namespace:calico-system,Attempt:0,}"
Aug 13 00:19:31.970513 containerd[2020]: time="2025-08-13T00:19:31.970267193Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:19:31.970513 containerd[2020]: time="2025-08-13T00:19:31.970388177Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:19:31.970513 containerd[2020]: time="2025-08-13T00:19:31.970426865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:19:31.972832 containerd[2020]: time="2025-08-13T00:19:31.972717149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:19:32.042020 systemd[1]: Started cri-containerd-288daa0b65a8b1d56ea85d719cf89f247d37040bc6e6735f0c99628793daee6d.scope - libcontainer container 288daa0b65a8b1d56ea85d719cf89f247d37040bc6e6735f0c99628793daee6d.
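Reading long bursts in this journal, such as the FlexVolume probe errors further below, is easier once each record is split into fields. A small sketch of a parser for the record shape used throughout this log (our own tooling, not part of Flatcar or Kubernetes):

```go
// Sketch: split a journal record from this log into timestamp, unit, PID,
// and message. The field layout is inferred from the lines above/below.
package main

import (
	"fmt"
	"regexp"
)

var record = regexp.MustCompile(`^(\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d+) ([\w-]+)\[(\d+)\]: (.*)$`)

func main() {
	line := "Aug 13 00:19:20.023188 systemd[1]: session-9.scope: Deactivated successfully."
	if m := record.FindStringSubmatch(line); m != nil {
		fmt.Printf("time=%s unit=%s pid=%s msg=%q\n", m[1], m[2], m[3], m[4])
	}
}
```

For kubelet records, the message itself carries a second klog header (severity + timestamp + PID + file:line) ahead of the structured key=value payload.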
Aug 13 00:19:32.067100 systemd[1]: Created slice kubepods-besteffort-podaecea146_6ee8_45e4_861b_731c27205813.slice - libcontainer container kubepods-besteffort-podaecea146_6ee8_45e4_861b_731c27205813.slice.
Aug 13 00:19:32.166857 kubelet[3342]: I0813 00:19:32.166432 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/aecea146-6ee8-45e4-861b-731c27205813-cni-log-dir\") pod \"calico-node-96q5g\" (UID: \"aecea146-6ee8-45e4-861b-731c27205813\") " pod="calico-system/calico-node-96q5g"
Aug 13 00:19:32.167015 kubelet[3342]: I0813 00:19:32.166804 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aecea146-6ee8-45e4-861b-731c27205813-tigera-ca-bundle\") pod \"calico-node-96q5g\" (UID: \"aecea146-6ee8-45e4-861b-731c27205813\") " pod="calico-system/calico-node-96q5g"
Aug 13 00:19:32.167209 kubelet[3342]: I0813 00:19:32.167166 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/aecea146-6ee8-45e4-861b-731c27205813-var-run-calico\") pod \"calico-node-96q5g\" (UID: \"aecea146-6ee8-45e4-861b-731c27205813\") " pod="calico-system/calico-node-96q5g"
Aug 13 00:19:32.167417 kubelet[3342]: I0813 00:19:32.167367 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/aecea146-6ee8-45e4-861b-731c27205813-flexvol-driver-host\") pod \"calico-node-96q5g\" (UID: \"aecea146-6ee8-45e4-861b-731c27205813\") " pod="calico-system/calico-node-96q5g"
Aug 13 00:19:32.167543 kubelet[3342]: I0813 00:19:32.167521 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aecea146-6ee8-45e4-861b-731c27205813-xtables-lock\") pod \"calico-node-96q5g\" (UID: \"aecea146-6ee8-45e4-861b-731c27205813\") " pod="calico-system/calico-node-96q5g"
Aug 13 00:19:32.169148 kubelet[3342]: I0813 00:19:32.168542 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/aecea146-6ee8-45e4-861b-731c27205813-cni-net-dir\") pod \"calico-node-96q5g\" (UID: \"aecea146-6ee8-45e4-861b-731c27205813\") " pod="calico-system/calico-node-96q5g"
Aug 13 00:19:32.169148 kubelet[3342]: I0813 00:19:32.168646 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aecea146-6ee8-45e4-861b-731c27205813-lib-modules\") pod \"calico-node-96q5g\" (UID: \"aecea146-6ee8-45e4-861b-731c27205813\") " pod="calico-system/calico-node-96q5g"
Aug 13 00:19:32.169148 kubelet[3342]: I0813 00:19:32.168727 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/aecea146-6ee8-45e4-861b-731c27205813-var-lib-calico\") pod \"calico-node-96q5g\" (UID: \"aecea146-6ee8-45e4-861b-731c27205813\") " pod="calico-system/calico-node-96q5g"
Aug 13 00:19:32.169148 kubelet[3342]: I0813 00:19:32.168809 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/aecea146-6ee8-45e4-861b-731c27205813-node-certs\") pod \"calico-node-96q5g\" (UID: \"aecea146-6ee8-45e4-861b-731c27205813\") " pod="calico-system/calico-node-96q5g"
Aug 13 00:19:32.169148 kubelet[3342]: I0813 00:19:32.168883 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/aecea146-6ee8-45e4-861b-731c27205813-policysync\") pod \"calico-node-96q5g\" (UID: \"aecea146-6ee8-45e4-861b-731c27205813\") " pod="calico-system/calico-node-96q5g"
Aug 13 00:19:32.169655 kubelet[3342]: I0813 00:19:32.168924 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/aecea146-6ee8-45e4-861b-731c27205813-cni-bin-dir\") pod \"calico-node-96q5g\" (UID: \"aecea146-6ee8-45e4-861b-731c27205813\") " pod="calico-system/calico-node-96q5g"
Aug 13 00:19:32.169655 kubelet[3342]: I0813 00:19:32.168991 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tsl8\" (UniqueName: \"kubernetes.io/projected/aecea146-6ee8-45e4-861b-731c27205813-kube-api-access-5tsl8\") pod \"calico-node-96q5g\" (UID: \"aecea146-6ee8-45e4-861b-731c27205813\") " pod="calico-system/calico-node-96q5g"
Aug 13 00:19:32.245189 containerd[2020]: time="2025-08-13T00:19:32.245122886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-88c5dcf69-lxljg,Uid:3e331953-47bd-4126-9caa-35344997f7b5,Namespace:calico-system,Attempt:0,} returns sandbox id \"288daa0b65a8b1d56ea85d719cf89f247d37040bc6e6735f0c99628793daee6d\""
Aug 13 00:19:32.250937 containerd[2020]: time="2025-08-13T00:19:32.250876010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
Aug 13 00:19:32.283625 kubelet[3342]: E0813 00:19:32.282442 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.283625 kubelet[3342]: W0813 00:19:32.282525 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.283625 kubelet[3342]: E0813 00:19:32.282868 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.285644 kubelet[3342]: E0813 00:19:32.285599 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.285750 kubelet[3342]: W0813 00:19:32.285662 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.285750 kubelet[3342]: E0813 00:19:32.285698 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.311651 kubelet[3342]: E0813 00:19:32.311556 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.311651 kubelet[3342]: W0813 00:19:32.311638 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.311884 kubelet[3342]: E0813 00:19:32.311696 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.322355 kubelet[3342]: E0813 00:19:32.322083 3342 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vh7v8" podUID="7cfa030b-4e22-4159-b794-1031c8aae80f"
Aug 13 00:19:32.350241 kubelet[3342]: E0813 00:19:32.350186 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.350241 kubelet[3342]: W0813 00:19:32.350228 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.350635 kubelet[3342]: E0813 00:19:32.350263 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.351948 kubelet[3342]: E0813 00:19:32.351897 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.351948 kubelet[3342]: W0813 00:19:32.351936 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.352332 kubelet[3342]: E0813 00:19:32.351971 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.352759 kubelet[3342]: E0813 00:19:32.352715 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.352759 kubelet[3342]: W0813 00:19:32.352751 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.352940 kubelet[3342]: E0813 00:19:32.352781 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.354131 kubelet[3342]: E0813 00:19:32.353283 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.354296 kubelet[3342]: W0813 00:19:32.354123 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.354296 kubelet[3342]: E0813 00:19:32.354182 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.354839 kubelet[3342]: E0813 00:19:32.354793 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.354839 kubelet[3342]: W0813 00:19:32.354825 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.355115 kubelet[3342]: E0813 00:19:32.354853 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.356110 kubelet[3342]: E0813 00:19:32.356060 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.356110 kubelet[3342]: W0813 00:19:32.356098 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.356340 kubelet[3342]: E0813 00:19:32.356225 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.358988 kubelet[3342]: E0813 00:19:32.358503 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.358988 kubelet[3342]: W0813 00:19:32.358544 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.358988 kubelet[3342]: E0813 00:19:32.358577 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.360290 kubelet[3342]: E0813 00:19:32.359781 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.360290 kubelet[3342]: W0813 00:19:32.360283 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.360543 kubelet[3342]: E0813 00:19:32.360353 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.362651 kubelet[3342]: E0813 00:19:32.362577 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.362651 kubelet[3342]: W0813 00:19:32.362637 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.362895 kubelet[3342]: E0813 00:19:32.362687 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.363179 kubelet[3342]: E0813 00:19:32.363119 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.363179 kubelet[3342]: W0813 00:19:32.363171 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.363324 kubelet[3342]: E0813 00:19:32.363232 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.363847 kubelet[3342]: E0813 00:19:32.363804 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.363847 kubelet[3342]: W0813 00:19:32.363838 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.364022 kubelet[3342]: E0813 00:19:32.363890 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.364415 kubelet[3342]: E0813 00:19:32.364376 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.364415 kubelet[3342]: W0813 00:19:32.364408 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.364647 kubelet[3342]: E0813 00:19:32.364434 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.365022 kubelet[3342]: E0813 00:19:32.364979 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.365022 kubelet[3342]: W0813 00:19:32.365013 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.365339 kubelet[3342]: E0813 00:19:32.365039 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.367562 kubelet[3342]: E0813 00:19:32.367449 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.367562 kubelet[3342]: W0813 00:19:32.367554 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.367786 kubelet[3342]: E0813 00:19:32.367590 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.369050 kubelet[3342]: E0813 00:19:32.368001 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.369050 kubelet[3342]: W0813 00:19:32.368035 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.369050 kubelet[3342]: E0813 00:19:32.368062 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.371071 kubelet[3342]: E0813 00:19:32.370856 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.371071 kubelet[3342]: W0813 00:19:32.371057 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.371312 kubelet[3342]: E0813 00:19:32.371108 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.372914 kubelet[3342]: E0813 00:19:32.372834 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.372914 kubelet[3342]: W0813 00:19:32.372896 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.373115 kubelet[3342]: E0813 00:19:32.372953 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.374573 kubelet[3342]: E0813 00:19:32.374370 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.374573 kubelet[3342]: W0813 00:19:32.374410 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.374573 kubelet[3342]: E0813 00:19:32.374441 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.376286 kubelet[3342]: E0813 00:19:32.376096 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.376934 kubelet[3342]: W0813 00:19:32.376518 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.376934 kubelet[3342]: E0813 00:19:32.376564 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.377597 kubelet[3342]: E0813 00:19:32.377565 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.378480 kubelet[3342]: W0813 00:19:32.377739 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.378642 kubelet[3342]: E0813 00:19:32.378615 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.380296 kubelet[3342]: E0813 00:19:32.380257 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.380552 kubelet[3342]: W0813 00:19:32.380505 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.380962 kubelet[3342]: E0813 00:19:32.380910 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.381055 kubelet[3342]: I0813 00:19:32.380998 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7cfa030b-4e22-4159-b794-1031c8aae80f-varrun\") pod \"csi-node-driver-vh7v8\" (UID: \"7cfa030b-4e22-4159-b794-1031c8aae80f\") " pod="calico-system/csi-node-driver-vh7v8"
Aug 13 00:19:32.382054 containerd[2020]: time="2025-08-13T00:19:32.381970347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-96q5g,Uid:aecea146-6ee8-45e4-861b-731c27205813,Namespace:calico-system,Attempt:0,}"
Aug 13 00:19:32.384114 kubelet[3342]: E0813 00:19:32.384059 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.384114 kubelet[3342]: W0813 00:19:32.384098 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.385195 kubelet[3342]: E0813 00:19:32.384179 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.385195 kubelet[3342]: I0813 00:19:32.384809 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjwj4\" (UniqueName: \"kubernetes.io/projected/7cfa030b-4e22-4159-b794-1031c8aae80f-kube-api-access-rjwj4\") pod \"csi-node-driver-vh7v8\" (UID: \"7cfa030b-4e22-4159-b794-1031c8aae80f\") " pod="calico-system/csi-node-driver-vh7v8"
Aug 13 00:19:32.386030 kubelet[3342]: E0813 00:19:32.385762 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.386030 kubelet[3342]: W0813 00:19:32.385796 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.386030 kubelet[3342]: E0813 00:19:32.385841 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.387146 kubelet[3342]: E0813 00:19:32.387115 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.387282 kubelet[3342]: W0813 00:19:32.387258 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.387405 kubelet[3342]: E0813 00:19:32.387381 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.395491 kubelet[3342]: E0813 00:19:32.392634 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.395491 kubelet[3342]: W0813 00:19:32.392682 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.395491 kubelet[3342]: E0813 00:19:32.392732 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.395491 kubelet[3342]: I0813 00:19:32.392778 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7cfa030b-4e22-4159-b794-1031c8aae80f-kubelet-dir\") pod \"csi-node-driver-vh7v8\" (UID: \"7cfa030b-4e22-4159-b794-1031c8aae80f\") " pod="calico-system/csi-node-driver-vh7v8"
Aug 13 00:19:32.396658 kubelet[3342]: E0813 00:19:32.395791 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.396658 kubelet[3342]: W0813 00:19:32.395941 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.396658 kubelet[3342]: E0813 00:19:32.396135 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.397130 kubelet[3342]: I0813 00:19:32.396869 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7cfa030b-4e22-4159-b794-1031c8aae80f-registration-dir\") pod \"csi-node-driver-vh7v8\" (UID: \"7cfa030b-4e22-4159-b794-1031c8aae80f\") " pod="calico-system/csi-node-driver-vh7v8"
Aug 13 00:19:32.397513 kubelet[3342]: E0813 00:19:32.397473 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.397513 kubelet[3342]: W0813 00:19:32.397508 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.397737 kubelet[3342]: E0813 00:19:32.397647 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.400491 kubelet[3342]: E0813 00:19:32.399944 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.400491 kubelet[3342]: W0813 00:19:32.399983 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.400491 kubelet[3342]: E0813 00:19:32.400199 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.405039 kubelet[3342]: E0813 00:19:32.404753 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.405039 kubelet[3342]: W0813 00:19:32.404790 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.407292 kubelet[3342]: E0813 00:19:32.407067 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.409484 kubelet[3342]: E0813 00:19:32.408342 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.409484 kubelet[3342]: W0813 00:19:32.408377 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.409484 kubelet[3342]: E0813 00:19:32.408478 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.409484 kubelet[3342]: I0813 00:19:32.408529 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7cfa030b-4e22-4159-b794-1031c8aae80f-socket-dir\") pod \"csi-node-driver-vh7v8\" (UID: \"7cfa030b-4e22-4159-b794-1031c8aae80f\") " pod="calico-system/csi-node-driver-vh7v8"
Aug 13 00:19:32.410087 kubelet[3342]: E0813 00:19:32.410057 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.411095 kubelet[3342]: W0813 00:19:32.410204 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.411095 kubelet[3342]: E0813 00:19:32.410243 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.417437 kubelet[3342]: E0813 00:19:32.417206 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.418653 kubelet[3342]: W0813 00:19:32.418568 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.419114 kubelet[3342]: E0813 00:19:32.418779 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.421831 kubelet[3342]: E0813 00:19:32.421793 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.422225 kubelet[3342]: W0813 00:19:32.421965 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.422225 kubelet[3342]: E0813 00:19:32.422004 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.427721 kubelet[3342]: E0813 00:19:32.427682 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.428202 kubelet[3342]: W0813 00:19:32.427911 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.428202 kubelet[3342]: E0813 00:19:32.427955 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.428705 kubelet[3342]: E0813 00:19:32.428676 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.428839 kubelet[3342]: W0813 00:19:32.428812 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.428953 kubelet[3342]: E0813 00:19:32.428929 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.483063 containerd[2020]: time="2025-08-13T00:19:32.482549523Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:19:32.483063 containerd[2020]: time="2025-08-13T00:19:32.482692683Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:19:32.483063 containerd[2020]: time="2025-08-13T00:19:32.482720211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:19:32.483063 containerd[2020]: time="2025-08-13T00:19:32.482897055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:19:32.514286 kubelet[3342]: E0813 00:19:32.512336 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.514286 kubelet[3342]: W0813 00:19:32.512439 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.514286 kubelet[3342]: E0813 00:19:32.512690 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.514286 kubelet[3342]: E0813 00:19:32.513795 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.514729 kubelet[3342]: W0813 00:19:32.513823 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.514729 kubelet[3342]: E0813 00:19:32.514624 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.515759 kubelet[3342]: E0813 00:19:32.515197 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.515759 kubelet[3342]: W0813 00:19:32.515234 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.515759 kubelet[3342]: E0813 00:19:32.515290 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.516984 kubelet[3342]: E0813 00:19:32.516739 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.517121 kubelet[3342]: W0813 00:19:32.516789 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.518498 kubelet[3342]: E0813 00:19:32.517083 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.519870 kubelet[3342]: E0813 00:19:32.519539 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.519870 kubelet[3342]: W0813 00:19:32.519578 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.519870 kubelet[3342]: E0813 00:19:32.519627 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.521510 kubelet[3342]: E0813 00:19:32.520830 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.521510 kubelet[3342]: W0813 00:19:32.521088 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.521510 kubelet[3342]: E0813 00:19:32.521361 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.523768 kubelet[3342]: E0813 00:19:32.522739 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.523768 kubelet[3342]: W0813 00:19:32.522772 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.524375 kubelet[3342]: E0813 00:19:32.524114 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.526013 kubelet[3342]: E0813 00:19:32.525140 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.526013 kubelet[3342]: W0813 00:19:32.525174 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.526013 kubelet[3342]: E0813 00:19:32.525226 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.531658 kubelet[3342]: E0813 00:19:32.530367 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.531658 kubelet[3342]: W0813 00:19:32.530416 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.531658 kubelet[3342]: E0813 00:19:32.530492 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.532551 kubelet[3342]: E0813 00:19:32.532397 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.532910 kubelet[3342]: W0813 00:19:32.532430 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.534753 kubelet[3342]: E0813 00:19:32.533246 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.535279 kubelet[3342]: E0813 00:19:32.535248 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.535598 kubelet[3342]: W0813 00:19:32.535434 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.536579 kubelet[3342]: E0813 00:19:32.535923 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.540616 kubelet[3342]: E0813 00:19:32.540574 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.541364 kubelet[3342]: W0813 00:19:32.540777 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.544739 kubelet[3342]: E0813 00:19:32.544554 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.544898 kubelet[3342]: E0813 00:19:32.544822 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.544898 kubelet[3342]: W0813 00:19:32.544843 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.546076 kubelet[3342]: E0813 00:19:32.545525 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.546076 kubelet[3342]: W0813 00:19:32.545597 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.546076 kubelet[3342]: E0813 00:19:32.545531 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.546076 kubelet[3342]: E0813 00:19:32.545682 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.547821 kubelet[3342]: E0813 00:19:32.546595 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.547821 kubelet[3342]: W0813 00:19:32.546633 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.547821 kubelet[3342]: E0813 00:19:32.546948 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.548189 kubelet[3342]: E0813 00:19:32.547822 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.548189 kubelet[3342]: W0813 00:19:32.547882 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.551538 kubelet[3342]: E0813 00:19:32.548888 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.551538 kubelet[3342]: W0813 00:19:32.549060 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.551538 kubelet[3342]: E0813 00:19:32.549807 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:32.551538 kubelet[3342]: W0813 00:19:32.549864 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:32.551538 kubelet[3342]: E0813 00:19:32.549950 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.551538 kubelet[3342]: E0813 00:19:32.550017 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:19:32.551538 kubelet[3342]: E0813 00:19:32.550060 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Aug 13 00:19:32.551538 kubelet[3342]: E0813 00:19:32.550371 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:19:32.551538 kubelet[3342]: W0813 00:19:32.550390 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:19:32.551538 kubelet[3342]: E0813 00:19:32.550807 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:19:32.552109 kubelet[3342]: W0813 00:19:32.550850 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:19:32.552109 kubelet[3342]: E0813 00:19:32.551320 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:19:32.552109 kubelet[3342]: W0813 00:19:32.551339 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:19:32.552109 kubelet[3342]: E0813 00:19:32.551414 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:19:32.552109 kubelet[3342]: E0813 00:19:32.551914 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:19:32.552109 kubelet[3342]: W0813 00:19:32.551958 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:19:32.552109 kubelet[3342]: E0813 00:19:32.551984 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:19:32.552109 kubelet[3342]: E0813 00:19:32.552061 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:19:32.556547 kubelet[3342]: E0813 00:19:32.552818 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:19:32.556547 kubelet[3342]: W0813 00:19:32.553112 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:19:32.556547 kubelet[3342]: E0813 00:19:32.553154 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:19:32.556547 kubelet[3342]: E0813 00:19:32.554846 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:19:32.553902 systemd[1]: Started cri-containerd-709d06bb8cc38e2a23cc67342969a36583cc3617884df775b4ebe27a3c36ddae.scope - libcontainer container 709d06bb8cc38e2a23cc67342969a36583cc3617884df775b4ebe27a3c36ddae. Aug 13 00:19:32.559537 kubelet[3342]: E0813 00:19:32.558977 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:19:32.559537 kubelet[3342]: W0813 00:19:32.559013 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:19:32.559537 kubelet[3342]: E0813 00:19:32.559057 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:19:32.560770 kubelet[3342]: E0813 00:19:32.560643 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:19:32.560770 kubelet[3342]: W0813 00:19:32.560679 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:19:32.560770 kubelet[3342]: E0813 00:19:32.560712 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:19:32.622659 kubelet[3342]: E0813 00:19:32.622619 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:19:32.624913 kubelet[3342]: W0813 00:19:32.624667 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:19:32.624913 kubelet[3342]: E0813 00:19:32.624722 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:19:32.731688 containerd[2020]: time="2025-08-13T00:19:32.731439569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-96q5g,Uid:aecea146-6ee8-45e4-861b-731c27205813,Namespace:calico-system,Attempt:0,} returns sandbox id \"709d06bb8cc38e2a23cc67342969a36583cc3617884df775b4ebe27a3c36ddae\"" Aug 13 00:19:32.793385 systemd[1]: run-containerd-runc-k8s.io-288daa0b65a8b1d56ea85d719cf89f247d37040bc6e6735f0c99628793daee6d-runc.vaWtWJ.mount: Deactivated successfully. Aug 13 00:19:33.662132 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount705538041.mount: Deactivated successfully. 
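
The repeated kubelet errors above come from the FlexVolume prober: on each scan of the plugin directory, kubelet runs the driver binary under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/<vendor~driver>/ with the single argument init and expects a JSON status object on stdout. Here the uds binary does not exist yet, so the call yields empty output, and unmarshalling an empty byte slice is exactly what produces "unexpected end of JSON input". A minimal Go sketch of that call sequence (not kubelet's actual code; the driverStatus fields follow the published FlexVolume convention):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // driverStatus mirrors the JSON a FlexVolume driver must print for "init",
    // e.g. {"status":"Success","capabilities":{"attach":false}}.
    type driverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func probeDriver(path string) (*driverStatus, error) {
        out, err := exec.Command(path, "init").Output()
        if err != nil {
            // With the binary missing, the call errors out, as in the
            // log's "driver call failed" warnings.
            return nil, fmt.Errorf("driver call failed: %w", err)
        }
        var st driverStatus
        // With empty output this fails exactly like the log:
        // "unexpected end of JSON input".
        if err := json.Unmarshal(out, &st); err != nil {
            return nil, fmt.Errorf("failed to unmarshal output %q: %w", out, err)
        }
        return &st, nil
    }

    func main() {
        st, err := probeDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
        if err != nil {
            fmt.Println("probe error:", err)
            return
        }
        fmt.Printf("driver initialised: %+v\n", st)
    }
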
Aug 13 00:19:34.311155 kubelet[3342]: E0813 00:19:34.311069 3342 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vh7v8" podUID="7cfa030b-4e22-4159-b794-1031c8aae80f"
Aug 13 00:19:35.079102 containerd[2020]: time="2025-08-13T00:19:35.079033144Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:19:35.080565 containerd[2020]: time="2025-08-13T00:19:35.080499688Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207"
Aug 13 00:19:35.081722 containerd[2020]: time="2025-08-13T00:19:35.081667732Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:19:35.085493 containerd[2020]: time="2025-08-13T00:19:35.085371088Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:19:35.087166 containerd[2020]: time="2025-08-13T00:19:35.086987644Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 2.836047386s"
Aug 13 00:19:35.087166 containerd[2020]: time="2025-08-13T00:19:35.087037708Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\""
Aug 13 00:19:35.089339 containerd[2020]: time="2025-08-13T00:19:35.089044192Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Aug 13 00:19:35.122517 containerd[2020]: time="2025-08-13T00:19:35.122381260Z" level=info msg="CreateContainer within sandbox \"288daa0b65a8b1d56ea85d719cf89f247d37040bc6e6735f0c99628793daee6d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Aug 13 00:19:35.144554 containerd[2020]: time="2025-08-13T00:19:35.142284065Z" level=info msg="CreateContainer within sandbox \"288daa0b65a8b1d56ea85d719cf89f247d37040bc6e6735f0c99628793daee6d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"73a564160ec7fcdb60296d9211d8834a2aab673983d374bb581361d80883c2c3\""
Aug 13 00:19:35.144831 containerd[2020]: time="2025-08-13T00:19:35.144732077Z" level=info msg="StartContainer for \"73a564160ec7fcdb60296d9211d8834a2aab673983d374bb581361d80883c2c3\""
Aug 13 00:19:35.150387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3137789117.mount: Deactivated successfully.
Aug 13 00:19:35.194786 systemd[1]: Started cri-containerd-73a564160ec7fcdb60296d9211d8834a2aab673983d374bb581361d80883c2c3.scope - libcontainer container 73a564160ec7fcdb60296d9211d8834a2aab673983d374bb581361d80883c2c3.
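
containerd reports each pull with a wall-clock duration ("in 2.836047386s" above). Kubelet's pod_startup_latency_tracker entry later in this log carries the matching firstStartedPulling/lastFinishedPulling timestamps for the same pod; the difference between them (~2.839s) brackets containerd's own measurement, the small gap presumably being CRI round-trip overhead. A self-contained Go sketch of that arithmetic, using the two timestamps quoted from this log (illustrative only):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Layout matching the format kubelet prints these timestamps in.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        started, _ := time.Parse(layout, "2025-08-13 00:19:32.249426626 +0000 UTC")
        finished, _ := time.Parse(layout, "2025-08-13 00:19:35.088478224 +0000 UTC")

        // Prints "pull window: 2.839051598s" for the calico-typha image.
        fmt.Println("pull window:", finished.Sub(started))
    }
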
Aug 13 00:19:35.265852 containerd[2020]: time="2025-08-13T00:19:35.265781597Z" level=info msg="StartContainer for \"73a564160ec7fcdb60296d9211d8834a2aab673983d374bb581361d80883c2c3\" returns successfully"
Aug 13 00:19:35.600248 kubelet[3342]: E0813 00:19:35.599945 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:19:35.600248 kubelet[3342]: W0813 00:19:35.599985 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:19:35.600248 kubelet[3342]: E0813 00:19:35.600040 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the three kubelet[3342] messages above repeat verbatim, timestamps advancing, roughly thirty more times between 00:19:35.601 and 00:19:35.682]
Aug 13 00:19:36.311155 kubelet[3342]: E0813 00:19:36.311103 3342 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vh7v8" podUID="7cfa030b-4e22-4159-b794-1031c8aae80f"
Aug 13 00:19:36.402517 containerd[2020]: time="2025-08-13T00:19:36.402346135Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:19:36.403985 containerd[2020]: time="2025-08-13T00:19:36.403928803Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981"
Aug 13 00:19:36.404878 containerd[2020]: time="2025-08-13T00:19:36.404794411Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:19:36.409276 containerd[2020]: time="2025-08-13T00:19:36.409191463Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:19:36.411434 containerd[2020]: time="2025-08-13T00:19:36.411244471Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.322133595s"
Aug 13 00:19:36.411434 containerd[2020]: time="2025-08-13T00:19:36.411307615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\""
Aug 13 00:19:36.417497 containerd[2020]: time="2025-08-13T00:19:36.417111631Z" level=info msg="CreateContainer within sandbox \"709d06bb8cc38e2a23cc67342969a36583cc3617884df775b4ebe27a3c36ddae\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Aug 13 00:19:36.439004 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2416004570.mount: Deactivated successfully.
Aug 13 00:19:36.443850 containerd[2020]: time="2025-08-13T00:19:36.443794459Z" level=info msg="CreateContainer within sandbox \"709d06bb8cc38e2a23cc67342969a36583cc3617884df775b4ebe27a3c36ddae\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0e655357899d92430ab4b28df99d3167a77b0e880a0916c53a719e76080af2e8\""
Aug 13 00:19:36.446063 containerd[2020]: time="2025-08-13T00:19:36.445355599Z" level=info msg="StartContainer for \"0e655357899d92430ab4b28df99d3167a77b0e880a0916c53a719e76080af2e8\""
Aug 13 00:19:36.508779 systemd[1]: Started cri-containerd-0e655357899d92430ab4b28df99d3167a77b0e880a0916c53a719e76080af2e8.scope - libcontainer container 0e655357899d92430ab4b28df99d3167a77b0e880a0916c53a719e76080af2e8.
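
The directory name in the recurring probe errors encodes the FlexVolume plugin identity: kubelet treats each vendor~driver subdirectory of the exec dir as one plugin and invokes the binary named after the driver part, which is how nodeagent~uds resolves to .../nodeagent~uds/uds. The flexvol-driver init container started above (from the pod2daemon-flexvol image) is presumably what installs that missing binary, which would explain why the probe errors stop later in the log. A hedged, stdlib-only sketch of that discovery walk (not kubelet's actual prober):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    const execDir = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec"

    func main() {
        entries, err := os.ReadDir(execDir)
        if err != nil {
            fmt.Println("no FlexVolume plugins:", err)
            return
        }
        for _, e := range entries {
            if !e.IsDir() {
                continue
            }
            // "nodeagent~uds" -> vendor "nodeagent", driver "uds";
            // the expected executable is <dir>/<driver>.
            vendor, driver, ok := strings.Cut(e.Name(), "~")
            if !ok {
                continue
            }
            bin := filepath.Join(execDir, e.Name(), driver)
            if _, err := os.Stat(bin); err != nil {
                fmt.Printf("plugin %s/%s: driver binary missing (%v)\n", vendor, driver, err)
                continue
            }
            fmt.Printf("plugin %s/%s: ready to probe with [init]\n", vendor, driver)
        }
    }
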
Aug 13 00:19:36.565926 containerd[2020]: time="2025-08-13T00:19:36.565080980Z" level=info msg="StartContainer for \"0e655357899d92430ab4b28df99d3167a77b0e880a0916c53a719e76080af2e8\" returns successfully" Aug 13 00:19:36.572038 kubelet[3342]: I0813 00:19:36.570874 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-88c5dcf69-lxljg" podStartSLOduration=2.731797614 podStartE2EDuration="5.570849296s" podCreationTimestamp="2025-08-13 00:19:31 +0000 UTC" firstStartedPulling="2025-08-13 00:19:32.249426626 +0000 UTC m=+31.231939620" lastFinishedPulling="2025-08-13 00:19:35.088478224 +0000 UTC m=+34.070991302" observedRunningTime="2025-08-13 00:19:35.646168003 +0000 UTC m=+34.628681021" watchObservedRunningTime="2025-08-13 00:19:36.570849296 +0000 UTC m=+35.553362302" Aug 13 00:19:36.625900 kubelet[3342]: E0813 00:19:36.625227 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:19:36.625900 kubelet[3342]: W0813 00:19:36.625261 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:19:36.625900 kubelet[3342]: E0813 00:19:36.625293 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:19:36.627772 kubelet[3342]: E0813 00:19:36.627333 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:19:36.627772 kubelet[3342]: W0813 00:19:36.627396 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:19:36.627772 kubelet[3342]: E0813 00:19:36.627430 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:19:36.628421 kubelet[3342]: E0813 00:19:36.628280 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:19:36.628421 kubelet[3342]: W0813 00:19:36.628311 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:19:36.628421 kubelet[3342]: E0813 00:19:36.628366 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:19:36.629879 kubelet[3342]: E0813 00:19:36.629667 3342 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:19:36.629879 kubelet[3342]: W0813 00:19:36.629723 3342 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:19:36.629879 kubelet[3342]: E0813 00:19:36.629800 3342 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:19:36.633076 systemd[1]: cri-containerd-0e655357899d92430ab4b28df99d3167a77b0e880a0916c53a719e76080af2e8.scope: Deactivated successfully. Aug 13 00:19:37.051394 containerd[2020]: time="2025-08-13T00:19:37.051290730Z" level=info msg="shim disconnected" id=0e655357899d92430ab4b28df99d3167a77b0e880a0916c53a719e76080af2e8 namespace=k8s.io Aug 13 00:19:37.051394 containerd[2020]: time="2025-08-13T00:19:37.051390318Z" level=warning msg="cleaning up after shim disconnected" id=0e655357899d92430ab4b28df99d3167a77b0e880a0916c53a719e76080af2e8 namespace=k8s.io Aug 13 00:19:37.051848 containerd[2020]: time="2025-08-13T00:19:37.051413034Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:19:37.098503 systemd[1]: run-containerd-runc-k8s.io-0e655357899d92430ab4b28df99d3167a77b0e880a0916c53a719e76080af2e8-runc.GFKHG7.mount: Deactivated successfully. Aug 13 00:19:37.098693 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e655357899d92430ab4b28df99d3167a77b0e880a0916c53a719e76080af2e8-rootfs.mount: Deactivated successfully. Aug 13 00:19:37.553093 containerd[2020]: time="2025-08-13T00:19:37.552766521Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 13 00:19:38.310484 kubelet[3342]: E0813 00:19:38.310404 3342 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vh7v8" podUID="7cfa030b-4e22-4159-b794-1031c8aae80f" Aug 13 00:19:40.311871 kubelet[3342]: E0813 00:19:40.311791 3342 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vh7v8" podUID="7cfa030b-4e22-4159-b794-1031c8aae80f" Aug 13 00:19:41.414240 containerd[2020]: time="2025-08-13T00:19:41.414183804Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:41.416356 containerd[2020]: time="2025-08-13T00:19:41.416279868Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Aug 13 00:19:41.418862 containerd[2020]: time="2025-08-13T00:19:41.418776120Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:41.423633 containerd[2020]: time="2025-08-13T00:19:41.423580608Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:41.425322 containerd[2020]: time="2025-08-13T00:19:41.425112504Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 3.872282911s" Aug 13 00:19:41.425322 containerd[2020]: time="2025-08-13T00:19:41.425169648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image 
reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Aug 13 00:19:41.431725 containerd[2020]: time="2025-08-13T00:19:41.430677996Z" level=info msg="CreateContainer within sandbox \"709d06bb8cc38e2a23cc67342969a36583cc3617884df775b4ebe27a3c36ddae\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 13 00:19:41.463481 containerd[2020]: time="2025-08-13T00:19:41.463383804Z" level=info msg="CreateContainer within sandbox \"709d06bb8cc38e2a23cc67342969a36583cc3617884df775b4ebe27a3c36ddae\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3da59ac269f94809b61a5a20c089e6958edc4e2710850fd4d2b1043d8acc6c1b\"" Aug 13 00:19:41.466522 containerd[2020]: time="2025-08-13T00:19:41.465104352Z" level=info msg="StartContainer for \"3da59ac269f94809b61a5a20c089e6958edc4e2710850fd4d2b1043d8acc6c1b\"" Aug 13 00:19:41.524787 systemd[1]: Started cri-containerd-3da59ac269f94809b61a5a20c089e6958edc4e2710850fd4d2b1043d8acc6c1b.scope - libcontainer container 3da59ac269f94809b61a5a20c089e6958edc4e2710850fd4d2b1043d8acc6c1b. Aug 13 00:19:41.584449 containerd[2020]: time="2025-08-13T00:19:41.584328793Z" level=info msg="StartContainer for \"3da59ac269f94809b61a5a20c089e6958edc4e2710850fd4d2b1043d8acc6c1b\" returns successfully" Aug 13 00:19:42.310902 kubelet[3342]: E0813 00:19:42.310786 3342 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vh7v8" podUID="7cfa030b-4e22-4159-b794-1031c8aae80f" Aug 13 00:19:42.674736 containerd[2020]: time="2025-08-13T00:19:42.674620706Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:19:42.678992 systemd[1]: cri-containerd-3da59ac269f94809b61a5a20c089e6958edc4e2710850fd4d2b1043d8acc6c1b.scope: Deactivated successfully. Aug 13 00:19:42.710535 kubelet[3342]: I0813 00:19:42.710133 3342 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 13 00:19:42.732124 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3da59ac269f94809b61a5a20c089e6958edc4e2710850fd4d2b1043d8acc6c1b-rootfs.mount: Deactivated successfully. 
Aug 13 00:19:42.779706 kubelet[3342]: W0813 00:19:42.779636 3342 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-31-36" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-31-36' and this object
Aug 13 00:19:42.779960 kubelet[3342]: E0813 00:19:42.779712 3342 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ip-172-31-31-36\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-31-36' and this object" logger="UnhandledError"
Aug 13 00:19:42.779960 kubelet[3342]: I0813 00:19:42.779798 3342 status_manager.go:890] "Failed to get status for pod" podUID="4a84ee61-fe58-4097-b314-181a986dece7" pod="kube-system/coredns-668d6bf9bc-6mfph" err="pods \"coredns-668d6bf9bc-6mfph\" is forbidden: User \"system:node:ip-172-31-31-36\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-31-36' and this object"
Aug 13 00:19:42.813275 systemd[1]: Created slice kubepods-burstable-pod4a84ee61_fe58_4097_b314_181a986dece7.slice - libcontainer container kubepods-burstable-pod4a84ee61_fe58_4097_b314_181a986dece7.slice.
Aug 13 00:19:42.818574 kubelet[3342]: I0813 00:19:42.818517 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a84ee61-fe58-4097-b314-181a986dece7-config-volume\") pod \"coredns-668d6bf9bc-6mfph\" (UID: \"4a84ee61-fe58-4097-b314-181a986dece7\") " pod="kube-system/coredns-668d6bf9bc-6mfph"
Aug 13 00:19:42.818755 kubelet[3342]: I0813 00:19:42.818582 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a64667d-5251-47e9-8797-9a7e4c011870-config-volume\") pod \"coredns-668d6bf9bc-wfn57\" (UID: \"5a64667d-5251-47e9-8797-9a7e4c011870\") " pod="kube-system/coredns-668d6bf9bc-wfn57"
Aug 13 00:19:42.818755 kubelet[3342]: I0813 00:19:42.818629 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9h4g\" (UniqueName: \"kubernetes.io/projected/5a64667d-5251-47e9-8797-9a7e4c011870-kube-api-access-g9h4g\") pod \"coredns-668d6bf9bc-wfn57\" (UID: \"5a64667d-5251-47e9-8797-9a7e4c011870\") " pod="kube-system/coredns-668d6bf9bc-wfn57"
Aug 13 00:19:42.818755 kubelet[3342]: I0813 00:19:42.818690 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knc2q\" (UniqueName: \"kubernetes.io/projected/4a84ee61-fe58-4097-b314-181a986dece7-kube-api-access-knc2q\") pod \"coredns-668d6bf9bc-6mfph\" (UID: \"4a84ee61-fe58-4097-b314-181a986dece7\") " pod="kube-system/coredns-668d6bf9bc-6mfph"
Aug 13 00:19:42.839955 systemd[1]: Created slice kubepods-burstable-pod5a64667d_5251_47e9_8797_9a7e4c011870.slice - libcontainer container kubepods-burstable-pod5a64667d_5251_47e9_8797_9a7e4c011870.slice.
Aug 13 00:19:42.861875 systemd[1]: Created slice kubepods-besteffort-pod28081b87_c735_43fc_9236_d52ebf6d339c.slice - libcontainer container kubepods-besteffort-pod28081b87_c735_43fc_9236_d52ebf6d339c.slice.
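
The reflector failures above look like the node authorizer at work rather than a broken RBAC rule: a kubelet running as system:node:<name> may only read Secrets and ConfigMaps referenced by pods already bound to that node, so the coredns ConfigMap stays forbidden until the scheduler's binding propagates into the authorizer's graph, and the errors are transient. Conceptually the API server is answering a SubjectAccessReview for each such request; a hedged, stdlib-only sketch of that request body (the struct shapes follow authorization.k8s.io/v1, trimmed to the fields used here):

    package main

    import (
        "encoding/json"
        "os"
    )

    type resourceAttributes struct {
        Namespace string `json:"namespace"`
        Verb      string `json:"verb"`
        Resource  string `json:"resource"`
        Name      string `json:"name"`
    }

    type sarSpec struct {
        User               string             `json:"user"`
        Groups             []string           `json:"groups"`
        ResourceAttributes resourceAttributes `json:"resourceAttributes"`
    }

    type subjectAccessReview struct {
        APIVersion string  `json:"apiVersion"`
        Kind       string  `json:"kind"`
        Spec       sarSpec `json:"spec"`
    }

    func main() {
        // The check that failed in the log: can this node list the
        // "coredns" ConfigMap in kube-system?
        sar := subjectAccessReview{
            APIVersion: "authorization.k8s.io/v1",
            Kind:       "SubjectAccessReview",
            Spec: sarSpec{
                User:   "system:node:ip-172-31-31-36",
                Groups: []string{"system:nodes"},
                ResourceAttributes: resourceAttributes{
                    Namespace: "kube-system",
                    Verb:      "list",
                    Resource:  "configmaps",
                    Name:      "coredns",
                },
            },
        }
        enc := json.NewEncoder(os.Stdout)
        enc.SetIndent("", "  ")
        // POST a body like this to /apis/authorization.k8s.io/v1/subjectaccessreviews.
        enc.Encode(sar)
    }
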
Aug 13 00:19:42.883542 systemd[1]: Created slice kubepods-besteffort-pod5d408832_1099_4d99_a077_a600c984323a.slice - libcontainer container kubepods-besteffort-pod5d408832_1099_4d99_a077_a600c984323a.slice.
Aug 13 00:19:42.898007 systemd[1]: Created slice kubepods-besteffort-pod1d377c06_c55d_4e39_863b_173de05fa641.slice - libcontainer container kubepods-besteffort-pod1d377c06_c55d_4e39_863b_173de05fa641.slice.
Aug 13 00:19:42.918929 systemd[1]: Created slice kubepods-besteffort-pod3467894e_7a39_43fd_92a5_fb1ca4e54ea3.slice - libcontainer container kubepods-besteffort-pod3467894e_7a39_43fd_92a5_fb1ca4e54ea3.slice.
Aug 13 00:19:42.926938 kubelet[3342]: I0813 00:19:42.919665 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtbp9\" (UniqueName: \"kubernetes.io/projected/28081b87-c735-43fc-9236-d52ebf6d339c-kube-api-access-qtbp9\") pod \"calico-apiserver-bd6797c4b-4bgrn\" (UID: \"28081b87-c735-43fc-9236-d52ebf6d339c\") " pod="calico-apiserver/calico-apiserver-bd6797c4b-4bgrn"
Aug 13 00:19:42.926938 kubelet[3342]: I0813 00:19:42.919839 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49aa71c9-5b84-402e-9316-844c45ada5f3-config\") pod \"goldmane-768f4c5c69-nvbpr\" (UID: \"49aa71c9-5b84-402e-9316-844c45ada5f3\") " pod="calico-system/goldmane-768f4c5c69-nvbpr"
Aug 13 00:19:42.926938 kubelet[3342]: I0813 00:19:42.920112 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1d377c06-c55d-4e39-863b-173de05fa641-calico-apiserver-certs\") pod \"calico-apiserver-bd6797c4b-r7mhx\" (UID: \"1d377c06-c55d-4e39-863b-173de05fa641\") " pod="calico-apiserver/calico-apiserver-bd6797c4b-r7mhx"
Aug 13 00:19:42.926938 kubelet[3342]: I0813 00:19:42.920184 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nw2c\" (UniqueName: \"kubernetes.io/projected/1d377c06-c55d-4e39-863b-173de05fa641-kube-api-access-8nw2c\") pod \"calico-apiserver-bd6797c4b-r7mhx\" (UID: \"1d377c06-c55d-4e39-863b-173de05fa641\") " pod="calico-apiserver/calico-apiserver-bd6797c4b-r7mhx"
Aug 13 00:19:42.926938 kubelet[3342]: I0813 00:19:42.920290 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sbjf\" (UniqueName: \"kubernetes.io/projected/49aa71c9-5b84-402e-9316-844c45ada5f3-kube-api-access-5sbjf\") pod \"goldmane-768f4c5c69-nvbpr\" (UID: \"49aa71c9-5b84-402e-9316-844c45ada5f3\") " pod="calico-system/goldmane-768f4c5c69-nvbpr"
Aug 13 00:19:42.927324 kubelet[3342]: I0813 00:19:42.920357 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d408832-1099-4d99-a077-a600c984323a-tigera-ca-bundle\") pod \"calico-kube-controllers-7689d867cd-nh2hh\" (UID: \"5d408832-1099-4d99-a077-a600c984323a\") " pod="calico-system/calico-kube-controllers-7689d867cd-nh2hh"
Aug 13 00:19:42.927324 kubelet[3342]: I0813 00:19:42.920398 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49aa71c9-5b84-402e-9316-844c45ada5f3-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-nvbpr\" (UID: \"49aa71c9-5b84-402e-9316-844c45ada5f3\") " pod="calico-system/goldmane-768f4c5c69-nvbpr"
Aug 13 00:19:42.927324 kubelet[3342]: I0813 00:19:42.920615 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3467894e-7a39-43fd-92a5-fb1ca4e54ea3-whisker-ca-bundle\") pod \"whisker-dc6d9647-98mdm\" (UID: \"3467894e-7a39-43fd-92a5-fb1ca4e54ea3\") " pod="calico-system/whisker-dc6d9647-98mdm"
Aug 13 00:19:42.927324 kubelet[3342]: I0813 00:19:42.921046 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/28081b87-c735-43fc-9236-d52ebf6d339c-calico-apiserver-certs\") pod \"calico-apiserver-bd6797c4b-4bgrn\" (UID: \"28081b87-c735-43fc-9236-d52ebf6d339c\") " pod="calico-apiserver/calico-apiserver-bd6797c4b-4bgrn"
Aug 13 00:19:42.927324 kubelet[3342]: I0813 00:19:42.921245 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3467894e-7a39-43fd-92a5-fb1ca4e54ea3-whisker-backend-key-pair\") pod \"whisker-dc6d9647-98mdm\" (UID: \"3467894e-7a39-43fd-92a5-fb1ca4e54ea3\") " pod="calico-system/whisker-dc6d9647-98mdm"
Aug 13 00:19:42.927635 kubelet[3342]: I0813 00:19:42.921420 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lch7f\" (UniqueName: \"kubernetes.io/projected/3467894e-7a39-43fd-92a5-fb1ca4e54ea3-kube-api-access-lch7f\") pod \"whisker-dc6d9647-98mdm\" (UID: \"3467894e-7a39-43fd-92a5-fb1ca4e54ea3\") " pod="calico-system/whisker-dc6d9647-98mdm"
Aug 13 00:19:42.927635 kubelet[3342]: I0813 00:19:42.923232 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnbrk\" (UniqueName: \"kubernetes.io/projected/5d408832-1099-4d99-a077-a600c984323a-kube-api-access-bnbrk\") pod \"calico-kube-controllers-7689d867cd-nh2hh\" (UID: \"5d408832-1099-4d99-a077-a600c984323a\") " pod="calico-system/calico-kube-controllers-7689d867cd-nh2hh"
Aug 13 00:19:42.927635 kubelet[3342]: I0813 00:19:42.923426 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/49aa71c9-5b84-402e-9316-844c45ada5f3-goldmane-key-pair\") pod \"goldmane-768f4c5c69-nvbpr\" (UID: \"49aa71c9-5b84-402e-9316-844c45ada5f3\") " pod="calico-system/goldmane-768f4c5c69-nvbpr"
Aug 13 00:19:42.947178 systemd[1]: Created slice kubepods-besteffort-pod49aa71c9_5b84_402e_9316_844c45ada5f3.slice - libcontainer container kubepods-besteffort-pod49aa71c9_5b84_402e_9316_844c45ada5f3.slice.
Aug 13 00:19:43.171722 containerd[2020]: time="2025-08-13T00:19:43.171529308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bd6797c4b-4bgrn,Uid:28081b87-c735-43fc-9236-d52ebf6d339c,Namespace:calico-apiserver,Attempt:0,}"
Aug 13 00:19:43.193666 containerd[2020]: time="2025-08-13T00:19:43.193090825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7689d867cd-nh2hh,Uid:5d408832-1099-4d99-a077-a600c984323a,Namespace:calico-system,Attempt:0,}"
Aug 13 00:19:43.232730 containerd[2020]: time="2025-08-13T00:19:43.232114537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bd6797c4b-r7mhx,Uid:1d377c06-c55d-4e39-863b-173de05fa641,Namespace:calico-apiserver,Attempt:0,}"
Aug 13 00:19:43.237635 containerd[2020]: time="2025-08-13T00:19:43.237524449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-dc6d9647-98mdm,Uid:3467894e-7a39-43fd-92a5-fb1ca4e54ea3,Namespace:calico-system,Attempt:0,}"
Aug 13 00:19:43.243268 containerd[2020]: time="2025-08-13T00:19:43.242862385Z" level=info msg="shim disconnected" id=3da59ac269f94809b61a5a20c089e6958edc4e2710850fd4d2b1043d8acc6c1b namespace=k8s.io
Aug 13 00:19:43.243268 containerd[2020]: time="2025-08-13T00:19:43.242980537Z" level=warning msg="cleaning up after shim disconnected" id=3da59ac269f94809b61a5a20c089e6958edc4e2710850fd4d2b1043d8acc6c1b namespace=k8s.io
Aug 13 00:19:43.243268 containerd[2020]: time="2025-08-13T00:19:43.243000469Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:19:43.257097 containerd[2020]: time="2025-08-13T00:19:43.256959361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-nvbpr,Uid:49aa71c9-5b84-402e-9316-844c45ada5f3,Namespace:calico-system,Attempt:0,}"
Aug 13 00:19:43.586840 containerd[2020]: time="2025-08-13T00:19:43.586115091Z" level=error msg="Failed to destroy network for sandbox \"4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:19:43.587377 containerd[2020]: time="2025-08-13T00:19:43.587331147Z" level=error msg="encountered an error cleaning up failed sandbox \"4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:19:43.588359 containerd[2020]: time="2025-08-13T00:19:43.588234987Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bd6797c4b-r7mhx,Uid:1d377c06-c55d-4e39-863b-173de05fa641,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:19:43.590552 kubelet[3342]: E0813 00:19:43.589767 3342 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:19:43.590552 kubelet[3342]: E0813 00:19:43.589851 3342 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bd6797c4b-r7mhx"
Aug 13 00:19:43.590552 kubelet[3342]: E0813 00:19:43.589938 3342 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bd6797c4b-r7mhx"
Aug 13 00:19:43.591297 kubelet[3342]: E0813 00:19:43.590015 3342 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bd6797c4b-r7mhx_calico-apiserver(1d377c06-c55d-4e39-863b-173de05fa641)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bd6797c4b-r7mhx_calico-apiserver(1d377c06-c55d-4e39-863b-173de05fa641)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bd6797c4b-r7mhx" podUID="1d377c06-c55d-4e39-863b-173de05fa641"
Aug 13 00:19:43.611295 containerd[2020]: time="2025-08-13T00:19:43.607100091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\""
Aug 13 00:19:43.637545 containerd[2020]: time="2025-08-13T00:19:43.637427427Z" level=error msg="Failed to destroy network for sandbox \"bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:19:43.638564 containerd[2020]: time="2025-08-13T00:19:43.638440011Z" level=error msg="encountered an error cleaning up failed sandbox \"bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:19:43.638873 containerd[2020]: time="2025-08-13T00:19:43.638809743Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7689d867cd-nh2hh,Uid:5d408832-1099-4d99-a077-a600c984323a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:19:43.639385 containerd[2020]: time="2025-08-13T00:19:43.639254199Z" level=error msg="Failed to destroy network for sandbox \"085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:19:43.640923 kubelet[3342]: E0813 00:19:43.639763 3342 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:19:43.640923 kubelet[3342]: E0813 00:19:43.639838 3342 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7689d867cd-nh2hh"
Aug 13 00:19:43.640923 kubelet[3342]: E0813 00:19:43.639871 3342 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7689d867cd-nh2hh"
Aug 13 00:19:43.641215 containerd[2020]: time="2025-08-13T00:19:43.640089327Z" level=error msg="Failed to destroy network for sandbox \"3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:19:43.642550 kubelet[3342]: E0813 00:19:43.639948 3342 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7689d867cd-nh2hh_calico-system(5d408832-1099-4d99-a077-a600c984323a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7689d867cd-nh2hh_calico-system(5d408832-1099-4d99-a077-a600c984323a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7689d867cd-nh2hh" podUID="5d408832-1099-4d99-a077-a600c984323a"
Aug 13 00:19:43.644080 containerd[2020]: time="2025-08-13T00:19:43.643726743Z" level=error msg="Failed to destroy network for sandbox \"cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:19:43.646389 containerd[2020]: time="2025-08-13T00:19:43.646047531Z" level=error msg="encountered an error cleaning up failed sandbox \"cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:19:43.646389 containerd[2020]: time="2025-08-13T00:19:43.646321371Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-dc6d9647-98mdm,Uid:3467894e-7a39-43fd-92a5-fb1ca4e54ea3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:19:43.647759 containerd[2020]: time="2025-08-13T00:19:43.646724619Z" level=error msg="encountered an error cleaning up failed sandbox \"3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:19:43.647759 containerd[2020]: time="2025-08-13T00:19:43.646825275Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bd6797c4b-4bgrn,Uid:28081b87-c735-43fc-9236-d52ebf6d339c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:19:43.647759 containerd[2020]: time="2025-08-13T00:19:43.647310759Z" level=error msg="encountered an error cleaning up failed sandbox \"085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:19:43.647759 containerd[2020]: time="2025-08-13T00:19:43.647397663Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-nvbpr,Uid:49aa71c9-5b84-402e-9316-844c45ada5f3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:19:43.650041 kubelet[3342]: E0813 00:19:43.648307 3342 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:19:43.650041 kubelet[3342]: E0813 00:19:43.648391 3342 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-nvbpr"
Aug 13 00:19:43.650041 kubelet[3342]: E0813 00:19:43.648425 3342 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-nvbpr"
Aug 13 00:19:43.650315 kubelet[3342]: E0813 00:19:43.648551 3342 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-nvbpr_calico-system(49aa71c9-5b84-402e-9316-844c45ada5f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-nvbpr_calico-system(49aa71c9-5b84-402e-9316-844c45ada5f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-nvbpr" podUID="49aa71c9-5b84-402e-9316-844c45ada5f3"
Aug 13 00:19:43.650315 kubelet[3342]: E0813 00:19:43.648620 3342 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:19:43.650315 kubelet[3342]: E0813 00:19:43.648660 3342 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bd6797c4b-4bgrn"
Aug 13 00:19:43.652794 kubelet[3342]: E0813 00:19:43.648690 3342 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bd6797c4b-4bgrn"
Aug 13 00:19:43.652794 kubelet[3342]: E0813 00:19:43.648738 3342 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bd6797c4b-4bgrn_calico-apiserver(28081b87-c735-43fc-9236-d52ebf6d339c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bd6797c4b-4bgrn_calico-apiserver(28081b87-c735-43fc-9236-d52ebf6d339c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bd6797c4b-4bgrn" podUID="28081b87-c735-43fc-9236-d52ebf6d339c"
Aug 13 00:19:43.652794 kubelet[3342]: E0813 00:19:43.648800 3342 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:19:43.654049 kubelet[3342]: E0813 00:19:43.648835 3342 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-dc6d9647-98mdm"
Aug 13 00:19:43.654049 kubelet[3342]: E0813 00:19:43.648883 3342 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-dc6d9647-98mdm"
Aug 13 00:19:43.654049 kubelet[3342]: E0813 00:19:43.648928 3342 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-dc6d9647-98mdm_calico-system(3467894e-7a39-43fd-92a5-fb1ca4e54ea3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-dc6d9647-98mdm_calico-system(3467894e-7a39-43fd-92a5-fb1ca4e54ea3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-dc6d9647-98mdm" podUID="3467894e-7a39-43fd-92a5-fb1ca4e54ea3"
Aug 13 00:19:43.921165 kubelet[3342]: E0813 00:19:43.921070 3342 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
Aug 13 00:19:43.921907 kubelet[3342]: E0813 00:19:43.921070 3342 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
Aug 13 00:19:43.921907 kubelet[3342]: E0813 00:19:43.921452 3342 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5a64667d-5251-47e9-8797-9a7e4c011870-config-volume podName:5a64667d-5251-47e9-8797-9a7e4c011870 nodeName:}" failed. No retries permitted until 2025-08-13 00:19:44.421408256 +0000 UTC m=+43.403921238 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5a64667d-5251-47e9-8797-9a7e4c011870-config-volume") pod "coredns-668d6bf9bc-wfn57" (UID: "5a64667d-5251-47e9-8797-9a7e4c011870") : failed to sync configmap cache: timed out waiting for the condition
Aug 13 00:19:43.922150 kubelet[3342]: E0813 00:19:43.921931 3342 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4a84ee61-fe58-4097-b314-181a986dece7-config-volume podName:4a84ee61-fe58-4097-b314-181a986dece7 nodeName:}" failed. No retries permitted until 2025-08-13 00:19:44.421892192 +0000 UTC m=+43.404405174 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4a84ee61-fe58-4097-b314-181a986dece7-config-volume") pod "coredns-668d6bf9bc-6mfph" (UID: "4a84ee61-fe58-4097-b314-181a986dece7") : failed to sync configmap cache: timed out waiting for the condition
Aug 13 00:19:44.321071 systemd[1]: Created slice kubepods-besteffort-pod7cfa030b_4e22_4159_b794_1031c8aae80f.slice - libcontainer container kubepods-besteffort-pod7cfa030b_4e22_4159_b794_1031c8aae80f.slice.
Aug 13 00:19:44.327816 containerd[2020]: time="2025-08-13T00:19:44.327675410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vh7v8,Uid:7cfa030b-4e22-4159-b794-1031c8aae80f,Namespace:calico-system,Attempt:0,}"
Aug 13 00:19:44.459239 containerd[2020]: time="2025-08-13T00:19:44.459154551Z" level=error msg="Failed to destroy network for sandbox \"b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:19:44.460155 containerd[2020]: time="2025-08-13T00:19:44.459873567Z" level=error msg="encountered an error cleaning up failed sandbox \"b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:19:44.460155 containerd[2020]: time="2025-08-13T00:19:44.459965331Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vh7v8,Uid:7cfa030b-4e22-4159-b794-1031c8aae80f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:19:44.462538 kubelet[3342]: E0813 00:19:44.460537 3342 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:19:44.462538 kubelet[3342]: E0813 00:19:44.460616 3342 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vh7v8"
Aug 13 00:19:44.462538 kubelet[3342]: E0813 00:19:44.460650 3342 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vh7v8"
Aug 13 00:19:44.462810 kubelet[3342]: E0813 00:19:44.460716 3342 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-vh7v8_calico-system(7cfa030b-4e22-4159-b794-1031c8aae80f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vh7v8_calico-system(7cfa030b-4e22-4159-b794-1031c8aae80f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vh7v8" podUID="7cfa030b-4e22-4159-b794-1031c8aae80f"
Aug 13 00:19:44.466760 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a-shm.mount: Deactivated successfully.
Aug 13 00:19:44.606584 kubelet[3342]: I0813 00:19:44.606547 3342 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c"
Aug 13 00:19:44.608507 containerd[2020]: time="2025-08-13T00:19:44.607714408Z" level=info msg="StopPodSandbox for \"bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c\""
Aug 13 00:19:44.608507 containerd[2020]: time="2025-08-13T00:19:44.607996612Z" level=info msg="Ensure that sandbox bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c in task-service has been cleanup successfully"
Aug 13 00:19:44.610904 kubelet[3342]: I0813 00:19:44.610805 3342 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a"
Aug 13 00:19:44.614384 containerd[2020]: time="2025-08-13T00:19:44.614287420Z" level=info msg="StopPodSandbox for \"b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a\""
Aug 13 00:19:44.615706 containerd[2020]: time="2025-08-13T00:19:44.615412036Z" level=info msg="Ensure that sandbox b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a in task-service has been cleanup successfully"
Aug 13 00:19:44.620596 kubelet[3342]: I0813 00:19:44.620558 3342 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc"
Aug 13 00:19:44.623521 containerd[2020]: time="2025-08-13T00:19:44.623436916Z" level=info msg="StopPodSandbox for \"085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc\""
Aug 13 00:19:44.623962 containerd[2020]: time="2025-08-13T00:19:44.623787088Z" level=info msg="Ensure that sandbox 085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc in task-service has been cleanup successfully"
Aug 13 00:19:44.629590 containerd[2020]: time="2025-08-13T00:19:44.629215612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6mfph,Uid:4a84ee61-fe58-4097-b314-181a986dece7,Namespace:kube-system,Attempt:0,}"
Aug 13 00:19:44.632961 kubelet[3342]: I0813 00:19:44.632892 3342 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25"
Aug 13 00:19:44.637216 containerd[2020]: time="2025-08-13T00:19:44.636533116Z" level=info msg="StopPodSandbox for \"cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25\""
Aug 13 00:19:44.637216 containerd[2020]: time="2025-08-13T00:19:44.636817504Z" level=info msg="Ensure that sandbox cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25 in task-service has been cleanup successfully"
Aug 13 00:19:44.641399 kubelet[3342]: I0813 00:19:44.641359 3342 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe"
Aug 13 00:19:44.646752 containerd[2020]: time="2025-08-13T00:19:44.645060724Z" level=info msg="StopPodSandbox for \"4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe\""
Aug 13 00:19:44.650163 containerd[2020]: time="2025-08-13T00:19:44.650056804Z" level=info msg="Ensure that sandbox 4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe in task-service has been cleanup successfully"
Aug 13 00:19:44.655828 containerd[2020]: time="2025-08-13T00:19:44.655750216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wfn57,Uid:5a64667d-5251-47e9-8797-9a7e4c011870,Namespace:kube-system,Attempt:0,}"
Aug 13 00:19:44.658119 kubelet[3342]: I0813 00:19:44.657391 3342 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863"
Aug 13 00:19:44.661607 containerd[2020]: time="2025-08-13T00:19:44.661524232Z" level=info msg="StopPodSandbox for \"3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863\""
Aug 13 00:19:44.666633 containerd[2020]: time="2025-08-13T00:19:44.666576496Z" level=info msg="Ensure that sandbox 3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863 in task-service has been cleanup successfully"
Aug 13 00:19:44.801958 containerd[2020]: time="2025-08-13T00:19:44.801870821Z" level=error msg="StopPodSandbox for \"bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c\" failed" error="failed to destroy network for sandbox \"bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:19:44.802367 kubelet[3342]: E0813 00:19:44.802194 3342 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c"
Aug 13 00:19:44.802367 kubelet[3342]: E0813 00:19:44.802286 3342 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c"}
Aug 13 00:19:44.802723 kubelet[3342]: E0813 00:19:44.802371 3342 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5d408832-1099-4d99-a077-a600c984323a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Aug 13 00:19:44.802723 kubelet[3342]: E0813 00:19:44.802412 3342 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5d408832-1099-4d99-a077-a600c984323a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7689d867cd-nh2hh" podUID="5d408832-1099-4d99-a077-a600c984323a"
Aug 13 00:19:44.872096 containerd[2020]: time="2025-08-13T00:19:44.871914941Z" level=error msg="StopPodSandbox for \"b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a\" failed" error="failed to destroy network for sandbox \"b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:19:44.874207 kubelet[3342]: E0813 00:19:44.874040 3342 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a"
Aug 13 00:19:44.874207 kubelet[3342]: E0813 00:19:44.874114 3342 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a"}
Aug 13 00:19:44.874207 kubelet[3342]: E0813 00:19:44.874171 3342 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7cfa030b-4e22-4159-b794-1031c8aae80f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Aug 13 00:19:44.874549 kubelet[3342]: E0813 00:19:44.874210 3342 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7cfa030b-4e22-4159-b794-1031c8aae80f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vh7v8" podUID="7cfa030b-4e22-4159-b794-1031c8aae80f"
Aug 13 00:19:44.903145 containerd[2020]: time="2025-08-13T00:19:44.903075413Z" level=error msg="StopPodSandbox for \"4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe\" failed" error="failed to destroy network for sandbox \"4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:19:44.903677 kubelet[3342]: E0813 00:19:44.903614 3342 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe"
Aug 13 00:19:44.903789 kubelet[3342]: E0813 00:19:44.903696 3342 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe"}
Aug 13 00:19:44.903789 kubelet[3342]: E0813 00:19:44.903754 3342 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1d377c06-c55d-4e39-863b-173de05fa641\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Aug 13 00:19:44.903981 kubelet[3342]: E0813 00:19:44.903793 3342 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1d377c06-c55d-4e39-863b-173de05fa641\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bd6797c4b-r7mhx" podUID="1d377c06-c55d-4e39-863b-173de05fa641"
Aug 13 00:19:44.907013 containerd[2020]: time="2025-08-13T00:19:44.906723545Z" level=error msg="StopPodSandbox for \"cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25\" failed" error="failed to destroy network for sandbox \"cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:19:44.907872 kubelet[3342]: E0813 00:19:44.907618 3342 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25"
Aug 13 00:19:44.907872 kubelet[3342]: E0813 00:19:44.907690 3342 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25"}
Aug 13 00:19:44.907872 kubelet[3342]: E0813 00:19:44.907743 3342 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3467894e-7a39-43fd-92a5-fb1ca4e54ea3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Aug 13 00:19:44.907872 kubelet[3342]: E0813 00:19:44.907784 3342 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3467894e-7a39-43fd-92a5-fb1ca4e54ea3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-dc6d9647-98mdm" podUID="3467894e-7a39-43fd-92a5-fb1ca4e54ea3"
Aug 13 00:19:44.908260 containerd[2020]: time="2025-08-13T00:19:44.908108609Z" level=error msg="StopPodSandbox for \"085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc\" failed" error="failed to destroy network for sandbox \"085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:19:44.908556 kubelet[3342]: E0813 00:19:44.908400 3342 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc"
Aug 13 00:19:44.908556 kubelet[3342]: E0813 00:19:44.908528 3342 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc"}
Aug 13 00:19:44.909112 kubelet[3342]: E0813 00:19:44.908688 3342 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"49aa71c9-5b84-402e-9316-844c45ada5f3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Aug 13 00:19:44.909112 kubelet[3342]: E0813 00:19:44.908731 3342 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"49aa71c9-5b84-402e-9316-844c45ada5f3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-nvbpr" podUID="49aa71c9-5b84-402e-9316-844c45ada5f3"
Aug 13 00:19:44.923415 containerd[2020]: time="2025-08-13T00:19:44.923054405Z" level=error msg="StopPodSandbox for \"3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863\" failed" error="failed to destroy network for sandbox \"3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:19:44.923713 kubelet[3342]: E0813 00:19:44.923653 3342 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863"
Aug 13 00:19:44.923888 kubelet[3342]: E0813 00:19:44.923728 3342 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863"}
Aug 13 00:19:44.923888 kubelet[3342]: E0813 00:19:44.923791 3342 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"28081b87-c735-43fc-9236-d52ebf6d339c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Aug 13 00:19:44.923888 kubelet[3342]: E0813 00:19:44.923831 3342 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"28081b87-c735-43fc-9236-d52ebf6d339c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bd6797c4b-4bgrn" podUID="28081b87-c735-43fc-9236-d52ebf6d339c"
Aug 13 00:19:45.031497 containerd[2020]: time="2025-08-13T00:19:45.030492686Z" level=error msg="Failed to destroy network for sandbox \"679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:19:45.034098 containerd[2020]: time="2025-08-13T00:19:45.033995834Z" level=error msg="encountered an error cleaning up failed sandbox \"679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:19:45.034223 containerd[2020]: time="2025-08-13T00:19:45.034116290Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wfn57,Uid:5a64667d-5251-47e9-8797-9a7e4c011870,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:19:45.036532 kubelet[3342]: E0813 00:19:45.035825 3342 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:19:45.036532 kubelet[3342]: E0813 00:19:45.035909 3342 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wfn57"
Aug 13 00:19:45.036532 kubelet[3342]: E0813 00:19:45.035945 3342 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wfn57"
Aug 13 00:19:45.039928 kubelet[3342]: E0813 00:19:45.036050 3342 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-wfn57_kube-system(5a64667d-5251-47e9-8797-9a7e4c011870)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-wfn57_kube-system(5a64667d-5251-47e9-8797-9a7e4c011870)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wfn57" podUID="5a64667d-5251-47e9-8797-9a7e4c011870"
Aug 13 00:19:45.038793 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447-shm.mount: Deactivated successfully.
Aug 13 00:19:45.054235 containerd[2020]: time="2025-08-13T00:19:45.054172658Z" level=error msg="Failed to destroy network for sandbox \"06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:19:45.055981 containerd[2020]: time="2025-08-13T00:19:45.055915118Z" level=error msg="encountered an error cleaning up failed sandbox \"06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:19:45.056204 containerd[2020]: time="2025-08-13T00:19:45.056161442Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6mfph,Uid:4a84ee61-fe58-4097-b314-181a986dece7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:19:45.059280 kubelet[3342]: E0813 00:19:45.056647 3342 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:19:45.059280 kubelet[3342]: E0813 00:19:45.056741 3342 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6mfph"
Aug 13 00:19:45.059280 kubelet[3342]: E0813 00:19:45.056775 3342 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6mfph"
Aug 13 00:19:45.059001 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8-shm.mount: Deactivated successfully.
Aug 13 00:19:45.059693 kubelet[3342]: E0813 00:19:45.056838 3342 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-6mfph_kube-system(4a84ee61-fe58-4097-b314-181a986dece7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-6mfph_kube-system(4a84ee61-fe58-4097-b314-181a986dece7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6mfph" podUID="4a84ee61-fe58-4097-b314-181a986dece7"
Aug 13 00:19:45.666944 kubelet[3342]: I0813 00:19:45.666884 3342 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8"
Aug 13 00:19:45.668315 containerd[2020]: time="2025-08-13T00:19:45.668248073Z" level=info msg="StopPodSandbox for \"06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8\""
Aug 13 00:19:45.668876 containerd[2020]: time="2025-08-13T00:19:45.668630717Z" level=info msg="Ensure that sandbox 06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8 in task-service has been cleanup successfully"
Aug 13 00:19:45.676272 kubelet[3342]: I0813 00:19:45.674035 3342 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447"
Aug 13 00:19:45.676407 containerd[2020]: time="2025-08-13T00:19:45.675310361Z" level=info msg="StopPodSandbox for \"679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447\""
Aug 13 00:19:45.677210 containerd[2020]: time="2025-08-13T00:19:45.677020781Z" level=info msg="Ensure that sandbox 679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447 in task-service has been cleanup successfully"
Aug 13 00:19:45.747720 containerd[2020]: time="2025-08-13T00:19:45.747643997Z" level=error msg="StopPodSandbox for \"679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447\" failed" error="failed to destroy network for sandbox \"679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:19:45.748178 kubelet[3342]: E0813 00:19:45.747982 3342 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447"
Aug 13 00:19:45.748178 kubelet[3342]: E0813 00:19:45.748072 3342 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447"}
Aug 13 00:19:45.748562 kubelet[3342]: E0813 00:19:45.748231 3342 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5a64667d-5251-47e9-8797-9a7e4c011870\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Aug 13 00:19:45.748562 kubelet[3342]: E0813 00:19:45.748445 3342 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5a64667d-5251-47e9-8797-9a7e4c011870\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wfn57" podUID="5a64667d-5251-47e9-8797-9a7e4c011870"
Aug 13 00:19:45.761616 containerd[2020]: time="2025-08-13T00:19:45.761539709Z" level=error msg="StopPodSandbox for \"06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8\" failed" error="failed to destroy network for sandbox \"06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:19:45.761873 kubelet[3342]: E0813 00:19:45.761818 3342 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8"
Aug 13 00:19:45.761951 kubelet[3342]: E0813 00:19:45.761892 3342 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8"}
Aug 13 00:19:45.762019 kubelet[3342]: E0813 00:19:45.761957 3342 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4a84ee61-fe58-4097-b314-181a986dece7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Aug 13 00:19:45.762019 kubelet[3342]: E0813 00:19:45.761996 3342 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4a84ee61-fe58-4097-b314-181a986dece7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6mfph" podUID="4a84ee61-fe58-4097-b314-181a986dece7"
Aug 13 00:19:51.323171 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1659778966.mount: Deactivated successfully.
Aug 13 00:19:51.405697 containerd[2020]: time="2025-08-13T00:19:51.405600393Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:51.407763 containerd[2020]: time="2025-08-13T00:19:51.407675121Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Aug 13 00:19:51.410410 containerd[2020]: time="2025-08-13T00:19:51.410335833Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:51.415267 containerd[2020]: time="2025-08-13T00:19:51.415197753Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:51.416989 containerd[2020]: time="2025-08-13T00:19:51.416782089Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 7.809615362s" Aug 13 00:19:51.416989 containerd[2020]: time="2025-08-13T00:19:51.416842101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Aug 13 00:19:51.457409 containerd[2020]: time="2025-08-13T00:19:51.457090810Z" level=info msg="CreateContainer within sandbox \"709d06bb8cc38e2a23cc67342969a36583cc3617884df775b4ebe27a3c36ddae\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 00:19:51.491435 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2727512069.mount: Deactivated successfully. Aug 13 00:19:51.497951 containerd[2020]: time="2025-08-13T00:19:51.497762554Z" level=info msg="CreateContainer within sandbox \"709d06bb8cc38e2a23cc67342969a36583cc3617884df775b4ebe27a3c36ddae\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"712b7c19a42956014e0b071ef2655bc2219235450fad87284880d7cd7032943e\"" Aug 13 00:19:51.500376 containerd[2020]: time="2025-08-13T00:19:51.498857338Z" level=info msg="StartContainer for \"712b7c19a42956014e0b071ef2655bc2219235450fad87284880d7cd7032943e\"" Aug 13 00:19:51.544747 systemd[1]: Started cri-containerd-712b7c19a42956014e0b071ef2655bc2219235450fad87284880d7cd7032943e.scope - libcontainer container 712b7c19a42956014e0b071ef2655bc2219235450fad87284880d7cd7032943e. 
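Two quick checks on the figures above and on the startup-latency record that follows: the pull moved 152544909 bytes in 7.809615362s, roughly 18.6 MiB/s, and kubelet's podStartSLOduration is the end-to-end startup time minus the image-pull window, which the m=+ monotonic offsets reproduce exactly:

```go
// startup_math.go — back-of-envelope arithmetic on the numbers logged above
// and in the pod_startup_latency_tracker record just below.
package main

import "fmt"

func main() {
	const bytesRead = 152544909.0 // "bytes read=152544909"
	const pullSecs = 7.809615362  // "... in 7.809615362s"
	fmt.Printf("pull throughput: %.1f MiB/s\n", bytesRead/(1024*1024)/pullSecs)

	// monotonic clock offsets (m=+...) from the latency-tracker record
	const firstStartedPulling = 31.720192743
	const lastFinishedPulling = 50.401079783
	const podStartE2E = 20.745765187
	fmt.Printf("podStartSLOduration: %.9f\n",
		podStartE2E-(lastFinishedPulling-firstStartedPulling))
	// prints 2.064878147 — exactly the value kubelet logs for calico-node-96q5g
}
```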
Aug 13 00:19:51.613227 containerd[2020]: time="2025-08-13T00:19:51.613095454Z" level=info msg="StartContainer for \"712b7c19a42956014e0b071ef2655bc2219235450fad87284880d7cd7032943e\" returns successfully" Aug 13 00:19:51.746862 kubelet[3342]: I0813 00:19:51.745792 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-96q5g" podStartSLOduration=2.064878147 podStartE2EDuration="20.745765187s" podCreationTimestamp="2025-08-13 00:19:31 +0000 UTC" firstStartedPulling="2025-08-13 00:19:32.737679749 +0000 UTC m=+31.720192743" lastFinishedPulling="2025-08-13 00:19:51.418566801 +0000 UTC m=+50.401079783" observedRunningTime="2025-08-13 00:19:51.740557067 +0000 UTC m=+50.723070097" watchObservedRunningTime="2025-08-13 00:19:51.745765187 +0000 UTC m=+50.728278169" Aug 13 00:19:51.976419 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 13 00:19:51.976910 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Aug 13 00:19:52.172912 containerd[2020]: time="2025-08-13T00:19:52.172840425Z" level=info msg="StopPodSandbox for \"cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25\"" Aug 13 00:19:52.517141 containerd[2020]: 2025-08-13 00:19:52.358 [INFO][4554] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25" Aug 13 00:19:52.517141 containerd[2020]: 2025-08-13 00:19:52.361 [INFO][4554] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25" iface="eth0" netns="/var/run/netns/cni-14547cad-f713-0767-de1e-6bb2bd662080" Aug 13 00:19:52.517141 containerd[2020]: 2025-08-13 00:19:52.362 [INFO][4554] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25" iface="eth0" netns="/var/run/netns/cni-14547cad-f713-0767-de1e-6bb2bd662080" Aug 13 00:19:52.517141 containerd[2020]: 2025-08-13 00:19:52.365 [INFO][4554] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25" iface="eth0" netns="/var/run/netns/cni-14547cad-f713-0767-de1e-6bb2bd662080" Aug 13 00:19:52.517141 containerd[2020]: 2025-08-13 00:19:52.365 [INFO][4554] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25" Aug 13 00:19:52.517141 containerd[2020]: 2025-08-13 00:19:52.365 [INFO][4554] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25" Aug 13 00:19:52.517141 containerd[2020]: 2025-08-13 00:19:52.474 [INFO][4569] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25" HandleID="k8s-pod-network.cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25" Workload="ip--172--31--31--36-k8s-whisker--dc6d9647--98mdm-eth0" Aug 13 00:19:52.517141 containerd[2020]: 2025-08-13 00:19:52.475 [INFO][4569] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:19:52.517141 containerd[2020]: 2025-08-13 00:19:52.475 [INFO][4569] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:19:52.517141 containerd[2020]: 2025-08-13 00:19:52.497 [WARNING][4569] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist.
Ignoring ContainerID="cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25" HandleID="k8s-pod-network.cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25" Workload="ip--172--31--31--36-k8s-whisker--dc6d9647--98mdm-eth0" Aug 13 00:19:52.517141 containerd[2020]: 2025-08-13 00:19:52.497 [INFO][4569] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25" HandleID="k8s-pod-network.cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25" Workload="ip--172--31--31--36-k8s-whisker--dc6d9647--98mdm-eth0" Aug 13 00:19:52.517141 containerd[2020]: 2025-08-13 00:19:52.502 [INFO][4569] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:19:52.517141 containerd[2020]: 2025-08-13 00:19:52.510 [INFO][4554] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25" Aug 13 00:19:52.517141 containerd[2020]: time="2025-08-13T00:19:52.514307555Z" level=info msg="TearDown network for sandbox \"cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25\" successfully" Aug 13 00:19:52.517141 containerd[2020]: time="2025-08-13T00:19:52.514348475Z" level=info msg="StopPodSandbox for \"cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25\" returns successfully" Aug 13 00:19:52.520862 systemd[1]: run-netns-cni\x2d14547cad\x2df713\x2d0767\x2dde1e\x2d6bb2bd662080.mount: Deactivated successfully. Aug 13 00:19:52.609176 kubelet[3342]: I0813 00:19:52.609096 3342 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lch7f\" (UniqueName: \"kubernetes.io/projected/3467894e-7a39-43fd-92a5-fb1ca4e54ea3-kube-api-access-lch7f\") pod \"3467894e-7a39-43fd-92a5-fb1ca4e54ea3\" (UID: \"3467894e-7a39-43fd-92a5-fb1ca4e54ea3\") " Aug 13 00:19:52.609876 kubelet[3342]: I0813 00:19:52.609185 3342 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3467894e-7a39-43fd-92a5-fb1ca4e54ea3-whisker-ca-bundle\") pod \"3467894e-7a39-43fd-92a5-fb1ca4e54ea3\" (UID: \"3467894e-7a39-43fd-92a5-fb1ca4e54ea3\") " Aug 13 00:19:52.609876 kubelet[3342]: I0813 00:19:52.609254 3342 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3467894e-7a39-43fd-92a5-fb1ca4e54ea3-whisker-backend-key-pair\") pod \"3467894e-7a39-43fd-92a5-fb1ca4e54ea3\" (UID: \"3467894e-7a39-43fd-92a5-fb1ca4e54ea3\") " Aug 13 00:19:52.614232 kubelet[3342]: I0813 00:19:52.614165 3342 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3467894e-7a39-43fd-92a5-fb1ca4e54ea3-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "3467894e-7a39-43fd-92a5-fb1ca4e54ea3" (UID: "3467894e-7a39-43fd-92a5-fb1ca4e54ea3"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 00:19:52.621323 kubelet[3342]: I0813 00:19:52.621247 3342 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3467894e-7a39-43fd-92a5-fb1ca4e54ea3-kube-api-access-lch7f" (OuterVolumeSpecName: "kube-api-access-lch7f") pod "3467894e-7a39-43fd-92a5-fb1ca4e54ea3" (UID: "3467894e-7a39-43fd-92a5-fb1ca4e54ea3"). InnerVolumeSpecName "kube-api-access-lch7f". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:19:52.623840 kubelet[3342]: I0813 00:19:52.623759 3342 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3467894e-7a39-43fd-92a5-fb1ca4e54ea3-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "3467894e-7a39-43fd-92a5-fb1ca4e54ea3" (UID: "3467894e-7a39-43fd-92a5-fb1ca4e54ea3"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 00:19:52.624367 systemd[1]: var-lib-kubelet-pods-3467894e\x2d7a39\x2d43fd\x2d92a5\x2dfb1ca4e54ea3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlch7f.mount: Deactivated successfully. Aug 13 00:19:52.624702 systemd[1]: var-lib-kubelet-pods-3467894e\x2d7a39\x2d43fd\x2d92a5\x2dfb1ca4e54ea3-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Aug 13 00:19:52.710869 kubelet[3342]: I0813 00:19:52.710426 3342 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lch7f\" (UniqueName: \"kubernetes.io/projected/3467894e-7a39-43fd-92a5-fb1ca4e54ea3-kube-api-access-lch7f\") on node \"ip-172-31-31-36\" DevicePath \"\"" Aug 13 00:19:52.711581 kubelet[3342]: I0813 00:19:52.710982 3342 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3467894e-7a39-43fd-92a5-fb1ca4e54ea3-whisker-ca-bundle\") on node \"ip-172-31-31-36\" DevicePath \"\"" Aug 13 00:19:52.711581 kubelet[3342]: I0813 00:19:52.711011 3342 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3467894e-7a39-43fd-92a5-fb1ca4e54ea3-whisker-backend-key-pair\") on node \"ip-172-31-31-36\" DevicePath \"\"" Aug 13 00:19:52.723169 systemd[1]: Removed slice kubepods-besteffort-pod3467894e_7a39_43fd_92a5_fb1ca4e54ea3.slice - libcontainer container kubepods-besteffort-pod3467894e_7a39_43fd_92a5_fb1ca4e54ea3.slice. Aug 13 00:19:52.891547 systemd[1]: Created slice kubepods-besteffort-podc8a4d4b6_03a1_4e5d_94e3_fdb03e14b30a.slice - libcontainer container kubepods-besteffort-podc8a4d4b6_03a1_4e5d_94e3_fdb03e14b30a.slice. 
Aug 13 00:19:53.016809 kubelet[3342]: I0813 00:19:53.016697 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c8a4d4b6-03a1-4e5d-94e3-fdb03e14b30a-whisker-backend-key-pair\") pod \"whisker-864598d977-rp5hw\" (UID: \"c8a4d4b6-03a1-4e5d-94e3-fdb03e14b30a\") " pod="calico-system/whisker-864598d977-rp5hw" Aug 13 00:19:53.016809 kubelet[3342]: I0813 00:19:53.016772 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmvff\" (UniqueName: \"kubernetes.io/projected/c8a4d4b6-03a1-4e5d-94e3-fdb03e14b30a-kube-api-access-gmvff\") pod \"whisker-864598d977-rp5hw\" (UID: \"c8a4d4b6-03a1-4e5d-94e3-fdb03e14b30a\") " pod="calico-system/whisker-864598d977-rp5hw" Aug 13 00:19:53.017421 kubelet[3342]: I0813 00:19:53.016898 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8a4d4b6-03a1-4e5d-94e3-fdb03e14b30a-whisker-ca-bundle\") pod \"whisker-864598d977-rp5hw\" (UID: \"c8a4d4b6-03a1-4e5d-94e3-fdb03e14b30a\") " pod="calico-system/whisker-864598d977-rp5hw" Aug 13 00:19:53.206045 containerd[2020]: time="2025-08-13T00:19:53.205798198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-864598d977-rp5hw,Uid:c8a4d4b6-03a1-4e5d-94e3-fdb03e14b30a,Namespace:calico-system,Attempt:0,}" Aug 13 00:19:53.317973 kubelet[3342]: I0813 00:19:53.317567 3342 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3467894e-7a39-43fd-92a5-fb1ca4e54ea3" path="/var/lib/kubelet/pods/3467894e-7a39-43fd-92a5-fb1ca4e54ea3/volumes" Aug 13 00:19:53.429690 (udev-worker)[4539]: Network interface NamePolicy= disabled on kernel command line. 
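Two record styles interleave throughout this capture: klog lines from kubelet (I0813 00:19:53.016697 3342 file.go:line] ...) and logfmt lines from containerd (time=... level=... msg=...). A small, illustrative classifier for grepping a capture like this one; the regexes are assumptions tuned only to the samples here:

```go
// logscan.go — a sketch that tells kubelet klog records apart from
// containerd logfmt records in this boot log.
package main

import (
	"fmt"
	"regexp"
)

var (
	klogRe   = regexp.MustCompile(`^([IWEF])(\d{4} \d{2}:\d{2}:\d{2}\.\d+)\s+\d+\s+(\S+)\] (.*)$`)
	logfmtRe = regexp.MustCompile(`level=(\w+) msg="((?:[^"\\]|\\.)*)"`)
)

func classify(rec string) string {
	if m := klogRe.FindStringSubmatch(rec); m != nil {
		return fmt.Sprintf("kubelet %s %s: %s", m[1], m[3], m[4])
	}
	if m := logfmtRe.FindStringSubmatch(rec); m != nil {
		return fmt.Sprintf("containerd %s: %s", m[1], m[2])
	}
	return "other: " + rec
}

func main() {
	fmt.Println(classify(`I0813 00:19:53.016697 3342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started"`))
	fmt.Println(classify(`time="2025-08-13T00:19:53.205798198Z" level=info msg="RunPodSandbox for whisker-864598d977-rp5hw"`))
}
```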
Aug 13 00:19:53.432866 systemd-networkd[1936]: calia8c0aa0d297: Link UP Aug 13 00:19:53.434953 systemd-networkd[1936]: calia8c0aa0d297: Gained carrier Aug 13 00:19:53.467918 containerd[2020]: 2025-08-13 00:19:53.267 [INFO][4615] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:19:53.467918 containerd[2020]: 2025-08-13 00:19:53.289 [INFO][4615] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--36-k8s-whisker--864598d977--rp5hw-eth0 whisker-864598d977- calico-system c8a4d4b6-03a1-4e5d-94e3-fdb03e14b30a 949 0 2025-08-13 00:19:52 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:864598d977 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-31-36 whisker-864598d977-rp5hw eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia8c0aa0d297 [] [] }} ContainerID="cba175acab0db722a627c457589c1eaf9108b2bfa4fbdea0ec4606240ebd0960" Namespace="calico-system" Pod="whisker-864598d977-rp5hw" WorkloadEndpoint="ip--172--31--31--36-k8s-whisker--864598d977--rp5hw-" Aug 13 00:19:53.467918 containerd[2020]: 2025-08-13 00:19:53.290 [INFO][4615] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cba175acab0db722a627c457589c1eaf9108b2bfa4fbdea0ec4606240ebd0960" Namespace="calico-system" Pod="whisker-864598d977-rp5hw" WorkloadEndpoint="ip--172--31--31--36-k8s-whisker--864598d977--rp5hw-eth0" Aug 13 00:19:53.467918 containerd[2020]: 2025-08-13 00:19:53.349 [INFO][4626] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cba175acab0db722a627c457589c1eaf9108b2bfa4fbdea0ec4606240ebd0960" HandleID="k8s-pod-network.cba175acab0db722a627c457589c1eaf9108b2bfa4fbdea0ec4606240ebd0960" Workload="ip--172--31--31--36-k8s-whisker--864598d977--rp5hw-eth0" Aug 13 00:19:53.467918 containerd[2020]: 2025-08-13 00:19:53.349 [INFO][4626] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cba175acab0db722a627c457589c1eaf9108b2bfa4fbdea0ec4606240ebd0960" HandleID="k8s-pod-network.cba175acab0db722a627c457589c1eaf9108b2bfa4fbdea0ec4606240ebd0960" Workload="ip--172--31--31--36-k8s-whisker--864598d977--rp5hw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb640), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-31-36", "pod":"whisker-864598d977-rp5hw", "timestamp":"2025-08-13 00:19:53.349027679 +0000 UTC"}, Hostname:"ip-172-31-31-36", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:19:53.467918 containerd[2020]: 2025-08-13 00:19:53.349 [INFO][4626] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:19:53.467918 containerd[2020]: 2025-08-13 00:19:53.349 [INFO][4626] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:19:53.467918 containerd[2020]: 2025-08-13 00:19:53.349 [INFO][4626] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-36' Aug 13 00:19:53.467918 containerd[2020]: 2025-08-13 00:19:53.363 [INFO][4626] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cba175acab0db722a627c457589c1eaf9108b2bfa4fbdea0ec4606240ebd0960" host="ip-172-31-31-36" Aug 13 00:19:53.467918 containerd[2020]: 2025-08-13 00:19:53.372 [INFO][4626] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-36" Aug 13 00:19:53.467918 containerd[2020]: 2025-08-13 00:19:53.384 [INFO][4626] ipam/ipam.go 511: Trying affinity for 192.168.99.64/26 host="ip-172-31-31-36" Aug 13 00:19:53.467918 containerd[2020]: 2025-08-13 00:19:53.392 [INFO][4626] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.64/26 host="ip-172-31-31-36" Aug 13 00:19:53.467918 containerd[2020]: 2025-08-13 00:19:53.396 [INFO][4626] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.64/26 host="ip-172-31-31-36" Aug 13 00:19:53.467918 containerd[2020]: 2025-08-13 00:19:53.397 [INFO][4626] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.99.64/26 handle="k8s-pod-network.cba175acab0db722a627c457589c1eaf9108b2bfa4fbdea0ec4606240ebd0960" host="ip-172-31-31-36" Aug 13 00:19:53.467918 containerd[2020]: 2025-08-13 00:19:53.399 [INFO][4626] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cba175acab0db722a627c457589c1eaf9108b2bfa4fbdea0ec4606240ebd0960 Aug 13 00:19:53.467918 containerd[2020]: 2025-08-13 00:19:53.407 [INFO][4626] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.99.64/26 handle="k8s-pod-network.cba175acab0db722a627c457589c1eaf9108b2bfa4fbdea0ec4606240ebd0960" host="ip-172-31-31-36" Aug 13 00:19:53.467918 containerd[2020]: 2025-08-13 00:19:53.415 [INFO][4626] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.99.65/26] block=192.168.99.64/26 handle="k8s-pod-network.cba175acab0db722a627c457589c1eaf9108b2bfa4fbdea0ec4606240ebd0960" host="ip-172-31-31-36" Aug 13 00:19:53.467918 containerd[2020]: 2025-08-13 00:19:53.415 [INFO][4626] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.65/26] handle="k8s-pod-network.cba175acab0db722a627c457589c1eaf9108b2bfa4fbdea0ec4606240ebd0960" host="ip-172-31-31-36" Aug 13 00:19:53.467918 containerd[2020]: 2025-08-13 00:19:53.415 [INFO][4626] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:19:53.467918 containerd[2020]: 2025-08-13 00:19:53.415 [INFO][4626] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.65/26] IPv6=[] ContainerID="cba175acab0db722a627c457589c1eaf9108b2bfa4fbdea0ec4606240ebd0960" HandleID="k8s-pod-network.cba175acab0db722a627c457589c1eaf9108b2bfa4fbdea0ec4606240ebd0960" Workload="ip--172--31--31--36-k8s-whisker--864598d977--rp5hw-eth0" Aug 13 00:19:53.470185 containerd[2020]: 2025-08-13 00:19:53.419 [INFO][4615] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cba175acab0db722a627c457589c1eaf9108b2bfa4fbdea0ec4606240ebd0960" Namespace="calico-system" Pod="whisker-864598d977-rp5hw" WorkloadEndpoint="ip--172--31--31--36-k8s-whisker--864598d977--rp5hw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--36-k8s-whisker--864598d977--rp5hw-eth0", GenerateName:"whisker-864598d977-", Namespace:"calico-system", SelfLink:"", UID:"c8a4d4b6-03a1-4e5d-94e3-fdb03e14b30a", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 19, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"864598d977", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-36", ContainerID:"", Pod:"whisker-864598d977-rp5hw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.99.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia8c0aa0d297", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:19:53.470185 containerd[2020]: 2025-08-13 00:19:53.419 [INFO][4615] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.65/32] ContainerID="cba175acab0db722a627c457589c1eaf9108b2bfa4fbdea0ec4606240ebd0960" Namespace="calico-system" Pod="whisker-864598d977-rp5hw" WorkloadEndpoint="ip--172--31--31--36-k8s-whisker--864598d977--rp5hw-eth0" Aug 13 00:19:53.470185 containerd[2020]: 2025-08-13 00:19:53.420 [INFO][4615] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia8c0aa0d297 ContainerID="cba175acab0db722a627c457589c1eaf9108b2bfa4fbdea0ec4606240ebd0960" Namespace="calico-system" Pod="whisker-864598d977-rp5hw" WorkloadEndpoint="ip--172--31--31--36-k8s-whisker--864598d977--rp5hw-eth0" Aug 13 00:19:53.470185 containerd[2020]: 2025-08-13 00:19:53.435 [INFO][4615] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cba175acab0db722a627c457589c1eaf9108b2bfa4fbdea0ec4606240ebd0960" Namespace="calico-system" Pod="whisker-864598d977-rp5hw" WorkloadEndpoint="ip--172--31--31--36-k8s-whisker--864598d977--rp5hw-eth0" Aug 13 00:19:53.470185 containerd[2020]: 2025-08-13 00:19:53.436 [INFO][4615] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cba175acab0db722a627c457589c1eaf9108b2bfa4fbdea0ec4606240ebd0960" Namespace="calico-system" Pod="whisker-864598d977-rp5hw" 
WorkloadEndpoint="ip--172--31--31--36-k8s-whisker--864598d977--rp5hw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--36-k8s-whisker--864598d977--rp5hw-eth0", GenerateName:"whisker-864598d977-", Namespace:"calico-system", SelfLink:"", UID:"c8a4d4b6-03a1-4e5d-94e3-fdb03e14b30a", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 19, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"864598d977", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-36", ContainerID:"cba175acab0db722a627c457589c1eaf9108b2bfa4fbdea0ec4606240ebd0960", Pod:"whisker-864598d977-rp5hw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.99.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia8c0aa0d297", MAC:"e2:05:9e:85:17:01", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:19:53.470185 containerd[2020]: 2025-08-13 00:19:53.458 [INFO][4615] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cba175acab0db722a627c457589c1eaf9108b2bfa4fbdea0ec4606240ebd0960" Namespace="calico-system" Pod="whisker-864598d977-rp5hw" WorkloadEndpoint="ip--172--31--31--36-k8s-whisker--864598d977--rp5hw-eth0" Aug 13 00:19:53.499552 containerd[2020]: time="2025-08-13T00:19:53.499260768Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:19:53.502542 containerd[2020]: time="2025-08-13T00:19:53.500156628Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:19:53.502542 containerd[2020]: time="2025-08-13T00:19:53.500208504Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:19:53.502542 containerd[2020]: time="2025-08-13T00:19:53.500383152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:19:53.553792 systemd[1]: Started cri-containerd-cba175acab0db722a627c457589c1eaf9108b2bfa4fbdea0ec4606240ebd0960.scope - libcontainer container cba175acab0db722a627c457589c1eaf9108b2bfa4fbdea0ec4606240ebd0960. 
Aug 13 00:19:53.619224 containerd[2020]: time="2025-08-13T00:19:53.619073616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-864598d977-rp5hw,Uid:c8a4d4b6-03a1-4e5d-94e3-fdb03e14b30a,Namespace:calico-system,Attempt:0,} returns sandbox id \"cba175acab0db722a627c457589c1eaf9108b2bfa4fbdea0ec4606240ebd0960\"" Aug 13 00:19:53.623079 containerd[2020]: time="2025-08-13T00:19:53.623006424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 13 00:19:54.931499 kernel: bpftool[4824]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Aug 13 00:19:55.268952 systemd-networkd[1936]: vxlan.calico: Link UP Aug 13 00:19:55.268976 systemd-networkd[1936]: vxlan.calico: Gained carrier Aug 13 00:19:55.275645 systemd-networkd[1936]: calia8c0aa0d297: Gained IPv6LL Aug 13 00:19:55.307027 (udev-worker)[4540]: Network interface NamePolicy= disabled on kernel command line. Aug 13 00:19:56.045532 containerd[2020]: time="2025-08-13T00:19:56.045033900Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:56.047337 containerd[2020]: time="2025-08-13T00:19:56.047255292Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Aug 13 00:19:56.049928 containerd[2020]: time="2025-08-13T00:19:56.049847304Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:56.054971 containerd[2020]: time="2025-08-13T00:19:56.054900036Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:56.057563 containerd[2020]: time="2025-08-13T00:19:56.056604192Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 2.43351084s" Aug 13 00:19:56.057563 containerd[2020]: time="2025-08-13T00:19:56.056662932Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Aug 13 00:19:56.063494 containerd[2020]: time="2025-08-13T00:19:56.063142164Z" level=info msg="CreateContainer within sandbox \"cba175acab0db722a627c457589c1eaf9108b2bfa4fbdea0ec4606240ebd0960\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 13 00:19:56.098636 containerd[2020]: time="2025-08-13T00:19:56.098578189Z" level=info msg="CreateContainer within sandbox \"cba175acab0db722a627c457589c1eaf9108b2bfa4fbdea0ec4606240ebd0960\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"a2b7e17b41ce5758538f1bf8fb5216b7a7bcd7bc1a63eaf2429779c7fe7ef3d8\"" Aug 13 00:19:56.100423 containerd[2020]: time="2025-08-13T00:19:56.100197481Z" level=info msg="StartContainer for \"a2b7e17b41ce5758538f1bf8fb5216b7a7bcd7bc1a63eaf2429779c7fe7ef3d8\"" Aug 13 00:19:56.165814 systemd[1]: Started cri-containerd-a2b7e17b41ce5758538f1bf8fb5216b7a7bcd7bc1a63eaf2429779c7fe7ef3d8.scope - libcontainer container 
a2b7e17b41ce5758538f1bf8fb5216b7a7bcd7bc1a63eaf2429779c7fe7ef3d8. Aug 13 00:19:56.243704 containerd[2020]: time="2025-08-13T00:19:56.243525001Z" level=info msg="StartContainer for \"a2b7e17b41ce5758538f1bf8fb5216b7a7bcd7bc1a63eaf2429779c7fe7ef3d8\" returns successfully" Aug 13 00:19:56.248264 containerd[2020]: time="2025-08-13T00:19:56.246795673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 13 00:19:56.318588 containerd[2020]: time="2025-08-13T00:19:56.316972418Z" level=info msg="StopPodSandbox for \"679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447\"" Aug 13 00:19:56.318588 containerd[2020]: time="2025-08-13T00:19:56.318508742Z" level=info msg="StopPodSandbox for \"3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863\"" Aug 13 00:19:56.479630 systemd-networkd[1936]: vxlan.calico: Gained IPv6LL Aug 13 00:19:56.559652 containerd[2020]: 2025-08-13 00:19:56.449 [INFO][4958] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863" Aug 13 00:19:56.559652 containerd[2020]: 2025-08-13 00:19:56.450 [INFO][4958] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863" iface="eth0" netns="/var/run/netns/cni-71b52265-c8cf-aea5-68fb-b2c1a2a86df4" Aug 13 00:19:56.559652 containerd[2020]: 2025-08-13 00:19:56.451 [INFO][4958] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863" iface="eth0" netns="/var/run/netns/cni-71b52265-c8cf-aea5-68fb-b2c1a2a86df4" Aug 13 00:19:56.559652 containerd[2020]: 2025-08-13 00:19:56.451 [INFO][4958] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863" iface="eth0" netns="/var/run/netns/cni-71b52265-c8cf-aea5-68fb-b2c1a2a86df4" Aug 13 00:19:56.559652 containerd[2020]: 2025-08-13 00:19:56.452 [INFO][4958] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863" Aug 13 00:19:56.559652 containerd[2020]: 2025-08-13 00:19:56.452 [INFO][4958] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863" Aug 13 00:19:56.559652 containerd[2020]: 2025-08-13 00:19:56.519 [INFO][4973] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863" HandleID="k8s-pod-network.3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863" Workload="ip--172--31--31--36-k8s-calico--apiserver--bd6797c4b--4bgrn-eth0" Aug 13 00:19:56.559652 containerd[2020]: 2025-08-13 00:19:56.520 [INFO][4973] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:19:56.559652 containerd[2020]: 2025-08-13 00:19:56.520 [INFO][4973] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:19:56.559652 containerd[2020]: 2025-08-13 00:19:56.544 [WARNING][4973] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863" HandleID="k8s-pod-network.3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863" Workload="ip--172--31--31--36-k8s-calico--apiserver--bd6797c4b--4bgrn-eth0" Aug 13 00:19:56.559652 containerd[2020]: 2025-08-13 00:19:56.544 [INFO][4973] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863" HandleID="k8s-pod-network.3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863" Workload="ip--172--31--31--36-k8s-calico--apiserver--bd6797c4b--4bgrn-eth0" Aug 13 00:19:56.559652 containerd[2020]: 2025-08-13 00:19:56.552 [INFO][4973] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:19:56.559652 containerd[2020]: 2025-08-13 00:19:56.556 [INFO][4958] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863" Aug 13 00:19:56.559652 containerd[2020]: time="2025-08-13T00:19:56.559332519Z" level=info msg="TearDown network for sandbox \"3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863\" successfully" Aug 13 00:19:56.559652 containerd[2020]: time="2025-08-13T00:19:56.559373031Z" level=info msg="StopPodSandbox for \"3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863\" returns successfully" Aug 13 00:19:56.577705 containerd[2020]: time="2025-08-13T00:19:56.562324551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bd6797c4b-4bgrn,Uid:28081b87-c735-43fc-9236-d52ebf6d339c,Namespace:calico-apiserver,Attempt:1,}" Aug 13 00:19:56.567879 systemd[1]: run-netns-cni\x2d71b52265\x2dc8cf\x2daea5\x2d68fb\x2db2c1a2a86df4.mount: Deactivated successfully. Aug 13 00:19:56.596571 containerd[2020]: 2025-08-13 00:19:56.472 [INFO][4959] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447" Aug 13 00:19:56.596571 containerd[2020]: 2025-08-13 00:19:56.472 [INFO][4959] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447" iface="eth0" netns="/var/run/netns/cni-d9fd7a79-a3d7-5afe-c1bb-7c8efae76d85" Aug 13 00:19:56.596571 containerd[2020]: 2025-08-13 00:19:56.473 [INFO][4959] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447" iface="eth0" netns="/var/run/netns/cni-d9fd7a79-a3d7-5afe-c1bb-7c8efae76d85" Aug 13 00:19:56.596571 containerd[2020]: 2025-08-13 00:19:56.473 [INFO][4959] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447" iface="eth0" netns="/var/run/netns/cni-d9fd7a79-a3d7-5afe-c1bb-7c8efae76d85" Aug 13 00:19:56.596571 containerd[2020]: 2025-08-13 00:19:56.473 [INFO][4959] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447" Aug 13 00:19:56.596571 containerd[2020]: 2025-08-13 00:19:56.473 [INFO][4959] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447" Aug 13 00:19:56.596571 containerd[2020]: 2025-08-13 00:19:56.539 [INFO][4978] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447" HandleID="k8s-pod-network.679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447" Workload="ip--172--31--31--36-k8s-coredns--668d6bf9bc--wfn57-eth0" Aug 13 00:19:56.596571 containerd[2020]: 2025-08-13 00:19:56.540 [INFO][4978] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:19:56.596571 containerd[2020]: 2025-08-13 00:19:56.552 [INFO][4978] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:19:56.596571 containerd[2020]: 2025-08-13 00:19:56.581 [WARNING][4978] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447" HandleID="k8s-pod-network.679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447" Workload="ip--172--31--31--36-k8s-coredns--668d6bf9bc--wfn57-eth0" Aug 13 00:19:56.596571 containerd[2020]: 2025-08-13 00:19:56.583 [INFO][4978] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447" HandleID="k8s-pod-network.679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447" Workload="ip--172--31--31--36-k8s-coredns--668d6bf9bc--wfn57-eth0" Aug 13 00:19:56.596571 containerd[2020]: 2025-08-13 00:19:56.586 [INFO][4978] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:19:56.596571 containerd[2020]: 2025-08-13 00:19:56.589 [INFO][4959] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447" Aug 13 00:19:56.597528 containerd[2020]: time="2025-08-13T00:19:56.596842359Z" level=info msg="TearDown network for sandbox \"679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447\" successfully" Aug 13 00:19:56.597528 containerd[2020]: time="2025-08-13T00:19:56.596897319Z" level=info msg="StopPodSandbox for \"679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447\" returns successfully" Aug 13 00:19:56.598674 containerd[2020]: time="2025-08-13T00:19:56.598448199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wfn57,Uid:5a64667d-5251-47e9-8797-9a7e4c011870,Namespace:kube-system,Attempt:1,}" Aug 13 00:19:56.857577 systemd-networkd[1936]: cali5deeb3c7584: Link UP Aug 13 00:19:56.861552 systemd-networkd[1936]: cali5deeb3c7584: Gained carrier Aug 13 00:19:56.907367 containerd[2020]: 2025-08-13 00:19:56.683 [INFO][4987] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--36-k8s-calico--apiserver--bd6797c4b--4bgrn-eth0 calico-apiserver-bd6797c4b- calico-apiserver 28081b87-c735-43fc-9236-d52ebf6d339c 968 0 2025-08-13 00:19:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:bd6797c4b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-31-36 calico-apiserver-bd6797c4b-4bgrn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5deeb3c7584 [] [] }} ContainerID="b3720483090f6aae8f24555784e616c397f2d402558a49b884736d6da792ed17" Namespace="calico-apiserver" Pod="calico-apiserver-bd6797c4b-4bgrn" WorkloadEndpoint="ip--172--31--31--36-k8s-calico--apiserver--bd6797c4b--4bgrn-" Aug 13 00:19:56.907367 containerd[2020]: 2025-08-13 00:19:56.683 [INFO][4987] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b3720483090f6aae8f24555784e616c397f2d402558a49b884736d6da792ed17" Namespace="calico-apiserver" Pod="calico-apiserver-bd6797c4b-4bgrn" WorkloadEndpoint="ip--172--31--31--36-k8s-calico--apiserver--bd6797c4b--4bgrn-eth0" Aug 13 00:19:56.907367 containerd[2020]: 2025-08-13 00:19:56.764 [INFO][5010] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b3720483090f6aae8f24555784e616c397f2d402558a49b884736d6da792ed17" HandleID="k8s-pod-network.b3720483090f6aae8f24555784e616c397f2d402558a49b884736d6da792ed17" Workload="ip--172--31--31--36-k8s-calico--apiserver--bd6797c4b--4bgrn-eth0" Aug 13 00:19:56.907367 containerd[2020]: 2025-08-13 00:19:56.764 [INFO][5010] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b3720483090f6aae8f24555784e616c397f2d402558a49b884736d6da792ed17" HandleID="k8s-pod-network.b3720483090f6aae8f24555784e616c397f2d402558a49b884736d6da792ed17" Workload="ip--172--31--31--36-k8s-calico--apiserver--bd6797c4b--4bgrn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004de00), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-31-36", "pod":"calico-apiserver-bd6797c4b-4bgrn", "timestamp":"2025-08-13 00:19:56.764375956 +0000 UTC"}, Hostname:"ip-172-31-31-36", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:19:56.907367 
containerd[2020]: 2025-08-13 00:19:56.765 [INFO][5010] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:19:56.907367 containerd[2020]: 2025-08-13 00:19:56.765 [INFO][5010] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:19:56.907367 containerd[2020]: 2025-08-13 00:19:56.765 [INFO][5010] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-36' Aug 13 00:19:56.907367 containerd[2020]: 2025-08-13 00:19:56.789 [INFO][5010] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b3720483090f6aae8f24555784e616c397f2d402558a49b884736d6da792ed17" host="ip-172-31-31-36" Aug 13 00:19:56.907367 containerd[2020]: 2025-08-13 00:19:56.799 [INFO][5010] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-36" Aug 13 00:19:56.907367 containerd[2020]: 2025-08-13 00:19:56.809 [INFO][5010] ipam/ipam.go 511: Trying affinity for 192.168.99.64/26 host="ip-172-31-31-36" Aug 13 00:19:56.907367 containerd[2020]: 2025-08-13 00:19:56.814 [INFO][5010] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.64/26 host="ip-172-31-31-36" Aug 13 00:19:56.907367 containerd[2020]: 2025-08-13 00:19:56.818 [INFO][5010] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.64/26 host="ip-172-31-31-36" Aug 13 00:19:56.907367 containerd[2020]: 2025-08-13 00:19:56.819 [INFO][5010] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.99.64/26 handle="k8s-pod-network.b3720483090f6aae8f24555784e616c397f2d402558a49b884736d6da792ed17" host="ip-172-31-31-36" Aug 13 00:19:56.907367 containerd[2020]: 2025-08-13 00:19:56.822 [INFO][5010] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b3720483090f6aae8f24555784e616c397f2d402558a49b884736d6da792ed17 Aug 13 00:19:56.907367 containerd[2020]: 2025-08-13 00:19:56.831 [INFO][5010] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.99.64/26 handle="k8s-pod-network.b3720483090f6aae8f24555784e616c397f2d402558a49b884736d6da792ed17" host="ip-172-31-31-36" Aug 13 00:19:56.907367 containerd[2020]: 2025-08-13 00:19:56.841 [INFO][5010] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.99.66/26] block=192.168.99.64/26 handle="k8s-pod-network.b3720483090f6aae8f24555784e616c397f2d402558a49b884736d6da792ed17" host="ip-172-31-31-36" Aug 13 00:19:56.907367 containerd[2020]: 2025-08-13 00:19:56.842 [INFO][5010] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.66/26] handle="k8s-pod-network.b3720483090f6aae8f24555784e616c397f2d402558a49b884736d6da792ed17" host="ip-172-31-31-36" Aug 13 00:19:56.907367 containerd[2020]: 2025-08-13 00:19:56.842 [INFO][5010] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:19:56.907367 containerd[2020]: 2025-08-13 00:19:56.842 [INFO][5010] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.66/26] IPv6=[] ContainerID="b3720483090f6aae8f24555784e616c397f2d402558a49b884736d6da792ed17" HandleID="k8s-pod-network.b3720483090f6aae8f24555784e616c397f2d402558a49b884736d6da792ed17" Workload="ip--172--31--31--36-k8s-calico--apiserver--bd6797c4b--4bgrn-eth0" Aug 13 00:19:56.911001 containerd[2020]: 2025-08-13 00:19:56.850 [INFO][4987] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b3720483090f6aae8f24555784e616c397f2d402558a49b884736d6da792ed17" Namespace="calico-apiserver" Pod="calico-apiserver-bd6797c4b-4bgrn" WorkloadEndpoint="ip--172--31--31--36-k8s-calico--apiserver--bd6797c4b--4bgrn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--36-k8s-calico--apiserver--bd6797c4b--4bgrn-eth0", GenerateName:"calico-apiserver-bd6797c4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"28081b87-c735-43fc-9236-d52ebf6d339c", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 19, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bd6797c4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-36", ContainerID:"", Pod:"calico-apiserver-bd6797c4b-4bgrn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5deeb3c7584", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:19:56.911001 containerd[2020]: 2025-08-13 00:19:56.851 [INFO][4987] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.66/32] ContainerID="b3720483090f6aae8f24555784e616c397f2d402558a49b884736d6da792ed17" Namespace="calico-apiserver" Pod="calico-apiserver-bd6797c4b-4bgrn" WorkloadEndpoint="ip--172--31--31--36-k8s-calico--apiserver--bd6797c4b--4bgrn-eth0" Aug 13 00:19:56.911001 containerd[2020]: 2025-08-13 00:19:56.851 [INFO][4987] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5deeb3c7584 ContainerID="b3720483090f6aae8f24555784e616c397f2d402558a49b884736d6da792ed17" Namespace="calico-apiserver" Pod="calico-apiserver-bd6797c4b-4bgrn" WorkloadEndpoint="ip--172--31--31--36-k8s-calico--apiserver--bd6797c4b--4bgrn-eth0" Aug 13 00:19:56.911001 containerd[2020]: 2025-08-13 00:19:56.866 [INFO][4987] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b3720483090f6aae8f24555784e616c397f2d402558a49b884736d6da792ed17" Namespace="calico-apiserver" Pod="calico-apiserver-bd6797c4b-4bgrn" WorkloadEndpoint="ip--172--31--31--36-k8s-calico--apiserver--bd6797c4b--4bgrn-eth0" Aug 13 00:19:56.911001 containerd[2020]: 2025-08-13 00:19:56.869 [INFO][4987] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="b3720483090f6aae8f24555784e616c397f2d402558a49b884736d6da792ed17" Namespace="calico-apiserver" Pod="calico-apiserver-bd6797c4b-4bgrn" WorkloadEndpoint="ip--172--31--31--36-k8s-calico--apiserver--bd6797c4b--4bgrn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--36-k8s-calico--apiserver--bd6797c4b--4bgrn-eth0", GenerateName:"calico-apiserver-bd6797c4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"28081b87-c735-43fc-9236-d52ebf6d339c", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 19, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bd6797c4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-36", ContainerID:"b3720483090f6aae8f24555784e616c397f2d402558a49b884736d6da792ed17", Pod:"calico-apiserver-bd6797c4b-4bgrn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5deeb3c7584", MAC:"aa:00:ee:23:50:f5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:19:56.911001 containerd[2020]: 2025-08-13 00:19:56.896 [INFO][4987] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b3720483090f6aae8f24555784e616c397f2d402558a49b884736d6da792ed17" Namespace="calico-apiserver" Pod="calico-apiserver-bd6797c4b-4bgrn" WorkloadEndpoint="ip--172--31--31--36-k8s-calico--apiserver--bd6797c4b--4bgrn-eth0" Aug 13 00:19:56.960066 containerd[2020]: time="2025-08-13T00:19:56.959851229Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:19:56.961578 containerd[2020]: time="2025-08-13T00:19:56.961299785Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:19:56.961578 containerd[2020]: time="2025-08-13T00:19:56.961441253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:19:56.963085 containerd[2020]: time="2025-08-13T00:19:56.962147081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:19:57.011106 systemd[1]: Started cri-containerd-b3720483090f6aae8f24555784e616c397f2d402558a49b884736d6da792ed17.scope - libcontainer container b3720483090f6aae8f24555784e616c397f2d402558a49b884736d6da792ed17. 
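Both veth MACs minted in these two ADDs (e2:05:9e:85:17:01 for the whisker endpoint earlier, aa:00:ee:23:50:f5 here) are locally-administered unicast addresses: bit 0x02 of the first octet is set and the multicast bit 0x01 is clear. A one-off check:

```go
// macbits.go — inspect the flag bits on the veth MACs assigned in this log:
// locally-administered (0x02) set, multicast (0x01) clear.
package main

import (
	"fmt"
	"net"
)

func main() {
	for _, s := range []string{"e2:05:9e:85:17:01", "aa:00:ee:23:50:f5"} {
		hw, err := net.ParseMAC(s)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s local=%v multicast=%v\n", s, hw[0]&0x02 != 0, hw[0]&0x01 != 0)
	}
}
```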
Aug 13 00:19:57.018209 systemd-networkd[1936]: calid2a0434ef12: Link UP Aug 13 00:19:57.021727 systemd-networkd[1936]: calid2a0434ef12: Gained carrier Aug 13 00:19:57.063198 containerd[2020]: 2025-08-13 00:19:56.728 [INFO][4998] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--36-k8s-coredns--668d6bf9bc--wfn57-eth0 coredns-668d6bf9bc- kube-system 5a64667d-5251-47e9-8797-9a7e4c011870 969 0 2025-08-13 00:19:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-31-36 coredns-668d6bf9bc-wfn57 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid2a0434ef12 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0cbf13f72694e608b62bafabbe57c973ed24b44bec3ec0d38a877c256a80bcba" Namespace="kube-system" Pod="coredns-668d6bf9bc-wfn57" WorkloadEndpoint="ip--172--31--31--36-k8s-coredns--668d6bf9bc--wfn57-" Aug 13 00:19:57.063198 containerd[2020]: 2025-08-13 00:19:56.728 [INFO][4998] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0cbf13f72694e608b62bafabbe57c973ed24b44bec3ec0d38a877c256a80bcba" Namespace="kube-system" Pod="coredns-668d6bf9bc-wfn57" WorkloadEndpoint="ip--172--31--31--36-k8s-coredns--668d6bf9bc--wfn57-eth0" Aug 13 00:19:57.063198 containerd[2020]: 2025-08-13 00:19:56.819 [INFO][5017] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0cbf13f72694e608b62bafabbe57c973ed24b44bec3ec0d38a877c256a80bcba" HandleID="k8s-pod-network.0cbf13f72694e608b62bafabbe57c973ed24b44bec3ec0d38a877c256a80bcba" Workload="ip--172--31--31--36-k8s-coredns--668d6bf9bc--wfn57-eth0" Aug 13 00:19:57.063198 containerd[2020]: 2025-08-13 00:19:56.820 [INFO][5017] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0cbf13f72694e608b62bafabbe57c973ed24b44bec3ec0d38a877c256a80bcba" HandleID="k8s-pod-network.0cbf13f72694e608b62bafabbe57c973ed24b44bec3ec0d38a877c256a80bcba" Workload="ip--172--31--31--36-k8s-coredns--668d6bf9bc--wfn57-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024bd30), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-31-36", "pod":"coredns-668d6bf9bc-wfn57", "timestamp":"2025-08-13 00:19:56.81992524 +0000 UTC"}, Hostname:"ip-172-31-31-36", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:19:57.063198 containerd[2020]: 2025-08-13 00:19:56.820 [INFO][5017] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:19:57.063198 containerd[2020]: 2025-08-13 00:19:56.842 [INFO][5017] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:19:57.063198 containerd[2020]: 2025-08-13 00:19:56.842 [INFO][5017] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-36' Aug 13 00:19:57.063198 containerd[2020]: 2025-08-13 00:19:56.894 [INFO][5017] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0cbf13f72694e608b62bafabbe57c973ed24b44bec3ec0d38a877c256a80bcba" host="ip-172-31-31-36" Aug 13 00:19:57.063198 containerd[2020]: 2025-08-13 00:19:56.914 [INFO][5017] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-36" Aug 13 00:19:57.063198 containerd[2020]: 2025-08-13 00:19:56.928 [INFO][5017] ipam/ipam.go 511: Trying affinity for 192.168.99.64/26 host="ip-172-31-31-36" Aug 13 00:19:57.063198 containerd[2020]: 2025-08-13 00:19:56.933 [INFO][5017] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.64/26 host="ip-172-31-31-36" Aug 13 00:19:57.063198 containerd[2020]: 2025-08-13 00:19:56.942 [INFO][5017] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.64/26 host="ip-172-31-31-36" Aug 13 00:19:57.063198 containerd[2020]: 2025-08-13 00:19:56.944 [INFO][5017] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.99.64/26 handle="k8s-pod-network.0cbf13f72694e608b62bafabbe57c973ed24b44bec3ec0d38a877c256a80bcba" host="ip-172-31-31-36" Aug 13 00:19:57.063198 containerd[2020]: 2025-08-13 00:19:56.951 [INFO][5017] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0cbf13f72694e608b62bafabbe57c973ed24b44bec3ec0d38a877c256a80bcba Aug 13 00:19:57.063198 containerd[2020]: 2025-08-13 00:19:56.972 [INFO][5017] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.99.64/26 handle="k8s-pod-network.0cbf13f72694e608b62bafabbe57c973ed24b44bec3ec0d38a877c256a80bcba" host="ip-172-31-31-36" Aug 13 00:19:57.063198 containerd[2020]: 2025-08-13 00:19:56.999 [INFO][5017] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.99.67/26] block=192.168.99.64/26 handle="k8s-pod-network.0cbf13f72694e608b62bafabbe57c973ed24b44bec3ec0d38a877c256a80bcba" host="ip-172-31-31-36" Aug 13 00:19:57.063198 containerd[2020]: 2025-08-13 00:19:56.999 [INFO][5017] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.67/26] handle="k8s-pod-network.0cbf13f72694e608b62bafabbe57c973ed24b44bec3ec0d38a877c256a80bcba" host="ip-172-31-31-36" Aug 13 00:19:57.063198 containerd[2020]: 2025-08-13 00:19:57.000 [INFO][5017] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:19:57.063198 containerd[2020]: 2025-08-13 00:19:57.000 [INFO][5017] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.67/26] IPv6=[] ContainerID="0cbf13f72694e608b62bafabbe57c973ed24b44bec3ec0d38a877c256a80bcba" HandleID="k8s-pod-network.0cbf13f72694e608b62bafabbe57c973ed24b44bec3ec0d38a877c256a80bcba" Workload="ip--172--31--31--36-k8s-coredns--668d6bf9bc--wfn57-eth0" Aug 13 00:19:57.067423 containerd[2020]: 2025-08-13 00:19:57.006 [INFO][4998] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0cbf13f72694e608b62bafabbe57c973ed24b44bec3ec0d38a877c256a80bcba" Namespace="kube-system" Pod="coredns-668d6bf9bc-wfn57" WorkloadEndpoint="ip--172--31--31--36-k8s-coredns--668d6bf9bc--wfn57-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--36-k8s-coredns--668d6bf9bc--wfn57-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5a64667d-5251-47e9-8797-9a7e4c011870", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 19, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-36", ContainerID:"", Pod:"coredns-668d6bf9bc-wfn57", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid2a0434ef12", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:19:57.067423 containerd[2020]: 2025-08-13 00:19:57.007 [INFO][4998] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.67/32] ContainerID="0cbf13f72694e608b62bafabbe57c973ed24b44bec3ec0d38a877c256a80bcba" Namespace="kube-system" Pod="coredns-668d6bf9bc-wfn57" WorkloadEndpoint="ip--172--31--31--36-k8s-coredns--668d6bf9bc--wfn57-eth0" Aug 13 00:19:57.067423 containerd[2020]: 2025-08-13 00:19:57.007 [INFO][4998] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid2a0434ef12 ContainerID="0cbf13f72694e608b62bafabbe57c973ed24b44bec3ec0d38a877c256a80bcba" Namespace="kube-system" Pod="coredns-668d6bf9bc-wfn57" WorkloadEndpoint="ip--172--31--31--36-k8s-coredns--668d6bf9bc--wfn57-eth0" Aug 13 00:19:57.067423 containerd[2020]: 2025-08-13 00:19:57.021 [INFO][4998] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0cbf13f72694e608b62bafabbe57c973ed24b44bec3ec0d38a877c256a80bcba" Namespace="kube-system" Pod="coredns-668d6bf9bc-wfn57" 
WorkloadEndpoint="ip--172--31--31--36-k8s-coredns--668d6bf9bc--wfn57-eth0" Aug 13 00:19:57.067423 containerd[2020]: 2025-08-13 00:19:57.023 [INFO][4998] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0cbf13f72694e608b62bafabbe57c973ed24b44bec3ec0d38a877c256a80bcba" Namespace="kube-system" Pod="coredns-668d6bf9bc-wfn57" WorkloadEndpoint="ip--172--31--31--36-k8s-coredns--668d6bf9bc--wfn57-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--36-k8s-coredns--668d6bf9bc--wfn57-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5a64667d-5251-47e9-8797-9a7e4c011870", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 19, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-36", ContainerID:"0cbf13f72694e608b62bafabbe57c973ed24b44bec3ec0d38a877c256a80bcba", Pod:"coredns-668d6bf9bc-wfn57", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid2a0434ef12", MAC:"aa:98:82:d3:f0:35", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:19:57.067423 containerd[2020]: 2025-08-13 00:19:57.052 [INFO][4998] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0cbf13f72694e608b62bafabbe57c973ed24b44bec3ec0d38a877c256a80bcba" Namespace="kube-system" Pod="coredns-668d6bf9bc-wfn57" WorkloadEndpoint="ip--172--31--31--36-k8s-coredns--668d6bf9bc--wfn57-eth0" Aug 13 00:19:57.101267 systemd[1]: run-netns-cni\x2dd9fd7a79\x2da3d7\x2d5afe\x2dc1bb\x2d7c8efae76d85.mount: Deactivated successfully. Aug 13 00:19:57.137829 containerd[2020]: time="2025-08-13T00:19:57.137175146Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:19:57.137829 containerd[2020]: time="2025-08-13T00:19:57.137298146Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:19:57.137829 containerd[2020]: time="2025-08-13T00:19:57.137336750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:19:57.137829 containerd[2020]: time="2025-08-13T00:19:57.137557046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:19:57.198006 containerd[2020]: time="2025-08-13T00:19:57.197903174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bd6797c4b-4bgrn,Uid:28081b87-c735-43fc-9236-d52ebf6d339c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b3720483090f6aae8f24555784e616c397f2d402558a49b884736d6da792ed17\"" Aug 13 00:19:57.220877 systemd[1]: Started cri-containerd-0cbf13f72694e608b62bafabbe57c973ed24b44bec3ec0d38a877c256a80bcba.scope - libcontainer container 0cbf13f72694e608b62bafabbe57c973ed24b44bec3ec0d38a877c256a80bcba. Aug 13 00:19:57.291716 containerd[2020]: time="2025-08-13T00:19:57.291528279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wfn57,Uid:5a64667d-5251-47e9-8797-9a7e4c011870,Namespace:kube-system,Attempt:1,} returns sandbox id \"0cbf13f72694e608b62bafabbe57c973ed24b44bec3ec0d38a877c256a80bcba\"" Aug 13 00:19:57.298341 containerd[2020]: time="2025-08-13T00:19:57.297609435Z" level=info msg="CreateContainer within sandbox \"0cbf13f72694e608b62bafabbe57c973ed24b44bec3ec0d38a877c256a80bcba\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:19:57.315314 containerd[2020]: time="2025-08-13T00:19:57.314595435Z" level=info msg="StopPodSandbox for \"bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c\"" Aug 13 00:19:57.320747 containerd[2020]: time="2025-08-13T00:19:57.320553147Z" level=info msg="StopPodSandbox for \"b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a\"" Aug 13 00:19:57.355918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount175879855.mount: Deactivated successfully. Aug 13 00:19:57.356126 containerd[2020]: time="2025-08-13T00:19:57.355828479Z" level=info msg="CreateContainer within sandbox \"0cbf13f72694e608b62bafabbe57c973ed24b44bec3ec0d38a877c256a80bcba\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d4e3f4c0ef0d18ef881e075521d9c8dbff713e6c54516e7c6ced27948668267c\"" Aug 13 00:19:57.362017 containerd[2020]: time="2025-08-13T00:19:57.361947159Z" level=info msg="StartContainer for \"d4e3f4c0ef0d18ef881e075521d9c8dbff713e6c54516e7c6ced27948668267c\"" Aug 13 00:19:57.521543 systemd[1]: Started cri-containerd-d4e3f4c0ef0d18ef881e075521d9c8dbff713e6c54516e7c6ced27948668267c.scope - libcontainer container d4e3f4c0ef0d18ef881e075521d9c8dbff713e6c54516e7c6ced27948668267c. Aug 13 00:19:57.781055 containerd[2020]: 2025-08-13 00:19:57.535 [INFO][5148] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c" Aug 13 00:19:57.781055 containerd[2020]: 2025-08-13 00:19:57.535 [INFO][5148] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c" iface="eth0" netns="/var/run/netns/cni-43521de5-2c96-e292-0408-ca891db3c58a" Aug 13 00:19:57.781055 containerd[2020]: 2025-08-13 00:19:57.539 [INFO][5148] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c" iface="eth0" netns="/var/run/netns/cni-43521de5-2c96-e292-0408-ca891db3c58a" Aug 13 00:19:57.781055 containerd[2020]: 2025-08-13 00:19:57.541 [INFO][5148] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c" iface="eth0" netns="/var/run/netns/cni-43521de5-2c96-e292-0408-ca891db3c58a" Aug 13 00:19:57.781055 containerd[2020]: 2025-08-13 00:19:57.542 [INFO][5148] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c" Aug 13 00:19:57.781055 containerd[2020]: 2025-08-13 00:19:57.542 [INFO][5148] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c" Aug 13 00:19:57.781055 containerd[2020]: 2025-08-13 00:19:57.647 [INFO][5181] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c" HandleID="k8s-pod-network.bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c" Workload="ip--172--31--31--36-k8s-calico--kube--controllers--7689d867cd--nh2hh-eth0" Aug 13 00:19:57.781055 containerd[2020]: 2025-08-13 00:19:57.649 [INFO][5181] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:19:57.781055 containerd[2020]: 2025-08-13 00:19:57.650 [INFO][5181] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:19:57.781055 containerd[2020]: 2025-08-13 00:19:57.716 [WARNING][5181] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c" HandleID="k8s-pod-network.bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c" Workload="ip--172--31--31--36-k8s-calico--kube--controllers--7689d867cd--nh2hh-eth0" Aug 13 00:19:57.781055 containerd[2020]: 2025-08-13 00:19:57.716 [INFO][5181] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c" HandleID="k8s-pod-network.bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c" Workload="ip--172--31--31--36-k8s-calico--kube--controllers--7689d867cd--nh2hh-eth0" Aug 13 00:19:57.781055 containerd[2020]: 2025-08-13 00:19:57.738 [INFO][5181] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:19:57.781055 containerd[2020]: 2025-08-13 00:19:57.765 [INFO][5148] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c" Aug 13 00:19:57.789480 containerd[2020]: time="2025-08-13T00:19:57.789292397Z" level=info msg="TearDown network for sandbox \"bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c\" successfully" Aug 13 00:19:57.789837 containerd[2020]: time="2025-08-13T00:19:57.789709565Z" level=info msg="StopPodSandbox for \"bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c\" returns successfully" Aug 13 00:19:57.800497 containerd[2020]: time="2025-08-13T00:19:57.799316345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7689d867cd-nh2hh,Uid:5d408832-1099-4d99-a077-a600c984323a,Namespace:calico-system,Attempt:1,}" Aug 13 00:19:57.802569 containerd[2020]: time="2025-08-13T00:19:57.802486637Z" level=info msg="StartContainer for \"d4e3f4c0ef0d18ef881e075521d9c8dbff713e6c54516e7c6ced27948668267c\" returns successfully" Aug 13 00:19:57.980721 containerd[2020]: 2025-08-13 00:19:57.640 [INFO][5140] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a" Aug 13 00:19:57.980721 containerd[2020]: 2025-08-13 00:19:57.640 [INFO][5140] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a" iface="eth0" netns="/var/run/netns/cni-b826a09b-77dc-368a-6ce9-79f7c4134adf" Aug 13 00:19:57.980721 containerd[2020]: 2025-08-13 00:19:57.643 [INFO][5140] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a" iface="eth0" netns="/var/run/netns/cni-b826a09b-77dc-368a-6ce9-79f7c4134adf" Aug 13 00:19:57.980721 containerd[2020]: 2025-08-13 00:19:57.645 [INFO][5140] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a" iface="eth0" netns="/var/run/netns/cni-b826a09b-77dc-368a-6ce9-79f7c4134adf" Aug 13 00:19:57.980721 containerd[2020]: 2025-08-13 00:19:57.645 [INFO][5140] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a" Aug 13 00:19:57.980721 containerd[2020]: 2025-08-13 00:19:57.646 [INFO][5140] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a" Aug 13 00:19:57.980721 containerd[2020]: 2025-08-13 00:19:57.870 [INFO][5189] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a" HandleID="k8s-pod-network.b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a" Workload="ip--172--31--31--36-k8s-csi--node--driver--vh7v8-eth0" Aug 13 00:19:57.980721 containerd[2020]: 2025-08-13 00:19:57.875 [INFO][5189] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:19:57.980721 containerd[2020]: 2025-08-13 00:19:57.875 [INFO][5189] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:19:57.980721 containerd[2020]: 2025-08-13 00:19:57.941 [WARNING][5189] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a" HandleID="k8s-pod-network.b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a" Workload="ip--172--31--31--36-k8s-csi--node--driver--vh7v8-eth0" Aug 13 00:19:57.980721 containerd[2020]: 2025-08-13 00:19:57.941 [INFO][5189] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a" HandleID="k8s-pod-network.b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a" Workload="ip--172--31--31--36-k8s-csi--node--driver--vh7v8-eth0" Aug 13 00:19:57.980721 containerd[2020]: 2025-08-13 00:19:57.954 [INFO][5189] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:19:57.980721 containerd[2020]: 2025-08-13 00:19:57.969 [INFO][5140] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a" Aug 13 00:19:57.983875 containerd[2020]: time="2025-08-13T00:19:57.981651342Z" level=info msg="TearDown network for sandbox \"b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a\" successfully" Aug 13 00:19:57.983875 containerd[2020]: time="2025-08-13T00:19:57.982606374Z" level=info msg="StopPodSandbox for \"b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a\" returns successfully" Aug 13 00:19:57.984980 containerd[2020]: time="2025-08-13T00:19:57.984930402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vh7v8,Uid:7cfa030b-4e22-4159-b794-1031c8aae80f,Namespace:calico-system,Attempt:1,}" Aug 13 00:19:58.079754 systemd-networkd[1936]: cali5deeb3c7584: Gained IPv6LL Aug 13 00:19:58.126756 systemd[1]: run-netns-cni\x2db826a09b\x2d77dc\x2d368a\x2d6ce9\x2d79f7c4134adf.mount: Deactivated successfully. Aug 13 00:19:58.127075 systemd[1]: run-netns-cni\x2d43521de5\x2d2c96\x2de292\x2d0408\x2dca891db3c58a.mount: Deactivated successfully. 
Aug 13 00:19:58.335227 containerd[2020]: time="2025-08-13T00:19:58.334603528Z" level=info msg="StopPodSandbox for \"06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8\"" Aug 13 00:19:58.338266 containerd[2020]: time="2025-08-13T00:19:58.338080348Z" level=info msg="StopPodSandbox for \"085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc\"" Aug 13 00:19:58.400339 systemd-networkd[1936]: calid2a0434ef12: Gained IPv6LL Aug 13 00:19:58.459745 systemd-networkd[1936]: cali8beee0ad0d0: Link UP Aug 13 00:19:58.465111 systemd-networkd[1936]: cali8beee0ad0d0: Gained carrier Aug 13 00:19:58.573106 containerd[2020]: 2025-08-13 00:19:58.042 [INFO][5203] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--36-k8s-calico--kube--controllers--7689d867cd--nh2hh-eth0 calico-kube-controllers-7689d867cd- calico-system 5d408832-1099-4d99-a077-a600c984323a 982 0 2025-08-13 00:19:32 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7689d867cd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-31-36 calico-kube-controllers-7689d867cd-nh2hh eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8beee0ad0d0 [] [] }} ContainerID="30bbab45f41add62867ae1f4588d64e8da01048eabc77d51a492890fde348b4d" Namespace="calico-system" Pod="calico-kube-controllers-7689d867cd-nh2hh" WorkloadEndpoint="ip--172--31--31--36-k8s-calico--kube--controllers--7689d867cd--nh2hh-" Aug 13 00:19:58.573106 containerd[2020]: 2025-08-13 00:19:58.042 [INFO][5203] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="30bbab45f41add62867ae1f4588d64e8da01048eabc77d51a492890fde348b4d" Namespace="calico-system" Pod="calico-kube-controllers-7689d867cd-nh2hh" WorkloadEndpoint="ip--172--31--31--36-k8s-calico--kube--controllers--7689d867cd--nh2hh-eth0" Aug 13 00:19:58.573106 containerd[2020]: 2025-08-13 00:19:58.235 [INFO][5227] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="30bbab45f41add62867ae1f4588d64e8da01048eabc77d51a492890fde348b4d" HandleID="k8s-pod-network.30bbab45f41add62867ae1f4588d64e8da01048eabc77d51a492890fde348b4d" Workload="ip--172--31--31--36-k8s-calico--kube--controllers--7689d867cd--nh2hh-eth0" Aug 13 00:19:58.573106 containerd[2020]: 2025-08-13 00:19:58.236 [INFO][5227] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="30bbab45f41add62867ae1f4588d64e8da01048eabc77d51a492890fde348b4d" HandleID="k8s-pod-network.30bbab45f41add62867ae1f4588d64e8da01048eabc77d51a492890fde348b4d" Workload="ip--172--31--31--36-k8s-calico--kube--controllers--7689d867cd--nh2hh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dbd90), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-31-36", "pod":"calico-kube-controllers-7689d867cd-nh2hh", "timestamp":"2025-08-13 00:19:58.235737135 +0000 UTC"}, Hostname:"ip-172-31-31-36", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:19:58.573106 containerd[2020]: 2025-08-13 00:19:58.236 [INFO][5227] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Aug 13 00:19:58.573106 containerd[2020]: 2025-08-13 00:19:58.236 [INFO][5227] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:19:58.573106 containerd[2020]: 2025-08-13 00:19:58.236 [INFO][5227] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-36' Aug 13 00:19:58.573106 containerd[2020]: 2025-08-13 00:19:58.267 [INFO][5227] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.30bbab45f41add62867ae1f4588d64e8da01048eabc77d51a492890fde348b4d" host="ip-172-31-31-36" Aug 13 00:19:58.573106 containerd[2020]: 2025-08-13 00:19:58.288 [INFO][5227] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-36" Aug 13 00:19:58.573106 containerd[2020]: 2025-08-13 00:19:58.306 [INFO][5227] ipam/ipam.go 511: Trying affinity for 192.168.99.64/26 host="ip-172-31-31-36" Aug 13 00:19:58.573106 containerd[2020]: 2025-08-13 00:19:58.312 [INFO][5227] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.64/26 host="ip-172-31-31-36" Aug 13 00:19:58.573106 containerd[2020]: 2025-08-13 00:19:58.321 [INFO][5227] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.64/26 host="ip-172-31-31-36" Aug 13 00:19:58.573106 containerd[2020]: 2025-08-13 00:19:58.322 [INFO][5227] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.99.64/26 handle="k8s-pod-network.30bbab45f41add62867ae1f4588d64e8da01048eabc77d51a492890fde348b4d" host="ip-172-31-31-36" Aug 13 00:19:58.573106 containerd[2020]: 2025-08-13 00:19:58.327 [INFO][5227] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.30bbab45f41add62867ae1f4588d64e8da01048eabc77d51a492890fde348b4d Aug 13 00:19:58.573106 containerd[2020]: 2025-08-13 00:19:58.351 [INFO][5227] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.99.64/26 handle="k8s-pod-network.30bbab45f41add62867ae1f4588d64e8da01048eabc77d51a492890fde348b4d" host="ip-172-31-31-36" Aug 13 00:19:58.573106 containerd[2020]: 2025-08-13 00:19:58.415 [INFO][5227] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.99.68/26] block=192.168.99.64/26 handle="k8s-pod-network.30bbab45f41add62867ae1f4588d64e8da01048eabc77d51a492890fde348b4d" host="ip-172-31-31-36" Aug 13 00:19:58.573106 containerd[2020]: 2025-08-13 00:19:58.415 [INFO][5227] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.68/26] handle="k8s-pod-network.30bbab45f41add62867ae1f4588d64e8da01048eabc77d51a492890fde348b4d" host="ip-172-31-31-36" Aug 13 00:19:58.573106 containerd[2020]: 2025-08-13 00:19:58.416 [INFO][5227] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:19:58.573106 containerd[2020]: 2025-08-13 00:19:58.417 [INFO][5227] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.68/26] IPv6=[] ContainerID="30bbab45f41add62867ae1f4588d64e8da01048eabc77d51a492890fde348b4d" HandleID="k8s-pod-network.30bbab45f41add62867ae1f4588d64e8da01048eabc77d51a492890fde348b4d" Workload="ip--172--31--31--36-k8s-calico--kube--controllers--7689d867cd--nh2hh-eth0" Aug 13 00:19:58.578336 containerd[2020]: 2025-08-13 00:19:58.432 [INFO][5203] cni-plugin/k8s.go 418: Populated endpoint ContainerID="30bbab45f41add62867ae1f4588d64e8da01048eabc77d51a492890fde348b4d" Namespace="calico-system" Pod="calico-kube-controllers-7689d867cd-nh2hh" WorkloadEndpoint="ip--172--31--31--36-k8s-calico--kube--controllers--7689d867cd--nh2hh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--36-k8s-calico--kube--controllers--7689d867cd--nh2hh-eth0", GenerateName:"calico-kube-controllers-7689d867cd-", Namespace:"calico-system", SelfLink:"", UID:"5d408832-1099-4d99-a077-a600c984323a", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 19, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7689d867cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-36", ContainerID:"", Pod:"calico-kube-controllers-7689d867cd-nh2hh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.99.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8beee0ad0d0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:19:58.578336 containerd[2020]: 2025-08-13 00:19:58.432 [INFO][5203] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.68/32] ContainerID="30bbab45f41add62867ae1f4588d64e8da01048eabc77d51a492890fde348b4d" Namespace="calico-system" Pod="calico-kube-controllers-7689d867cd-nh2hh" WorkloadEndpoint="ip--172--31--31--36-k8s-calico--kube--controllers--7689d867cd--nh2hh-eth0" Aug 13 00:19:58.578336 containerd[2020]: 2025-08-13 00:19:58.432 [INFO][5203] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8beee0ad0d0 ContainerID="30bbab45f41add62867ae1f4588d64e8da01048eabc77d51a492890fde348b4d" Namespace="calico-system" Pod="calico-kube-controllers-7689d867cd-nh2hh" WorkloadEndpoint="ip--172--31--31--36-k8s-calico--kube--controllers--7689d867cd--nh2hh-eth0" Aug 13 00:19:58.578336 containerd[2020]: 2025-08-13 00:19:58.472 [INFO][5203] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="30bbab45f41add62867ae1f4588d64e8da01048eabc77d51a492890fde348b4d" Namespace="calico-system" Pod="calico-kube-controllers-7689d867cd-nh2hh" WorkloadEndpoint="ip--172--31--31--36-k8s-calico--kube--controllers--7689d867cd--nh2hh-eth0" Aug 13 00:19:58.578336 containerd[2020]: 2025-08-13 
00:19:58.496 [INFO][5203] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="30bbab45f41add62867ae1f4588d64e8da01048eabc77d51a492890fde348b4d" Namespace="calico-system" Pod="calico-kube-controllers-7689d867cd-nh2hh" WorkloadEndpoint="ip--172--31--31--36-k8s-calico--kube--controllers--7689d867cd--nh2hh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--36-k8s-calico--kube--controllers--7689d867cd--nh2hh-eth0", GenerateName:"calico-kube-controllers-7689d867cd-", Namespace:"calico-system", SelfLink:"", UID:"5d408832-1099-4d99-a077-a600c984323a", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 19, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7689d867cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-36", ContainerID:"30bbab45f41add62867ae1f4588d64e8da01048eabc77d51a492890fde348b4d", Pod:"calico-kube-controllers-7689d867cd-nh2hh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.99.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8beee0ad0d0", MAC:"66:df:4a:d2:7a:0c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:19:58.578336 containerd[2020]: 2025-08-13 00:19:58.555 [INFO][5203] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="30bbab45f41add62867ae1f4588d64e8da01048eabc77d51a492890fde348b4d" Namespace="calico-system" Pod="calico-kube-controllers-7689d867cd-nh2hh" WorkloadEndpoint="ip--172--31--31--36-k8s-calico--kube--controllers--7689d867cd--nh2hh-eth0" Aug 13 00:19:58.697340 containerd[2020]: time="2025-08-13T00:19:58.693741726Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:19:58.697340 containerd[2020]: time="2025-08-13T00:19:58.695903562Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:19:58.697340 containerd[2020]: time="2025-08-13T00:19:58.695936646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:19:58.700546 containerd[2020]: time="2025-08-13T00:19:58.697763094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:19:58.732898 systemd-networkd[1936]: cali4521660e031: Link UP Aug 13 00:19:58.740933 systemd-networkd[1936]: cali4521660e031: Gained carrier Aug 13 00:19:58.867708 containerd[2020]: 2025-08-13 00:19:58.221 [INFO][5217] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--36-k8s-csi--node--driver--vh7v8-eth0 csi-node-driver- calico-system 7cfa030b-4e22-4159-b794-1031c8aae80f 984 0 2025-08-13 00:19:32 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-31-36 csi-node-driver-vh7v8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4521660e031 [] [] }} ContainerID="085ce7edac71bf65772c75e5eb7ee8b1f4056346a89c8927f7f74811a4d42319" Namespace="calico-system" Pod="csi-node-driver-vh7v8" WorkloadEndpoint="ip--172--31--31--36-k8s-csi--node--driver--vh7v8-" Aug 13 00:19:58.867708 containerd[2020]: 2025-08-13 00:19:58.221 [INFO][5217] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="085ce7edac71bf65772c75e5eb7ee8b1f4056346a89c8927f7f74811a4d42319" Namespace="calico-system" Pod="csi-node-driver-vh7v8" WorkloadEndpoint="ip--172--31--31--36-k8s-csi--node--driver--vh7v8-eth0" Aug 13 00:19:58.867708 containerd[2020]: 2025-08-13 00:19:58.389 [INFO][5236] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="085ce7edac71bf65772c75e5eb7ee8b1f4056346a89c8927f7f74811a4d42319" HandleID="k8s-pod-network.085ce7edac71bf65772c75e5eb7ee8b1f4056346a89c8927f7f74811a4d42319" Workload="ip--172--31--31--36-k8s-csi--node--driver--vh7v8-eth0" Aug 13 00:19:58.867708 containerd[2020]: 2025-08-13 00:19:58.395 [INFO][5236] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="085ce7edac71bf65772c75e5eb7ee8b1f4056346a89c8927f7f74811a4d42319" HandleID="k8s-pod-network.085ce7edac71bf65772c75e5eb7ee8b1f4056346a89c8927f7f74811a4d42319" Workload="ip--172--31--31--36-k8s-csi--node--driver--vh7v8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c1970), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-31-36", "pod":"csi-node-driver-vh7v8", "timestamp":"2025-08-13 00:19:58.388921168 +0000 UTC"}, Hostname:"ip-172-31-31-36", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:19:58.867708 containerd[2020]: 2025-08-13 00:19:58.395 [INFO][5236] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:19:58.867708 containerd[2020]: 2025-08-13 00:19:58.416 [INFO][5236] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:19:58.867708 containerd[2020]: 2025-08-13 00:19:58.418 [INFO][5236] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-36' Aug 13 00:19:58.867708 containerd[2020]: 2025-08-13 00:19:58.472 [INFO][5236] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.085ce7edac71bf65772c75e5eb7ee8b1f4056346a89c8927f7f74811a4d42319" host="ip-172-31-31-36" Aug 13 00:19:58.867708 containerd[2020]: 2025-08-13 00:19:58.507 [INFO][5236] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-36" Aug 13 00:19:58.867708 containerd[2020]: 2025-08-13 00:19:58.528 [INFO][5236] ipam/ipam.go 511: Trying affinity for 192.168.99.64/26 host="ip-172-31-31-36" Aug 13 00:19:58.867708 containerd[2020]: 2025-08-13 00:19:58.546 [INFO][5236] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.64/26 host="ip-172-31-31-36" Aug 13 00:19:58.867708 containerd[2020]: 2025-08-13 00:19:58.587 [INFO][5236] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.64/26 host="ip-172-31-31-36" Aug 13 00:19:58.867708 containerd[2020]: 2025-08-13 00:19:58.587 [INFO][5236] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.99.64/26 handle="k8s-pod-network.085ce7edac71bf65772c75e5eb7ee8b1f4056346a89c8927f7f74811a4d42319" host="ip-172-31-31-36" Aug 13 00:19:58.867708 containerd[2020]: 2025-08-13 00:19:58.614 [INFO][5236] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.085ce7edac71bf65772c75e5eb7ee8b1f4056346a89c8927f7f74811a4d42319 Aug 13 00:19:58.867708 containerd[2020]: 2025-08-13 00:19:58.655 [INFO][5236] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.99.64/26 handle="k8s-pod-network.085ce7edac71bf65772c75e5eb7ee8b1f4056346a89c8927f7f74811a4d42319" host="ip-172-31-31-36" Aug 13 00:19:58.867708 containerd[2020]: 2025-08-13 00:19:58.686 [INFO][5236] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.99.69/26] block=192.168.99.64/26 handle="k8s-pod-network.085ce7edac71bf65772c75e5eb7ee8b1f4056346a89c8927f7f74811a4d42319" host="ip-172-31-31-36" Aug 13 00:19:58.867708 containerd[2020]: 2025-08-13 00:19:58.687 [INFO][5236] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.69/26] handle="k8s-pod-network.085ce7edac71bf65772c75e5eb7ee8b1f4056346a89c8927f7f74811a4d42319" host="ip-172-31-31-36" Aug 13 00:19:58.867708 containerd[2020]: 2025-08-13 00:19:58.689 [INFO][5236] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:19:58.867708 containerd[2020]: 2025-08-13 00:19:58.689 [INFO][5236] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.69/26] IPv6=[] ContainerID="085ce7edac71bf65772c75e5eb7ee8b1f4056346a89c8927f7f74811a4d42319" HandleID="k8s-pod-network.085ce7edac71bf65772c75e5eb7ee8b1f4056346a89c8927f7f74811a4d42319" Workload="ip--172--31--31--36-k8s-csi--node--driver--vh7v8-eth0" Aug 13 00:19:58.869309 containerd[2020]: 2025-08-13 00:19:58.701 [INFO][5217] cni-plugin/k8s.go 418: Populated endpoint ContainerID="085ce7edac71bf65772c75e5eb7ee8b1f4056346a89c8927f7f74811a4d42319" Namespace="calico-system" Pod="csi-node-driver-vh7v8" WorkloadEndpoint="ip--172--31--31--36-k8s-csi--node--driver--vh7v8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--36-k8s-csi--node--driver--vh7v8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7cfa030b-4e22-4159-b794-1031c8aae80f", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 19, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-36", ContainerID:"", Pod:"csi-node-driver-vh7v8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4521660e031", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:19:58.869309 containerd[2020]: 2025-08-13 00:19:58.703 [INFO][5217] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.69/32] ContainerID="085ce7edac71bf65772c75e5eb7ee8b1f4056346a89c8927f7f74811a4d42319" Namespace="calico-system" Pod="csi-node-driver-vh7v8" WorkloadEndpoint="ip--172--31--31--36-k8s-csi--node--driver--vh7v8-eth0" Aug 13 00:19:58.869309 containerd[2020]: 2025-08-13 00:19:58.710 [INFO][5217] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4521660e031 ContainerID="085ce7edac71bf65772c75e5eb7ee8b1f4056346a89c8927f7f74811a4d42319" Namespace="calico-system" Pod="csi-node-driver-vh7v8" WorkloadEndpoint="ip--172--31--31--36-k8s-csi--node--driver--vh7v8-eth0" Aug 13 00:19:58.869309 containerd[2020]: 2025-08-13 00:19:58.752 [INFO][5217] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="085ce7edac71bf65772c75e5eb7ee8b1f4056346a89c8927f7f74811a4d42319" Namespace="calico-system" Pod="csi-node-driver-vh7v8" WorkloadEndpoint="ip--172--31--31--36-k8s-csi--node--driver--vh7v8-eth0" Aug 13 00:19:58.869309 containerd[2020]: 2025-08-13 00:19:58.757 [INFO][5217] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="085ce7edac71bf65772c75e5eb7ee8b1f4056346a89c8927f7f74811a4d42319" 
Namespace="calico-system" Pod="csi-node-driver-vh7v8" WorkloadEndpoint="ip--172--31--31--36-k8s-csi--node--driver--vh7v8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--36-k8s-csi--node--driver--vh7v8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7cfa030b-4e22-4159-b794-1031c8aae80f", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 19, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-36", ContainerID:"085ce7edac71bf65772c75e5eb7ee8b1f4056346a89c8927f7f74811a4d42319", Pod:"csi-node-driver-vh7v8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4521660e031", MAC:"be:b3:29:c3:0a:bd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:19:58.869309 containerd[2020]: 2025-08-13 00:19:58.831 [INFO][5217] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="085ce7edac71bf65772c75e5eb7ee8b1f4056346a89c8927f7f74811a4d42319" Namespace="calico-system" Pod="csi-node-driver-vh7v8" WorkloadEndpoint="ip--172--31--31--36-k8s-csi--node--driver--vh7v8-eth0" Aug 13 00:19:58.883927 systemd[1]: Started cri-containerd-30bbab45f41add62867ae1f4588d64e8da01048eabc77d51a492890fde348b4d.scope - libcontainer container 30bbab45f41add62867ae1f4588d64e8da01048eabc77d51a492890fde348b4d. Aug 13 00:19:58.973546 kubelet[3342]: I0813 00:19:58.970307 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wfn57" podStartSLOduration=51.970275439 podStartE2EDuration="51.970275439s" podCreationTimestamp="2025-08-13 00:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:19:58.906827383 +0000 UTC m=+57.889340389" watchObservedRunningTime="2025-08-13 00:19:58.970275439 +0000 UTC m=+57.952788433" Aug 13 00:19:58.984339 containerd[2020]: time="2025-08-13T00:19:58.982638295Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:19:58.984339 containerd[2020]: time="2025-08-13T00:19:58.982747639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:19:58.984339 containerd[2020]: time="2025-08-13T00:19:58.982783447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:19:58.984339 containerd[2020]: time="2025-08-13T00:19:58.982986679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:19:59.065147 containerd[2020]: 2025-08-13 00:19:58.681 [INFO][5256] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8" Aug 13 00:19:59.065147 containerd[2020]: 2025-08-13 00:19:58.681 [INFO][5256] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8" iface="eth0" netns="/var/run/netns/cni-db66c0eb-67e7-35c5-681d-c0e0b401c17d" Aug 13 00:19:59.065147 containerd[2020]: 2025-08-13 00:19:58.682 [INFO][5256] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8" iface="eth0" netns="/var/run/netns/cni-db66c0eb-67e7-35c5-681d-c0e0b401c17d" Aug 13 00:19:59.065147 containerd[2020]: 2025-08-13 00:19:58.683 [INFO][5256] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8" iface="eth0" netns="/var/run/netns/cni-db66c0eb-67e7-35c5-681d-c0e0b401c17d" Aug 13 00:19:59.065147 containerd[2020]: 2025-08-13 00:19:58.683 [INFO][5256] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8" Aug 13 00:19:59.065147 containerd[2020]: 2025-08-13 00:19:58.683 [INFO][5256] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8" Aug 13 00:19:59.065147 containerd[2020]: 2025-08-13 00:19:58.952 [INFO][5303] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8" HandleID="k8s-pod-network.06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8" Workload="ip--172--31--31--36-k8s-coredns--668d6bf9bc--6mfph-eth0" Aug 13 00:19:59.065147 containerd[2020]: 2025-08-13 00:19:58.953 [INFO][5303] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:19:59.065147 containerd[2020]: 2025-08-13 00:19:58.953 [INFO][5303] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:19:59.065147 containerd[2020]: 2025-08-13 00:19:59.025 [WARNING][5303] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8" HandleID="k8s-pod-network.06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8" Workload="ip--172--31--31--36-k8s-coredns--668d6bf9bc--6mfph-eth0" Aug 13 00:19:59.065147 containerd[2020]: 2025-08-13 00:19:59.027 [INFO][5303] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8" HandleID="k8s-pod-network.06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8" Workload="ip--172--31--31--36-k8s-coredns--668d6bf9bc--6mfph-eth0" Aug 13 00:19:59.065147 containerd[2020]: 2025-08-13 00:19:59.032 [INFO][5303] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:19:59.065147 containerd[2020]: 2025-08-13 00:19:59.053 [INFO][5256] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8" Aug 13 00:19:59.069682 containerd[2020]: time="2025-08-13T00:19:59.066060891Z" level=info msg="TearDown network for sandbox \"06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8\" successfully" Aug 13 00:19:59.069682 containerd[2020]: time="2025-08-13T00:19:59.066107703Z" level=info msg="StopPodSandbox for \"06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8\" returns successfully" Aug 13 00:19:59.078902 containerd[2020]: time="2025-08-13T00:19:59.076407399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6mfph,Uid:4a84ee61-fe58-4097-b314-181a986dece7,Namespace:kube-system,Attempt:1,}" Aug 13 00:19:59.092132 systemd[1]: run-netns-cni\x2ddb66c0eb\x2d67e7\x2d35c5\x2d681d\x2dc0e0b401c17d.mount: Deactivated successfully. Aug 13 00:19:59.135367 systemd[1]: Started cri-containerd-085ce7edac71bf65772c75e5eb7ee8b1f4056346a89c8927f7f74811a4d42319.scope - libcontainer container 085ce7edac71bf65772c75e5eb7ee8b1f4056346a89c8927f7f74811a4d42319. Aug 13 00:19:59.243000 containerd[2020]: 2025-08-13 00:19:58.744 [INFO][5264] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc" Aug 13 00:19:59.243000 containerd[2020]: 2025-08-13 00:19:58.745 [INFO][5264] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc" iface="eth0" netns="/var/run/netns/cni-4b7084db-16ed-dd05-702f-e374d2d0b329" Aug 13 00:19:59.243000 containerd[2020]: 2025-08-13 00:19:58.746 [INFO][5264] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc" iface="eth0" netns="/var/run/netns/cni-4b7084db-16ed-dd05-702f-e374d2d0b329" Aug 13 00:19:59.243000 containerd[2020]: 2025-08-13 00:19:58.746 [INFO][5264] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc" iface="eth0" netns="/var/run/netns/cni-4b7084db-16ed-dd05-702f-e374d2d0b329" Aug 13 00:19:59.243000 containerd[2020]: 2025-08-13 00:19:58.746 [INFO][5264] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc" Aug 13 00:19:59.243000 containerd[2020]: 2025-08-13 00:19:58.746 [INFO][5264] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc" Aug 13 00:19:59.243000 containerd[2020]: 2025-08-13 00:19:59.113 [INFO][5317] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc" HandleID="k8s-pod-network.085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc" Workload="ip--172--31--31--36-k8s-goldmane--768f4c5c69--nvbpr-eth0" Aug 13 00:19:59.243000 containerd[2020]: 2025-08-13 00:19:59.115 [INFO][5317] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:19:59.243000 containerd[2020]: 2025-08-13 00:19:59.116 [INFO][5317] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:19:59.243000 containerd[2020]: 2025-08-13 00:19:59.168 [WARNING][5317] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc" HandleID="k8s-pod-network.085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc" Workload="ip--172--31--31--36-k8s-goldmane--768f4c5c69--nvbpr-eth0" Aug 13 00:19:59.243000 containerd[2020]: 2025-08-13 00:19:59.170 [INFO][5317] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc" HandleID="k8s-pod-network.085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc" Workload="ip--172--31--31--36-k8s-goldmane--768f4c5c69--nvbpr-eth0" Aug 13 00:19:59.243000 containerd[2020]: 2025-08-13 00:19:59.179 [INFO][5317] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:19:59.243000 containerd[2020]: 2025-08-13 00:19:59.200 [INFO][5264] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc" Aug 13 00:19:59.256775 systemd[1]: run-netns-cni\x2d4b7084db\x2d16ed\x2ddd05\x2d702f\x2de374d2d0b329.mount: Deactivated successfully. Aug 13 00:19:59.275518 containerd[2020]: time="2025-08-13T00:19:59.275438284Z" level=info msg="TearDown network for sandbox \"085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc\" successfully" Aug 13 00:19:59.278536 containerd[2020]: time="2025-08-13T00:19:59.277794988Z" level=info msg="StopPodSandbox for \"085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc\" returns successfully" Aug 13 00:19:59.285296 containerd[2020]: time="2025-08-13T00:19:59.283341568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-nvbpr,Uid:49aa71c9-5b84-402e-9316-844c45ada5f3,Namespace:calico-system,Attempt:1,}" Aug 13 00:19:59.330782 containerd[2020]: time="2025-08-13T00:19:59.330432509Z" level=info msg="StopPodSandbox for \"4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe\"" Aug 13 00:19:59.588338 containerd[2020]: time="2025-08-13T00:19:59.588145254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7689d867cd-nh2hh,Uid:5d408832-1099-4d99-a077-a600c984323a,Namespace:calico-system,Attempt:1,} returns sandbox id \"30bbab45f41add62867ae1f4588d64e8da01048eabc77d51a492890fde348b4d\"" Aug 13 00:19:59.802079 containerd[2020]: time="2025-08-13T00:19:59.801911959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vh7v8,Uid:7cfa030b-4e22-4159-b794-1031c8aae80f,Namespace:calico-system,Attempt:1,} returns sandbox id \"085ce7edac71bf65772c75e5eb7ee8b1f4056346a89c8927f7f74811a4d42319\"" Aug 13 00:20:00.121859 systemd-networkd[1936]: cali2d5175f09c9: Link UP Aug 13 00:20:00.122324 systemd-networkd[1936]: cali2d5175f09c9: Gained carrier Aug 13 00:20:00.200876 containerd[2020]: 2025-08-13 00:19:59.673 [INFO][5375] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--36-k8s-coredns--668d6bf9bc--6mfph-eth0 coredns-668d6bf9bc- kube-system 4a84ee61-fe58-4097-b314-181a986dece7 996 0 2025-08-13 00:19:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-31-36 coredns-668d6bf9bc-6mfph eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2d5175f09c9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} 
ContainerID="69180c655f01e18d774e8c7624f58b68df82e97dab2c1aee886043ab4debc879" Namespace="kube-system" Pod="coredns-668d6bf9bc-6mfph" WorkloadEndpoint="ip--172--31--31--36-k8s-coredns--668d6bf9bc--6mfph-" Aug 13 00:20:00.200876 containerd[2020]: 2025-08-13 00:19:59.676 [INFO][5375] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="69180c655f01e18d774e8c7624f58b68df82e97dab2c1aee886043ab4debc879" Namespace="kube-system" Pod="coredns-668d6bf9bc-6mfph" WorkloadEndpoint="ip--172--31--31--36-k8s-coredns--668d6bf9bc--6mfph-eth0" Aug 13 00:20:00.200876 containerd[2020]: 2025-08-13 00:19:59.939 [INFO][5433] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="69180c655f01e18d774e8c7624f58b68df82e97dab2c1aee886043ab4debc879" HandleID="k8s-pod-network.69180c655f01e18d774e8c7624f58b68df82e97dab2c1aee886043ab4debc879" Workload="ip--172--31--31--36-k8s-coredns--668d6bf9bc--6mfph-eth0" Aug 13 00:20:00.200876 containerd[2020]: 2025-08-13 00:19:59.940 [INFO][5433] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="69180c655f01e18d774e8c7624f58b68df82e97dab2c1aee886043ab4debc879" HandleID="k8s-pod-network.69180c655f01e18d774e8c7624f58b68df82e97dab2c1aee886043ab4debc879" Workload="ip--172--31--31--36-k8s-coredns--668d6bf9bc--6mfph-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000126d50), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-31-36", "pod":"coredns-668d6bf9bc-6mfph", "timestamp":"2025-08-13 00:19:59.939891764 +0000 UTC"}, Hostname:"ip-172-31-31-36", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:20:00.200876 containerd[2020]: 2025-08-13 00:19:59.940 [INFO][5433] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:00.200876 containerd[2020]: 2025-08-13 00:19:59.940 [INFO][5433] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:20:00.200876 containerd[2020]: 2025-08-13 00:19:59.940 [INFO][5433] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-36' Aug 13 00:20:00.200876 containerd[2020]: 2025-08-13 00:19:59.964 [INFO][5433] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.69180c655f01e18d774e8c7624f58b68df82e97dab2c1aee886043ab4debc879" host="ip-172-31-31-36" Aug 13 00:20:00.200876 containerd[2020]: 2025-08-13 00:19:59.980 [INFO][5433] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-36" Aug 13 00:20:00.200876 containerd[2020]: 2025-08-13 00:20:00.008 [INFO][5433] ipam/ipam.go 511: Trying affinity for 192.168.99.64/26 host="ip-172-31-31-36" Aug 13 00:20:00.200876 containerd[2020]: 2025-08-13 00:20:00.025 [INFO][5433] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.64/26 host="ip-172-31-31-36" Aug 13 00:20:00.200876 containerd[2020]: 2025-08-13 00:20:00.032 [INFO][5433] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.64/26 host="ip-172-31-31-36" Aug 13 00:20:00.200876 containerd[2020]: 2025-08-13 00:20:00.032 [INFO][5433] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.99.64/26 handle="k8s-pod-network.69180c655f01e18d774e8c7624f58b68df82e97dab2c1aee886043ab4debc879" host="ip-172-31-31-36" Aug 13 00:20:00.200876 containerd[2020]: 2025-08-13 00:20:00.037 [INFO][5433] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.69180c655f01e18d774e8c7624f58b68df82e97dab2c1aee886043ab4debc879 Aug 13 00:20:00.200876 containerd[2020]: 2025-08-13 00:20:00.058 [INFO][5433] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.99.64/26 handle="k8s-pod-network.69180c655f01e18d774e8c7624f58b68df82e97dab2c1aee886043ab4debc879" host="ip-172-31-31-36" Aug 13 00:20:00.200876 containerd[2020]: 2025-08-13 00:20:00.086 [INFO][5433] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.99.70/26] block=192.168.99.64/26 handle="k8s-pod-network.69180c655f01e18d774e8c7624f58b68df82e97dab2c1aee886043ab4debc879" host="ip-172-31-31-36" Aug 13 00:20:00.200876 containerd[2020]: 2025-08-13 00:20:00.087 [INFO][5433] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.70/26] handle="k8s-pod-network.69180c655f01e18d774e8c7624f58b68df82e97dab2c1aee886043ab4debc879" host="ip-172-31-31-36" Aug 13 00:20:00.200876 containerd[2020]: 2025-08-13 00:20:00.087 [INFO][5433] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:20:00.200876 containerd[2020]: 2025-08-13 00:20:00.087 [INFO][5433] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.70/26] IPv6=[] ContainerID="69180c655f01e18d774e8c7624f58b68df82e97dab2c1aee886043ab4debc879" HandleID="k8s-pod-network.69180c655f01e18d774e8c7624f58b68df82e97dab2c1aee886043ab4debc879" Workload="ip--172--31--31--36-k8s-coredns--668d6bf9bc--6mfph-eth0" Aug 13 00:20:00.203603 containerd[2020]: 2025-08-13 00:20:00.105 [INFO][5375] cni-plugin/k8s.go 418: Populated endpoint ContainerID="69180c655f01e18d774e8c7624f58b68df82e97dab2c1aee886043ab4debc879" Namespace="kube-system" Pod="coredns-668d6bf9bc-6mfph" WorkloadEndpoint="ip--172--31--31--36-k8s-coredns--668d6bf9bc--6mfph-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--36-k8s-coredns--668d6bf9bc--6mfph-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4a84ee61-fe58-4097-b314-181a986dece7", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 19, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-36", ContainerID:"", Pod:"coredns-668d6bf9bc-6mfph", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2d5175f09c9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:00.203603 containerd[2020]: 2025-08-13 00:20:00.105 [INFO][5375] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.70/32] ContainerID="69180c655f01e18d774e8c7624f58b68df82e97dab2c1aee886043ab4debc879" Namespace="kube-system" Pod="coredns-668d6bf9bc-6mfph" WorkloadEndpoint="ip--172--31--31--36-k8s-coredns--668d6bf9bc--6mfph-eth0" Aug 13 00:20:00.203603 containerd[2020]: 2025-08-13 00:20:00.105 [INFO][5375] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2d5175f09c9 ContainerID="69180c655f01e18d774e8c7624f58b68df82e97dab2c1aee886043ab4debc879" Namespace="kube-system" Pod="coredns-668d6bf9bc-6mfph" WorkloadEndpoint="ip--172--31--31--36-k8s-coredns--668d6bf9bc--6mfph-eth0" Aug 13 00:20:00.203603 containerd[2020]: 2025-08-13 00:20:00.125 [INFO][5375] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="69180c655f01e18d774e8c7624f58b68df82e97dab2c1aee886043ab4debc879" Namespace="kube-system" Pod="coredns-668d6bf9bc-6mfph" 
WorkloadEndpoint="ip--172--31--31--36-k8s-coredns--668d6bf9bc--6mfph-eth0" Aug 13 00:20:00.203603 containerd[2020]: 2025-08-13 00:20:00.132 [INFO][5375] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="69180c655f01e18d774e8c7624f58b68df82e97dab2c1aee886043ab4debc879" Namespace="kube-system" Pod="coredns-668d6bf9bc-6mfph" WorkloadEndpoint="ip--172--31--31--36-k8s-coredns--668d6bf9bc--6mfph-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--36-k8s-coredns--668d6bf9bc--6mfph-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4a84ee61-fe58-4097-b314-181a986dece7", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 19, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-36", ContainerID:"69180c655f01e18d774e8c7624f58b68df82e97dab2c1aee886043ab4debc879", Pod:"coredns-668d6bf9bc-6mfph", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2d5175f09c9", MAC:"9e:6b:80:da:a8:5b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:00.203603 containerd[2020]: 2025-08-13 00:20:00.186 [INFO][5375] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="69180c655f01e18d774e8c7624f58b68df82e97dab2c1aee886043ab4debc879" Namespace="kube-system" Pod="coredns-668d6bf9bc-6mfph" WorkloadEndpoint="ip--172--31--31--36-k8s-coredns--668d6bf9bc--6mfph-eth0" Aug 13 00:20:00.295574 containerd[2020]: time="2025-08-13T00:20:00.295233174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:20:00.295574 containerd[2020]: time="2025-08-13T00:20:00.295354890Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:20:00.295574 containerd[2020]: time="2025-08-13T00:20:00.295404558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:20:00.296481 containerd[2020]: time="2025-08-13T00:20:00.295616382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:20:00.383153 systemd-networkd[1936]: cali8beee0ad0d0: Gained IPv6LL Aug 13 00:20:00.387927 systemd-networkd[1936]: cali19e00db4553: Link UP Aug 13 00:20:00.391921 systemd-networkd[1936]: cali19e00db4553: Gained carrier Aug 13 00:20:00.426732 containerd[2020]: 2025-08-13 00:19:59.868 [INFO][5404] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe" Aug 13 00:20:00.426732 containerd[2020]: 2025-08-13 00:19:59.868 [INFO][5404] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe" iface="eth0" netns="/var/run/netns/cni-bbd23644-36d2-5aba-0813-3042e753eb2d" Aug 13 00:20:00.426732 containerd[2020]: 2025-08-13 00:19:59.869 [INFO][5404] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe" iface="eth0" netns="/var/run/netns/cni-bbd23644-36d2-5aba-0813-3042e753eb2d" Aug 13 00:20:00.426732 containerd[2020]: 2025-08-13 00:19:59.870 [INFO][5404] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe" iface="eth0" netns="/var/run/netns/cni-bbd23644-36d2-5aba-0813-3042e753eb2d" Aug 13 00:20:00.426732 containerd[2020]: 2025-08-13 00:19:59.870 [INFO][5404] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe" Aug 13 00:20:00.426732 containerd[2020]: 2025-08-13 00:19:59.870 [INFO][5404] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe" Aug 13 00:20:00.426732 containerd[2020]: 2025-08-13 00:20:00.082 [INFO][5452] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe" HandleID="k8s-pod-network.4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe" Workload="ip--172--31--31--36-k8s-calico--apiserver--bd6797c4b--r7mhx-eth0" Aug 13 00:20:00.426732 containerd[2020]: 2025-08-13 00:20:00.083 [INFO][5452] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:00.426732 containerd[2020]: 2025-08-13 00:20:00.306 [INFO][5452] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:20:00.426732 containerd[2020]: 2025-08-13 00:20:00.371 [WARNING][5452] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe" HandleID="k8s-pod-network.4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe" Workload="ip--172--31--31--36-k8s-calico--apiserver--bd6797c4b--r7mhx-eth0" Aug 13 00:20:00.426732 containerd[2020]: 2025-08-13 00:20:00.371 [INFO][5452] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe" HandleID="k8s-pod-network.4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe" Workload="ip--172--31--31--36-k8s-calico--apiserver--bd6797c4b--r7mhx-eth0" Aug 13 00:20:00.426732 containerd[2020]: 2025-08-13 00:20:00.386 [INFO][5452] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:20:00.426732 containerd[2020]: 2025-08-13 00:20:00.411 [INFO][5404] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe" Aug 13 00:20:00.429212 containerd[2020]: time="2025-08-13T00:20:00.428599110Z" level=info msg="TearDown network for sandbox \"4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe\" successfully" Aug 13 00:20:00.429212 containerd[2020]: time="2025-08-13T00:20:00.428648574Z" level=info msg="StopPodSandbox for \"4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe\" returns successfully" Aug 13 00:20:00.433870 containerd[2020]: time="2025-08-13T00:20:00.433724718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bd6797c4b-r7mhx,Uid:1d377c06-c55d-4e39-863b-173de05fa641,Namespace:calico-apiserver,Attempt:1,}" Aug 13 00:20:00.444004 systemd[1]: run-netns-cni\x2dbbd23644\x2d36d2\x2d5aba\x2d0813\x2d3042e753eb2d.mount: Deactivated successfully. Aug 13 00:20:00.498531 containerd[2020]: 2025-08-13 00:19:59.769 [INFO][5397] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--36-k8s-goldmane--768f4c5c69--nvbpr-eth0 goldmane-768f4c5c69- calico-system 49aa71c9-5b84-402e-9316-844c45ada5f3 998 0 2025-08-13 00:19:31 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-31-36 goldmane-768f4c5c69-nvbpr eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali19e00db4553 [] [] }} ContainerID="f44df99cab72c4351a0fb7e02f5e6d4238cf4fe379c81393ff8ee963a20a92c3" Namespace="calico-system" Pod="goldmane-768f4c5c69-nvbpr" WorkloadEndpoint="ip--172--31--31--36-k8s-goldmane--768f4c5c69--nvbpr-" Aug 13 00:20:00.498531 containerd[2020]: 2025-08-13 00:19:59.770 [INFO][5397] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f44df99cab72c4351a0fb7e02f5e6d4238cf4fe379c81393ff8ee963a20a92c3" Namespace="calico-system" Pod="goldmane-768f4c5c69-nvbpr" WorkloadEndpoint="ip--172--31--31--36-k8s-goldmane--768f4c5c69--nvbpr-eth0" Aug 13 00:20:00.498531 containerd[2020]: 2025-08-13 00:20:00.072 [INFO][5446] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f44df99cab72c4351a0fb7e02f5e6d4238cf4fe379c81393ff8ee963a20a92c3" HandleID="k8s-pod-network.f44df99cab72c4351a0fb7e02f5e6d4238cf4fe379c81393ff8ee963a20a92c3" Workload="ip--172--31--31--36-k8s-goldmane--768f4c5c69--nvbpr-eth0" Aug 13 00:20:00.498531 containerd[2020]: 2025-08-13 00:20:00.073 [INFO][5446] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f44df99cab72c4351a0fb7e02f5e6d4238cf4fe379c81393ff8ee963a20a92c3" HandleID="k8s-pod-network.f44df99cab72c4351a0fb7e02f5e6d4238cf4fe379c81393ff8ee963a20a92c3" Workload="ip--172--31--31--36-k8s-goldmane--768f4c5c69--nvbpr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c450), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-31-36", "pod":"goldmane-768f4c5c69-nvbpr", "timestamp":"2025-08-13 00:20:00.070950988 +0000 UTC"}, Hostname:"ip-172-31-31-36", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:20:00.498531 containerd[2020]: 2025-08-13 00:20:00.073 [INFO][5446] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Aug 13 00:20:00.498531 containerd[2020]: 2025-08-13 00:20:00.088 [INFO][5446] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:20:00.498531 containerd[2020]: 2025-08-13 00:20:00.088 [INFO][5446] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-36' Aug 13 00:20:00.498531 containerd[2020]: 2025-08-13 00:20:00.150 [INFO][5446] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f44df99cab72c4351a0fb7e02f5e6d4238cf4fe379c81393ff8ee963a20a92c3" host="ip-172-31-31-36" Aug 13 00:20:00.498531 containerd[2020]: 2025-08-13 00:20:00.190 [INFO][5446] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-36" Aug 13 00:20:00.498531 containerd[2020]: 2025-08-13 00:20:00.211 [INFO][5446] ipam/ipam.go 511: Trying affinity for 192.168.99.64/26 host="ip-172-31-31-36" Aug 13 00:20:00.498531 containerd[2020]: 2025-08-13 00:20:00.219 [INFO][5446] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.64/26 host="ip-172-31-31-36" Aug 13 00:20:00.498531 containerd[2020]: 2025-08-13 00:20:00.232 [INFO][5446] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.64/26 host="ip-172-31-31-36" Aug 13 00:20:00.498531 containerd[2020]: 2025-08-13 00:20:00.232 [INFO][5446] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.99.64/26 handle="k8s-pod-network.f44df99cab72c4351a0fb7e02f5e6d4238cf4fe379c81393ff8ee963a20a92c3" host="ip-172-31-31-36" Aug 13 00:20:00.498531 containerd[2020]: 2025-08-13 00:20:00.242 [INFO][5446] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f44df99cab72c4351a0fb7e02f5e6d4238cf4fe379c81393ff8ee963a20a92c3 Aug 13 00:20:00.498531 containerd[2020]: 2025-08-13 00:20:00.264 [INFO][5446] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.99.64/26 handle="k8s-pod-network.f44df99cab72c4351a0fb7e02f5e6d4238cf4fe379c81393ff8ee963a20a92c3" host="ip-172-31-31-36" Aug 13 00:20:00.498531 containerd[2020]: 2025-08-13 00:20:00.299 [INFO][5446] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.99.71/26] block=192.168.99.64/26 handle="k8s-pod-network.f44df99cab72c4351a0fb7e02f5e6d4238cf4fe379c81393ff8ee963a20a92c3" host="ip-172-31-31-36" Aug 13 00:20:00.498531 containerd[2020]: 2025-08-13 00:20:00.303 [INFO][5446] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.71/26] handle="k8s-pod-network.f44df99cab72c4351a0fb7e02f5e6d4238cf4fe379c81393ff8ee963a20a92c3" host="ip-172-31-31-36" Aug 13 00:20:00.498531 containerd[2020]: 2025-08-13 00:20:00.304 [INFO][5446] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:20:00.498531 containerd[2020]: 2025-08-13 00:20:00.305 [INFO][5446] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.71/26] IPv6=[] ContainerID="f44df99cab72c4351a0fb7e02f5e6d4238cf4fe379c81393ff8ee963a20a92c3" HandleID="k8s-pod-network.f44df99cab72c4351a0fb7e02f5e6d4238cf4fe379c81393ff8ee963a20a92c3" Workload="ip--172--31--31--36-k8s-goldmane--768f4c5c69--nvbpr-eth0" Aug 13 00:20:00.502429 containerd[2020]: 2025-08-13 00:20:00.326 [INFO][5397] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f44df99cab72c4351a0fb7e02f5e6d4238cf4fe379c81393ff8ee963a20a92c3" Namespace="calico-system" Pod="goldmane-768f4c5c69-nvbpr" WorkloadEndpoint="ip--172--31--31--36-k8s-goldmane--768f4c5c69--nvbpr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--36-k8s-goldmane--768f4c5c69--nvbpr-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"49aa71c9-5b84-402e-9316-844c45ada5f3", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 19, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-36", ContainerID:"", Pod:"goldmane-768f4c5c69-nvbpr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.99.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali19e00db4553", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:00.502429 containerd[2020]: 2025-08-13 00:20:00.326 [INFO][5397] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.71/32] ContainerID="f44df99cab72c4351a0fb7e02f5e6d4238cf4fe379c81393ff8ee963a20a92c3" Namespace="calico-system" Pod="goldmane-768f4c5c69-nvbpr" WorkloadEndpoint="ip--172--31--31--36-k8s-goldmane--768f4c5c69--nvbpr-eth0" Aug 13 00:20:00.502429 containerd[2020]: 2025-08-13 00:20:00.326 [INFO][5397] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali19e00db4553 ContainerID="f44df99cab72c4351a0fb7e02f5e6d4238cf4fe379c81393ff8ee963a20a92c3" Namespace="calico-system" Pod="goldmane-768f4c5c69-nvbpr" WorkloadEndpoint="ip--172--31--31--36-k8s-goldmane--768f4c5c69--nvbpr-eth0" Aug 13 00:20:00.502429 containerd[2020]: 2025-08-13 00:20:00.398 [INFO][5397] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f44df99cab72c4351a0fb7e02f5e6d4238cf4fe379c81393ff8ee963a20a92c3" Namespace="calico-system" Pod="goldmane-768f4c5c69-nvbpr" WorkloadEndpoint="ip--172--31--31--36-k8s-goldmane--768f4c5c69--nvbpr-eth0" Aug 13 00:20:00.502429 containerd[2020]: 2025-08-13 00:20:00.406 [INFO][5397] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f44df99cab72c4351a0fb7e02f5e6d4238cf4fe379c81393ff8ee963a20a92c3" Namespace="calico-system" Pod="goldmane-768f4c5c69-nvbpr" 
WorkloadEndpoint="ip--172--31--31--36-k8s-goldmane--768f4c5c69--nvbpr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--36-k8s-goldmane--768f4c5c69--nvbpr-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"49aa71c9-5b84-402e-9316-844c45ada5f3", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 19, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-36", ContainerID:"f44df99cab72c4351a0fb7e02f5e6d4238cf4fe379c81393ff8ee963a20a92c3", Pod:"goldmane-768f4c5c69-nvbpr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.99.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali19e00db4553", MAC:"4a:b3:51:e6:b9:fa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:00.502429 containerd[2020]: 2025-08-13 00:20:00.470 [INFO][5397] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f44df99cab72c4351a0fb7e02f5e6d4238cf4fe379c81393ff8ee963a20a92c3" Namespace="calico-system" Pod="goldmane-768f4c5c69-nvbpr" WorkloadEndpoint="ip--172--31--31--36-k8s-goldmane--768f4c5c69--nvbpr-eth0" Aug 13 00:20:00.510734 systemd-networkd[1936]: cali4521660e031: Gained IPv6LL Aug 13 00:20:00.576734 systemd[1]: Started cri-containerd-69180c655f01e18d774e8c7624f58b68df82e97dab2c1aee886043ab4debc879.scope - libcontainer container 69180c655f01e18d774e8c7624f58b68df82e97dab2c1aee886043ab4debc879. Aug 13 00:20:00.709483 containerd[2020]: time="2025-08-13T00:20:00.707984084Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:20:00.709483 containerd[2020]: time="2025-08-13T00:20:00.708084392Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:20:00.713629 containerd[2020]: time="2025-08-13T00:20:00.708112940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:20:00.715643 containerd[2020]: time="2025-08-13T00:20:00.713912072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:20:00.793891 containerd[2020]: time="2025-08-13T00:20:00.788640584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6mfph,Uid:4a84ee61-fe58-4097-b314-181a986dece7,Namespace:kube-system,Attempt:1,} returns sandbox id \"69180c655f01e18d774e8c7624f58b68df82e97dab2c1aee886043ab4debc879\"" Aug 13 00:20:00.810511 containerd[2020]: time="2025-08-13T00:20:00.810352940Z" level=info msg="CreateContainer within sandbox \"69180c655f01e18d774e8c7624f58b68df82e97dab2c1aee886043ab4debc879\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:20:00.854835 systemd[1]: Started cri-containerd-f44df99cab72c4351a0fb7e02f5e6d4238cf4fe379c81393ff8ee963a20a92c3.scope - libcontainer container f44df99cab72c4351a0fb7e02f5e6d4238cf4fe379c81393ff8ee963a20a92c3. Aug 13 00:20:00.916878 containerd[2020]: time="2025-08-13T00:20:00.916802457Z" level=info msg="CreateContainer within sandbox \"69180c655f01e18d774e8c7624f58b68df82e97dab2c1aee886043ab4debc879\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fdc6d13302f93fbf434a150b179e9e87f382071edbb981058c9b0044b2197c7e\"" Aug 13 00:20:00.927112 containerd[2020]: time="2025-08-13T00:20:00.925714845Z" level=info msg="StartContainer for \"fdc6d13302f93fbf434a150b179e9e87f382071edbb981058c9b0044b2197c7e\"" Aug 13 00:20:01.035222 containerd[2020]: time="2025-08-13T00:20:01.034395929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-nvbpr,Uid:49aa71c9-5b84-402e-9316-844c45ada5f3,Namespace:calico-system,Attempt:1,} returns sandbox id \"f44df99cab72c4351a0fb7e02f5e6d4238cf4fe379c81393ff8ee963a20a92c3\"" Aug 13 00:20:01.062829 systemd[1]: Started cri-containerd-fdc6d13302f93fbf434a150b179e9e87f382071edbb981058c9b0044b2197c7e.scope - libcontainer container fdc6d13302f93fbf434a150b179e9e87f382071edbb981058c9b0044b2197c7e. 
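
The lifecycle above — RunPodSandbox returns a sandbox ID, CreateContainer within that sandbox, StartContainer, with each task hosted in a cri-containerd-*.scope running the io.containerd.runc.v2 shim — is driven by the kubelet over CRI. As a rough illustration only, the same create-then-start shape against containerd's generic Go client (not the CRI path the kubelet actually uses; image reference and IDs are placeholders) is:

    // A sketch of create-then-start against containerd's generic Go client,
    // mirroring the CreateContainer/StartContainer events above. Error
    // handling elided; image reference and IDs are placeholders.
    package main

    import (
        "context"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/cio"
        "github.com/containerd/containerd/namespaces"
        "github.com/containerd/containerd/oci"
    )

    func main() {
        client, _ := containerd.New("/run/containerd/containerd.sock")
        defer client.Close()

        // Kubernetes-managed containers live in containerd's "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        image, _ := client.Pull(ctx, "docker.io/library/busybox:latest",
            containerd.WithPullUnpack)

        container, _ := client.NewContainer(ctx, "demo",
            containerd.WithImage(image),
            containerd.WithNewSnapshot("demo-snap", image),
            containerd.WithNewSpec(oci.WithImageConfig(image)),
        )

        // Creating the task is what spawns the runc v2 shim -- the source of
        // the "loading plugin io.containerd.ttrpc.v1.task" lines above.
        task, _ := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
        _ = task.Start(ctx)
    }
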
Aug 13 00:20:01.261691 containerd[2020]: time="2025-08-13T00:20:01.260758686Z" level=info msg="StartContainer for \"fdc6d13302f93fbf434a150b179e9e87f382071edbb981058c9b0044b2197c7e\" returns successfully" Aug 13 00:20:01.279302 systemd-networkd[1936]: cali2d5175f09c9: Gained IPv6LL Aug 13 00:20:01.283431 containerd[2020]: time="2025-08-13T00:20:01.282884010Z" level=info msg="StopPodSandbox for \"679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447\"" Aug 13 00:20:01.299630 systemd-networkd[1936]: calif77e2de8181: Link UP Aug 13 00:20:01.306187 systemd-networkd[1936]: calif77e2de8181: Gained carrier Aug 13 00:20:01.426403 containerd[2020]: 2025-08-13 00:20:00.890 [INFO][5511] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--36-k8s-calico--apiserver--bd6797c4b--r7mhx-eth0 calico-apiserver-bd6797c4b- calico-apiserver 1d377c06-c55d-4e39-863b-173de05fa641 1013 0 2025-08-13 00:19:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:bd6797c4b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-31-36 calico-apiserver-bd6797c4b-r7mhx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif77e2de8181 [] [] }} ContainerID="321aece02ec29d9620191d549ed851d5fc1e299a25433cd3acb1a69bf893ad8f" Namespace="calico-apiserver" Pod="calico-apiserver-bd6797c4b-r7mhx" WorkloadEndpoint="ip--172--31--31--36-k8s-calico--apiserver--bd6797c4b--r7mhx-" Aug 13 00:20:01.426403 containerd[2020]: 2025-08-13 00:20:00.891 [INFO][5511] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="321aece02ec29d9620191d549ed851d5fc1e299a25433cd3acb1a69bf893ad8f" Namespace="calico-apiserver" Pod="calico-apiserver-bd6797c4b-r7mhx" WorkloadEndpoint="ip--172--31--31--36-k8s-calico--apiserver--bd6797c4b--r7mhx-eth0" Aug 13 00:20:01.426403 containerd[2020]: 2025-08-13 00:20:01.023 [INFO][5580] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="321aece02ec29d9620191d549ed851d5fc1e299a25433cd3acb1a69bf893ad8f" HandleID="k8s-pod-network.321aece02ec29d9620191d549ed851d5fc1e299a25433cd3acb1a69bf893ad8f" Workload="ip--172--31--31--36-k8s-calico--apiserver--bd6797c4b--r7mhx-eth0" Aug 13 00:20:01.426403 containerd[2020]: 2025-08-13 00:20:01.027 [INFO][5580] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="321aece02ec29d9620191d549ed851d5fc1e299a25433cd3acb1a69bf893ad8f" HandleID="k8s-pod-network.321aece02ec29d9620191d549ed851d5fc1e299a25433cd3acb1a69bf893ad8f" Workload="ip--172--31--31--36-k8s-calico--apiserver--bd6797c4b--r7mhx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000393ee0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-31-36", "pod":"calico-apiserver-bd6797c4b-r7mhx", "timestamp":"2025-08-13 00:20:01.023413961 +0000 UTC"}, Hostname:"ip-172-31-31-36", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:20:01.426403 containerd[2020]: 2025-08-13 00:20:01.028 [INFO][5580] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:01.426403 containerd[2020]: 2025-08-13 00:20:01.028 [INFO][5580] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:20:01.426403 containerd[2020]: 2025-08-13 00:20:01.028 [INFO][5580] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-36' Aug 13 00:20:01.426403 containerd[2020]: 2025-08-13 00:20:01.082 [INFO][5580] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.321aece02ec29d9620191d549ed851d5fc1e299a25433cd3acb1a69bf893ad8f" host="ip-172-31-31-36" Aug 13 00:20:01.426403 containerd[2020]: 2025-08-13 00:20:01.103 [INFO][5580] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-36" Aug 13 00:20:01.426403 containerd[2020]: 2025-08-13 00:20:01.118 [INFO][5580] ipam/ipam.go 511: Trying affinity for 192.168.99.64/26 host="ip-172-31-31-36" Aug 13 00:20:01.426403 containerd[2020]: 2025-08-13 00:20:01.126 [INFO][5580] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.64/26 host="ip-172-31-31-36" Aug 13 00:20:01.426403 containerd[2020]: 2025-08-13 00:20:01.141 [INFO][5580] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.64/26 host="ip-172-31-31-36" Aug 13 00:20:01.426403 containerd[2020]: 2025-08-13 00:20:01.151 [INFO][5580] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.99.64/26 handle="k8s-pod-network.321aece02ec29d9620191d549ed851d5fc1e299a25433cd3acb1a69bf893ad8f" host="ip-172-31-31-36" Aug 13 00:20:01.426403 containerd[2020]: 2025-08-13 00:20:01.175 [INFO][5580] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.321aece02ec29d9620191d549ed851d5fc1e299a25433cd3acb1a69bf893ad8f Aug 13 00:20:01.426403 containerd[2020]: 2025-08-13 00:20:01.205 [INFO][5580] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.99.64/26 handle="k8s-pod-network.321aece02ec29d9620191d549ed851d5fc1e299a25433cd3acb1a69bf893ad8f" host="ip-172-31-31-36" Aug 13 00:20:01.426403 containerd[2020]: 2025-08-13 00:20:01.237 [INFO][5580] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.99.72/26] block=192.168.99.64/26 handle="k8s-pod-network.321aece02ec29d9620191d549ed851d5fc1e299a25433cd3acb1a69bf893ad8f" host="ip-172-31-31-36" Aug 13 00:20:01.426403 containerd[2020]: 2025-08-13 00:20:01.239 [INFO][5580] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.72/26] handle="k8s-pod-network.321aece02ec29d9620191d549ed851d5fc1e299a25433cd3acb1a69bf893ad8f" host="ip-172-31-31-36" Aug 13 00:20:01.426403 containerd[2020]: 2025-08-13 00:20:01.241 [INFO][5580] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:20:01.426403 containerd[2020]: 2025-08-13 00:20:01.242 [INFO][5580] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.72/26] IPv6=[] ContainerID="321aece02ec29d9620191d549ed851d5fc1e299a25433cd3acb1a69bf893ad8f" HandleID="k8s-pod-network.321aece02ec29d9620191d549ed851d5fc1e299a25433cd3acb1a69bf893ad8f" Workload="ip--172--31--31--36-k8s-calico--apiserver--bd6797c4b--r7mhx-eth0" Aug 13 00:20:01.429588 containerd[2020]: 2025-08-13 00:20:01.264 [INFO][5511] cni-plugin/k8s.go 418: Populated endpoint ContainerID="321aece02ec29d9620191d549ed851d5fc1e299a25433cd3acb1a69bf893ad8f" Namespace="calico-apiserver" Pod="calico-apiserver-bd6797c4b-r7mhx" WorkloadEndpoint="ip--172--31--31--36-k8s-calico--apiserver--bd6797c4b--r7mhx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--36-k8s-calico--apiserver--bd6797c4b--r7mhx-eth0", GenerateName:"calico-apiserver-bd6797c4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"1d377c06-c55d-4e39-863b-173de05fa641", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 19, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bd6797c4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-36", ContainerID:"", Pod:"calico-apiserver-bd6797c4b-r7mhx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif77e2de8181", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:01.429588 containerd[2020]: 2025-08-13 00:20:01.264 [INFO][5511] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.72/32] ContainerID="321aece02ec29d9620191d549ed851d5fc1e299a25433cd3acb1a69bf893ad8f" Namespace="calico-apiserver" Pod="calico-apiserver-bd6797c4b-r7mhx" WorkloadEndpoint="ip--172--31--31--36-k8s-calico--apiserver--bd6797c4b--r7mhx-eth0" Aug 13 00:20:01.429588 containerd[2020]: 2025-08-13 00:20:01.264 [INFO][5511] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif77e2de8181 ContainerID="321aece02ec29d9620191d549ed851d5fc1e299a25433cd3acb1a69bf893ad8f" Namespace="calico-apiserver" Pod="calico-apiserver-bd6797c4b-r7mhx" WorkloadEndpoint="ip--172--31--31--36-k8s-calico--apiserver--bd6797c4b--r7mhx-eth0" Aug 13 00:20:01.429588 containerd[2020]: 2025-08-13 00:20:01.321 [INFO][5511] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="321aece02ec29d9620191d549ed851d5fc1e299a25433cd3acb1a69bf893ad8f" Namespace="calico-apiserver" Pod="calico-apiserver-bd6797c4b-r7mhx" WorkloadEndpoint="ip--172--31--31--36-k8s-calico--apiserver--bd6797c4b--r7mhx-eth0" Aug 13 00:20:01.429588 containerd[2020]: 2025-08-13 00:20:01.328 [INFO][5511] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="321aece02ec29d9620191d549ed851d5fc1e299a25433cd3acb1a69bf893ad8f" Namespace="calico-apiserver" Pod="calico-apiserver-bd6797c4b-r7mhx" WorkloadEndpoint="ip--172--31--31--36-k8s-calico--apiserver--bd6797c4b--r7mhx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--36-k8s-calico--apiserver--bd6797c4b--r7mhx-eth0", GenerateName:"calico-apiserver-bd6797c4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"1d377c06-c55d-4e39-863b-173de05fa641", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 19, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bd6797c4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-36", ContainerID:"321aece02ec29d9620191d549ed851d5fc1e299a25433cd3acb1a69bf893ad8f", Pod:"calico-apiserver-bd6797c4b-r7mhx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif77e2de8181", MAC:"ca:39:b5:9c:b7:f9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:01.429588 containerd[2020]: 2025-08-13 00:20:01.410 [INFO][5511] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="321aece02ec29d9620191d549ed851d5fc1e299a25433cd3acb1a69bf893ad8f" Namespace="calico-apiserver" Pod="calico-apiserver-bd6797c4b-r7mhx" WorkloadEndpoint="ip--172--31--31--36-k8s-calico--apiserver--bd6797c4b--r7mhx-eth0" Aug 13 00:20:01.587734 containerd[2020]: time="2025-08-13T00:20:01.585495752Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:20:01.587734 containerd[2020]: time="2025-08-13T00:20:01.585670472Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:20:01.587734 containerd[2020]: time="2025-08-13T00:20:01.585700820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:20:01.587734 containerd[2020]: time="2025-08-13T00:20:01.585869744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:20:01.720832 systemd[1]: Started cri-containerd-321aece02ec29d9620191d549ed851d5fc1e299a25433cd3acb1a69bf893ad8f.scope - libcontainer container 321aece02ec29d9620191d549ed851d5fc1e299a25433cd3acb1a69bf893ad8f. Aug 13 00:20:01.730399 systemd-networkd[1936]: cali19e00db4553: Gained IPv6LL Aug 13 00:20:01.845578 containerd[2020]: 2025-08-13 00:20:01.615 [WARNING][5635] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--36-k8s-coredns--668d6bf9bc--wfn57-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5a64667d-5251-47e9-8797-9a7e4c011870", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 19, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-36", ContainerID:"0cbf13f72694e608b62bafabbe57c973ed24b44bec3ec0d38a877c256a80bcba", Pod:"coredns-668d6bf9bc-wfn57", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid2a0434ef12", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:01.845578 containerd[2020]: 2025-08-13 00:20:01.622 [INFO][5635] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447" Aug 13 00:20:01.845578 containerd[2020]: 2025-08-13 00:20:01.622 [INFO][5635] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447" iface="eth0" netns="" Aug 13 00:20:01.845578 containerd[2020]: 2025-08-13 00:20:01.622 [INFO][5635] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447" Aug 13 00:20:01.845578 containerd[2020]: 2025-08-13 00:20:01.622 [INFO][5635] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447" Aug 13 00:20:01.845578 containerd[2020]: 2025-08-13 00:20:01.800 [INFO][5673] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447" HandleID="k8s-pod-network.679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447" Workload="ip--172--31--31--36-k8s-coredns--668d6bf9bc--wfn57-eth0" Aug 13 00:20:01.845578 containerd[2020]: 2025-08-13 00:20:01.801 [INFO][5673] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:01.845578 containerd[2020]: 2025-08-13 00:20:01.801 [INFO][5673] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:20:01.845578 containerd[2020]: 2025-08-13 00:20:01.827 [WARNING][5673] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447" HandleID="k8s-pod-network.679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447" Workload="ip--172--31--31--36-k8s-coredns--668d6bf9bc--wfn57-eth0" Aug 13 00:20:01.845578 containerd[2020]: 2025-08-13 00:20:01.828 [INFO][5673] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447" HandleID="k8s-pod-network.679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447" Workload="ip--172--31--31--36-k8s-coredns--668d6bf9bc--wfn57-eth0" Aug 13 00:20:01.845578 containerd[2020]: 2025-08-13 00:20:01.835 [INFO][5673] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:20:01.845578 containerd[2020]: 2025-08-13 00:20:01.840 [INFO][5635] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447" Aug 13 00:20:01.848391 containerd[2020]: time="2025-08-13T00:20:01.847624641Z" level=info msg="TearDown network for sandbox \"679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447\" successfully" Aug 13 00:20:01.848391 containerd[2020]: time="2025-08-13T00:20:01.847682589Z" level=info msg="StopPodSandbox for \"679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447\" returns successfully" Aug 13 00:20:01.848889 containerd[2020]: time="2025-08-13T00:20:01.848749329Z" level=info msg="RemovePodSandbox for \"679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447\"" Aug 13 00:20:01.848889 containerd[2020]: time="2025-08-13T00:20:01.848811453Z" level=info msg="Forcibly stopping sandbox \"679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447\"" Aug 13 00:20:02.070659 containerd[2020]: time="2025-08-13T00:20:02.070357662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bd6797c4b-r7mhx,Uid:1d377c06-c55d-4e39-863b-173de05fa641,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"321aece02ec29d9620191d549ed851d5fc1e299a25433cd3acb1a69bf893ad8f\"" Aug 13 00:20:02.118592 kubelet[3342]: I0813 00:20:02.116978 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-6mfph" podStartSLOduration=55.116952343 podStartE2EDuration="55.116952343s" podCreationTimestamp="2025-08-13 00:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:20:01.966780226 +0000 UTC m=+60.949293268" watchObservedRunningTime="2025-08-13 00:20:02.116952343 +0000 UTC m=+61.099465325" Aug 13 00:20:02.527627 containerd[2020]: 2025-08-13 00:20:02.227 [WARNING][5703] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--36-k8s-coredns--668d6bf9bc--wfn57-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5a64667d-5251-47e9-8797-9a7e4c011870", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 19, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-36", ContainerID:"0cbf13f72694e608b62bafabbe57c973ed24b44bec3ec0d38a877c256a80bcba", Pod:"coredns-668d6bf9bc-wfn57", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid2a0434ef12", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:02.527627 containerd[2020]: 2025-08-13 00:20:02.229 [INFO][5703] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447" Aug 13 00:20:02.527627 containerd[2020]: 2025-08-13 00:20:02.229 [INFO][5703] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447" iface="eth0" netns="" Aug 13 00:20:02.527627 containerd[2020]: 2025-08-13 00:20:02.229 [INFO][5703] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447" Aug 13 00:20:02.527627 containerd[2020]: 2025-08-13 00:20:02.229 [INFO][5703] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447" Aug 13 00:20:02.527627 containerd[2020]: 2025-08-13 00:20:02.385 [INFO][5723] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447" HandleID="k8s-pod-network.679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447" Workload="ip--172--31--31--36-k8s-coredns--668d6bf9bc--wfn57-eth0" Aug 13 00:20:02.527627 containerd[2020]: 2025-08-13 00:20:02.387 [INFO][5723] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:02.527627 containerd[2020]: 2025-08-13 00:20:02.391 [INFO][5723] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:20:02.527627 containerd[2020]: 2025-08-13 00:20:02.473 [WARNING][5723] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447" HandleID="k8s-pod-network.679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447" Workload="ip--172--31--31--36-k8s-coredns--668d6bf9bc--wfn57-eth0" Aug 13 00:20:02.527627 containerd[2020]: 2025-08-13 00:20:02.474 [INFO][5723] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447" HandleID="k8s-pod-network.679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447" Workload="ip--172--31--31--36-k8s-coredns--668d6bf9bc--wfn57-eth0" Aug 13 00:20:02.527627 containerd[2020]: 2025-08-13 00:20:02.504 [INFO][5723] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:20:02.527627 containerd[2020]: 2025-08-13 00:20:02.515 [INFO][5703] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447" Aug 13 00:20:02.527627 containerd[2020]: time="2025-08-13T00:20:02.523694217Z" level=info msg="TearDown network for sandbox \"679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447\" successfully" Aug 13 00:20:02.538896 containerd[2020]: time="2025-08-13T00:20:02.538598913Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:20:02.540385 containerd[2020]: time="2025-08-13T00:20:02.540240033Z" level=info msg="RemovePodSandbox \"679327e9ada9f93a48e4ee3249a4d725d4c8f5cd75720146c7539bf41fd13447\" returns successfully" Aug 13 00:20:02.543580 containerd[2020]: time="2025-08-13T00:20:02.543504141Z" level=info msg="StopPodSandbox for \"3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863\"" Aug 13 00:20:02.678508 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3192368245.mount: Deactivated successfully. 
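
The teardown entries above show why a forced or repeated StopPodSandbox still returns successfully: when the IPAM plugin is asked to release an address whose allocation is already gone, it logs the WARNING and treats the release as a no-op rather than an error. A toy Go sketch of that idempotent-release behaviour, again with invented names rather than Calico code:

    // A toy sketch of the idempotent release in the WARNING entries above:
    // a missing allocation is logged and ignored, never surfaced as an
    // error. Invented names, not Calico code.
    package main

    import (
        "fmt"
        "net/netip"
    )

    type ipamStore struct{ byHandle map[string]netip.Addr }

    func (s *ipamStore) release(handle string) {
        addr, ok := s.byHandle[handle]
        if !ok {
            // "Asked to release address but it doesn't exist. Ignoring"
            fmt.Printf("release %q: no allocation found, ignoring\n", handle)
            return
        }
        delete(s.byHandle, handle)
        fmt.Printf("released %s for %q\n", addr, handle)
    }

    func main() {
        s := &ipamStore{byHandle: map[string]netip.Addr{}}
        s.release("k8s-pod-network.example") // repeated teardown: a safe no-op
    }
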
Aug 13 00:20:02.755281 systemd-networkd[1936]: calif77e2de8181: Gained IPv6LL Aug 13 00:20:02.760339 containerd[2020]: time="2025-08-13T00:20:02.759923734Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:02.767475 containerd[2020]: time="2025-08-13T00:20:02.767396542Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Aug 13 00:20:02.770513 containerd[2020]: time="2025-08-13T00:20:02.770392918Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:02.781794 containerd[2020]: time="2025-08-13T00:20:02.781540498Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:02.786777 containerd[2020]: time="2025-08-13T00:20:02.786071218Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 6.539203953s" Aug 13 00:20:02.786777 containerd[2020]: time="2025-08-13T00:20:02.786171646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Aug 13 00:20:02.791659 containerd[2020]: time="2025-08-13T00:20:02.790934482Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 00:20:02.794905 containerd[2020]: time="2025-08-13T00:20:02.794512270Z" level=info msg="CreateContainer within sandbox \"cba175acab0db722a627c457589c1eaf9108b2bfa4fbdea0ec4606240ebd0960\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 13 00:20:02.823108 containerd[2020]: 2025-08-13 00:20:02.682 [WARNING][5743] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--36-k8s-calico--apiserver--bd6797c4b--4bgrn-eth0", GenerateName:"calico-apiserver-bd6797c4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"28081b87-c735-43fc-9236-d52ebf6d339c", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 19, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bd6797c4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-36", ContainerID:"b3720483090f6aae8f24555784e616c397f2d402558a49b884736d6da792ed17", Pod:"calico-apiserver-bd6797c4b-4bgrn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5deeb3c7584", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:02.823108 containerd[2020]: 2025-08-13 00:20:02.683 [INFO][5743] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863" Aug 13 00:20:02.823108 containerd[2020]: 2025-08-13 00:20:02.683 [INFO][5743] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863" iface="eth0" netns="" Aug 13 00:20:02.823108 containerd[2020]: 2025-08-13 00:20:02.683 [INFO][5743] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863" Aug 13 00:20:02.823108 containerd[2020]: 2025-08-13 00:20:02.683 [INFO][5743] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863" Aug 13 00:20:02.823108 containerd[2020]: 2025-08-13 00:20:02.746 [INFO][5752] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863" HandleID="k8s-pod-network.3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863" Workload="ip--172--31--31--36-k8s-calico--apiserver--bd6797c4b--4bgrn-eth0" Aug 13 00:20:02.823108 containerd[2020]: 2025-08-13 00:20:02.750 [INFO][5752] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:02.823108 containerd[2020]: 2025-08-13 00:20:02.750 [INFO][5752] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:20:02.823108 containerd[2020]: 2025-08-13 00:20:02.783 [WARNING][5752] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863" HandleID="k8s-pod-network.3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863" Workload="ip--172--31--31--36-k8s-calico--apiserver--bd6797c4b--4bgrn-eth0" Aug 13 00:20:02.823108 containerd[2020]: 2025-08-13 00:20:02.783 [INFO][5752] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863" HandleID="k8s-pod-network.3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863" Workload="ip--172--31--31--36-k8s-calico--apiserver--bd6797c4b--4bgrn-eth0" Aug 13 00:20:02.823108 containerd[2020]: 2025-08-13 00:20:02.787 [INFO][5752] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:20:02.823108 containerd[2020]: 2025-08-13 00:20:02.800 [INFO][5743] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863" Aug 13 00:20:02.823108 containerd[2020]: time="2025-08-13T00:20:02.822782266Z" level=info msg="TearDown network for sandbox \"3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863\" successfully" Aug 13 00:20:02.823108 containerd[2020]: time="2025-08-13T00:20:02.822820750Z" level=info msg="StopPodSandbox for \"3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863\" returns successfully" Aug 13 00:20:02.824567 containerd[2020]: time="2025-08-13T00:20:02.823981618Z" level=info msg="RemovePodSandbox for \"3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863\"" Aug 13 00:20:02.824567 containerd[2020]: time="2025-08-13T00:20:02.824034982Z" level=info msg="Forcibly stopping sandbox \"3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863\"" Aug 13 00:20:02.864713 containerd[2020]: time="2025-08-13T00:20:02.863481994Z" level=info msg="CreateContainer within sandbox \"cba175acab0db722a627c457589c1eaf9108b2bfa4fbdea0ec4606240ebd0960\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"9bbcfc62ffe61986693ef6f336353884a16328e39d2565218d541a728aa95520\"" Aug 13 00:20:02.865842 containerd[2020]: time="2025-08-13T00:20:02.865723846Z" level=info msg="StartContainer for \"9bbcfc62ffe61986693ef6f336353884a16328e39d2565218d541a728aa95520\"" Aug 13 00:20:02.957384 systemd[1]: Started cri-containerd-9bbcfc62ffe61986693ef6f336353884a16328e39d2565218d541a728aa95520.scope - libcontainer container 9bbcfc62ffe61986693ef6f336353884a16328e39d2565218d541a728aa95520. Aug 13 00:20:03.113955 containerd[2020]: 2025-08-13 00:20:02.957 [WARNING][5770] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--36-k8s-calico--apiserver--bd6797c4b--4bgrn-eth0", GenerateName:"calico-apiserver-bd6797c4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"28081b87-c735-43fc-9236-d52ebf6d339c", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 19, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bd6797c4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-36", ContainerID:"b3720483090f6aae8f24555784e616c397f2d402558a49b884736d6da792ed17", Pod:"calico-apiserver-bd6797c4b-4bgrn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5deeb3c7584", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:03.113955 containerd[2020]: 2025-08-13 00:20:02.971 [INFO][5770] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863" Aug 13 00:20:03.113955 containerd[2020]: 2025-08-13 00:20:02.973 [INFO][5770] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863" iface="eth0" netns="" Aug 13 00:20:03.113955 containerd[2020]: 2025-08-13 00:20:02.973 [INFO][5770] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863" Aug 13 00:20:03.113955 containerd[2020]: 2025-08-13 00:20:02.973 [INFO][5770] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863" Aug 13 00:20:03.113955 containerd[2020]: 2025-08-13 00:20:03.067 [INFO][5801] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863" HandleID="k8s-pod-network.3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863" Workload="ip--172--31--31--36-k8s-calico--apiserver--bd6797c4b--4bgrn-eth0" Aug 13 00:20:03.113955 containerd[2020]: 2025-08-13 00:20:03.068 [INFO][5801] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:03.113955 containerd[2020]: 2025-08-13 00:20:03.068 [INFO][5801] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:20:03.113955 containerd[2020]: 2025-08-13 00:20:03.097 [WARNING][5801] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863" HandleID="k8s-pod-network.3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863" Workload="ip--172--31--31--36-k8s-calico--apiserver--bd6797c4b--4bgrn-eth0" Aug 13 00:20:03.113955 containerd[2020]: 2025-08-13 00:20:03.097 [INFO][5801] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863" HandleID="k8s-pod-network.3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863" Workload="ip--172--31--31--36-k8s-calico--apiserver--bd6797c4b--4bgrn-eth0" Aug 13 00:20:03.113955 containerd[2020]: 2025-08-13 00:20:03.105 [INFO][5801] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:20:03.113955 containerd[2020]: 2025-08-13 00:20:03.111 [INFO][5770] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863" Aug 13 00:20:03.116701 containerd[2020]: time="2025-08-13T00:20:03.116330216Z" level=info msg="TearDown network for sandbox \"3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863\" successfully" Aug 13 00:20:03.125111 containerd[2020]: time="2025-08-13T00:20:03.124635212Z" level=info msg="StartContainer for \"9bbcfc62ffe61986693ef6f336353884a16328e39d2565218d541a728aa95520\" returns successfully" Aug 13 00:20:03.132584 containerd[2020]: time="2025-08-13T00:20:03.132438236Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:20:03.133104 containerd[2020]: time="2025-08-13T00:20:03.132589352Z" level=info msg="RemovePodSandbox \"3dc404cf2934fbcb2385a8f68fe4be49df242d1e8d29541155b83be7c4ce3863\" returns successfully" Aug 13 00:20:03.134097 containerd[2020]: time="2025-08-13T00:20:03.133352312Z" level=info msg="StopPodSandbox for \"bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c\"" Aug 13 00:20:03.210823 systemd[1]: Started sshd@9-172.31.31.36:22-139.178.89.65:54914.service - OpenSSH per-connection server daemon (139.178.89.65:54914). Aug 13 00:20:03.352359 containerd[2020]: 2025-08-13 00:20:03.235 [WARNING][5826] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--36-k8s-calico--kube--controllers--7689d867cd--nh2hh-eth0", GenerateName:"calico-kube-controllers-7689d867cd-", Namespace:"calico-system", SelfLink:"", UID:"5d408832-1099-4d99-a077-a600c984323a", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 19, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7689d867cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-36", ContainerID:"30bbab45f41add62867ae1f4588d64e8da01048eabc77d51a492890fde348b4d", Pod:"calico-kube-controllers-7689d867cd-nh2hh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.99.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8beee0ad0d0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:03.352359 containerd[2020]: 2025-08-13 00:20:03.237 [INFO][5826] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c" Aug 13 00:20:03.352359 containerd[2020]: 2025-08-13 00:20:03.237 [INFO][5826] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c" iface="eth0" netns="" Aug 13 00:20:03.352359 containerd[2020]: 2025-08-13 00:20:03.237 [INFO][5826] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c" Aug 13 00:20:03.352359 containerd[2020]: 2025-08-13 00:20:03.237 [INFO][5826] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c" Aug 13 00:20:03.352359 containerd[2020]: 2025-08-13 00:20:03.327 [INFO][5835] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c" HandleID="k8s-pod-network.bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c" Workload="ip--172--31--31--36-k8s-calico--kube--controllers--7689d867cd--nh2hh-eth0" Aug 13 00:20:03.352359 containerd[2020]: 2025-08-13 00:20:03.328 [INFO][5835] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:03.352359 containerd[2020]: 2025-08-13 00:20:03.328 [INFO][5835] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:20:03.352359 containerd[2020]: 2025-08-13 00:20:03.343 [WARNING][5835] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c" HandleID="k8s-pod-network.bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c" Workload="ip--172--31--31--36-k8s-calico--kube--controllers--7689d867cd--nh2hh-eth0" Aug 13 00:20:03.352359 containerd[2020]: 2025-08-13 00:20:03.343 [INFO][5835] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c" HandleID="k8s-pod-network.bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c" Workload="ip--172--31--31--36-k8s-calico--kube--controllers--7689d867cd--nh2hh-eth0" Aug 13 00:20:03.352359 containerd[2020]: 2025-08-13 00:20:03.346 [INFO][5835] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:20:03.352359 containerd[2020]: 2025-08-13 00:20:03.349 [INFO][5826] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c" Aug 13 00:20:03.353694 containerd[2020]: time="2025-08-13T00:20:03.352647969Z" level=info msg="TearDown network for sandbox \"bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c\" successfully" Aug 13 00:20:03.353694 containerd[2020]: time="2025-08-13T00:20:03.352725033Z" level=info msg="StopPodSandbox for \"bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c\" returns successfully" Aug 13 00:20:03.354676 containerd[2020]: time="2025-08-13T00:20:03.354620109Z" level=info msg="RemovePodSandbox for \"bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c\"" Aug 13 00:20:03.354845 containerd[2020]: time="2025-08-13T00:20:03.354692133Z" level=info msg="Forcibly stopping sandbox \"bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c\"" Aug 13 00:20:03.454271 sshd[5833]: Accepted publickey for core from 139.178.89.65 port 54914 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:20:03.462608 sshd[5833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:20:03.478005 systemd-logind[1993]: New session 10 of user core. Aug 13 00:20:03.485913 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 13 00:20:03.584291 containerd[2020]: 2025-08-13 00:20:03.466 [WARNING][5854] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--36-k8s-calico--kube--controllers--7689d867cd--nh2hh-eth0", GenerateName:"calico-kube-controllers-7689d867cd-", Namespace:"calico-system", SelfLink:"", UID:"5d408832-1099-4d99-a077-a600c984323a", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 19, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7689d867cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-36", ContainerID:"30bbab45f41add62867ae1f4588d64e8da01048eabc77d51a492890fde348b4d", Pod:"calico-kube-controllers-7689d867cd-nh2hh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.99.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8beee0ad0d0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:03.584291 containerd[2020]: 2025-08-13 00:20:03.467 [INFO][5854] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c" Aug 13 00:20:03.584291 containerd[2020]: 2025-08-13 00:20:03.467 [INFO][5854] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c" iface="eth0" netns="" Aug 13 00:20:03.584291 containerd[2020]: 2025-08-13 00:20:03.467 [INFO][5854] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c" Aug 13 00:20:03.584291 containerd[2020]: 2025-08-13 00:20:03.467 [INFO][5854] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c" Aug 13 00:20:03.584291 containerd[2020]: 2025-08-13 00:20:03.554 [INFO][5861] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c" HandleID="k8s-pod-network.bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c" Workload="ip--172--31--31--36-k8s-calico--kube--controllers--7689d867cd--nh2hh-eth0" Aug 13 00:20:03.584291 containerd[2020]: 2025-08-13 00:20:03.554 [INFO][5861] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:03.584291 containerd[2020]: 2025-08-13 00:20:03.555 [INFO][5861] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:20:03.584291 containerd[2020]: 2025-08-13 00:20:03.573 [WARNING][5861] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c" HandleID="k8s-pod-network.bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c" Workload="ip--172--31--31--36-k8s-calico--kube--controllers--7689d867cd--nh2hh-eth0" Aug 13 00:20:03.584291 containerd[2020]: 2025-08-13 00:20:03.573 [INFO][5861] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c" HandleID="k8s-pod-network.bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c" Workload="ip--172--31--31--36-k8s-calico--kube--controllers--7689d867cd--nh2hh-eth0" Aug 13 00:20:03.584291 containerd[2020]: 2025-08-13 00:20:03.576 [INFO][5861] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:20:03.584291 containerd[2020]: 2025-08-13 00:20:03.580 [INFO][5854] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c" Aug 13 00:20:03.585823 containerd[2020]: time="2025-08-13T00:20:03.584348998Z" level=info msg="TearDown network for sandbox \"bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c\" successfully" Aug 13 00:20:03.594389 containerd[2020]: time="2025-08-13T00:20:03.594275098Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:20:03.595998 containerd[2020]: time="2025-08-13T00:20:03.595533550Z" level=info msg="RemovePodSandbox \"bc699247fce18124acebce36a841bf81c7f3afa5c6cca8e669180d8728afd37c\" returns successfully" Aug 13 00:20:03.599496 containerd[2020]: time="2025-08-13T00:20:03.597148354Z" level=info msg="StopPodSandbox for \"06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8\"" Aug 13 00:20:03.870059 containerd[2020]: 2025-08-13 00:20:03.737 [WARNING][5883] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--36-k8s-coredns--668d6bf9bc--6mfph-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4a84ee61-fe58-4097-b314-181a986dece7", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 19, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-36", ContainerID:"69180c655f01e18d774e8c7624f58b68df82e97dab2c1aee886043ab4debc879", Pod:"coredns-668d6bf9bc-6mfph", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2d5175f09c9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:03.870059 containerd[2020]: 2025-08-13 00:20:03.738 [INFO][5883] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8" Aug 13 00:20:03.870059 containerd[2020]: 2025-08-13 00:20:03.738 [INFO][5883] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8" iface="eth0" netns="" Aug 13 00:20:03.870059 containerd[2020]: 2025-08-13 00:20:03.738 [INFO][5883] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8" Aug 13 00:20:03.870059 containerd[2020]: 2025-08-13 00:20:03.738 [INFO][5883] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8" Aug 13 00:20:03.870059 containerd[2020]: 2025-08-13 00:20:03.828 [INFO][5892] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8" HandleID="k8s-pod-network.06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8" Workload="ip--172--31--31--36-k8s-coredns--668d6bf9bc--6mfph-eth0" Aug 13 00:20:03.870059 containerd[2020]: 2025-08-13 00:20:03.828 [INFO][5892] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:03.870059 containerd[2020]: 2025-08-13 00:20:03.828 [INFO][5892] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:20:03.870059 containerd[2020]: 2025-08-13 00:20:03.854 [WARNING][5892] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8" HandleID="k8s-pod-network.06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8" Workload="ip--172--31--31--36-k8s-coredns--668d6bf9bc--6mfph-eth0" Aug 13 00:20:03.870059 containerd[2020]: 2025-08-13 00:20:03.854 [INFO][5892] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8" HandleID="k8s-pod-network.06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8" Workload="ip--172--31--31--36-k8s-coredns--668d6bf9bc--6mfph-eth0" Aug 13 00:20:03.870059 containerd[2020]: 2025-08-13 00:20:03.861 [INFO][5892] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:20:03.870059 containerd[2020]: 2025-08-13 00:20:03.866 [INFO][5883] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8" Aug 13 00:20:03.872661 containerd[2020]: time="2025-08-13T00:20:03.870585179Z" level=info msg="TearDown network for sandbox \"06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8\" successfully" Aug 13 00:20:03.872661 containerd[2020]: time="2025-08-13T00:20:03.870626399Z" level=info msg="StopPodSandbox for \"06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8\" returns successfully" Aug 13 00:20:03.872661 containerd[2020]: time="2025-08-13T00:20:03.871811975Z" level=info msg="RemovePodSandbox for \"06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8\"" Aug 13 00:20:03.872661 containerd[2020]: time="2025-08-13T00:20:03.871871243Z" level=info msg="Forcibly stopping sandbox \"06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8\"" Aug 13 00:20:03.889859 sshd[5833]: pam_unix(sshd:session): session closed for user core Aug 13 00:20:03.903137 systemd[1]: sshd@9-172.31.31.36:22-139.178.89.65:54914.service: Deactivated successfully. Aug 13 00:20:03.913391 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 00:20:03.920273 systemd-logind[1993]: Session 10 logged out. Waiting for processes to exit. Aug 13 00:20:03.924871 systemd-logind[1993]: Removed session 10. Aug 13 00:20:04.113151 containerd[2020]: 2025-08-13 00:20:03.965 [WARNING][5906] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--36-k8s-coredns--668d6bf9bc--6mfph-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4a84ee61-fe58-4097-b314-181a986dece7", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 19, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-36", ContainerID:"69180c655f01e18d774e8c7624f58b68df82e97dab2c1aee886043ab4debc879", Pod:"coredns-668d6bf9bc-6mfph", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2d5175f09c9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:04.113151 containerd[2020]: 2025-08-13 00:20:03.966 [INFO][5906] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8" Aug 13 00:20:04.113151 containerd[2020]: 2025-08-13 00:20:03.966 [INFO][5906] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8" iface="eth0" netns="" Aug 13 00:20:04.113151 containerd[2020]: 2025-08-13 00:20:03.966 [INFO][5906] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8" Aug 13 00:20:04.113151 containerd[2020]: 2025-08-13 00:20:03.966 [INFO][5906] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8" Aug 13 00:20:04.113151 containerd[2020]: 2025-08-13 00:20:04.044 [INFO][5915] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8" HandleID="k8s-pod-network.06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8" Workload="ip--172--31--31--36-k8s-coredns--668d6bf9bc--6mfph-eth0" Aug 13 00:20:04.113151 containerd[2020]: 2025-08-13 00:20:04.045 [INFO][5915] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:04.113151 containerd[2020]: 2025-08-13 00:20:04.045 [INFO][5915] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:20:04.113151 containerd[2020]: 2025-08-13 00:20:04.087 [WARNING][5915] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8" HandleID="k8s-pod-network.06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8" Workload="ip--172--31--31--36-k8s-coredns--668d6bf9bc--6mfph-eth0" Aug 13 00:20:04.113151 containerd[2020]: 2025-08-13 00:20:04.087 [INFO][5915] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8" HandleID="k8s-pod-network.06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8" Workload="ip--172--31--31--36-k8s-coredns--668d6bf9bc--6mfph-eth0" Aug 13 00:20:04.113151 containerd[2020]: 2025-08-13 00:20:04.100 [INFO][5915] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:20:04.113151 containerd[2020]: 2025-08-13 00:20:04.105 [INFO][5906] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8" Aug 13 00:20:04.113151 containerd[2020]: time="2025-08-13T00:20:04.110753192Z" level=info msg="TearDown network for sandbox \"06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8\" successfully" Aug 13 00:20:04.118088 containerd[2020]: time="2025-08-13T00:20:04.118027892Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:20:04.118962 containerd[2020]: time="2025-08-13T00:20:04.118547324Z" level=info msg="RemovePodSandbox \"06a6d85598948266def8b881c9aa5a3a442234c9bed962954dd7424f63d45ab8\" returns successfully" Aug 13 00:20:04.121584 containerd[2020]: time="2025-08-13T00:20:04.120094040Z" level=info msg="StopPodSandbox for \"085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc\"" Aug 13 00:20:04.306899 containerd[2020]: 2025-08-13 00:20:04.224 [WARNING][5930] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--36-k8s-goldmane--768f4c5c69--nvbpr-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"49aa71c9-5b84-402e-9316-844c45ada5f3", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 19, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-36", ContainerID:"f44df99cab72c4351a0fb7e02f5e6d4238cf4fe379c81393ff8ee963a20a92c3", Pod:"goldmane-768f4c5c69-nvbpr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.99.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali19e00db4553", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:04.306899 containerd[2020]: 2025-08-13 00:20:04.225 [INFO][5930] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc" Aug 13 00:20:04.306899 containerd[2020]: 2025-08-13 00:20:04.225 [INFO][5930] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc" iface="eth0" netns="" Aug 13 00:20:04.306899 containerd[2020]: 2025-08-13 00:20:04.225 [INFO][5930] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc" Aug 13 00:20:04.306899 containerd[2020]: 2025-08-13 00:20:04.225 [INFO][5930] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc" Aug 13 00:20:04.306899 containerd[2020]: 2025-08-13 00:20:04.279 [INFO][5938] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc" HandleID="k8s-pod-network.085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc" Workload="ip--172--31--31--36-k8s-goldmane--768f4c5c69--nvbpr-eth0" Aug 13 00:20:04.306899 containerd[2020]: 2025-08-13 00:20:04.279 [INFO][5938] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:04.306899 containerd[2020]: 2025-08-13 00:20:04.280 [INFO][5938] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:20:04.306899 containerd[2020]: 2025-08-13 00:20:04.297 [WARNING][5938] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc" HandleID="k8s-pod-network.085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc" Workload="ip--172--31--31--36-k8s-goldmane--768f4c5c69--nvbpr-eth0" Aug 13 00:20:04.306899 containerd[2020]: 2025-08-13 00:20:04.297 [INFO][5938] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc" HandleID="k8s-pod-network.085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc" Workload="ip--172--31--31--36-k8s-goldmane--768f4c5c69--nvbpr-eth0" Aug 13 00:20:04.306899 containerd[2020]: 2025-08-13 00:20:04.300 [INFO][5938] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:20:04.306899 containerd[2020]: 2025-08-13 00:20:04.303 [INFO][5930] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc" Aug 13 00:20:04.308188 containerd[2020]: time="2025-08-13T00:20:04.306878097Z" level=info msg="TearDown network for sandbox \"085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc\" successfully" Aug 13 00:20:04.308188 containerd[2020]: time="2025-08-13T00:20:04.306944457Z" level=info msg="StopPodSandbox for \"085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc\" returns successfully" Aug 13 00:20:04.309002 containerd[2020]: time="2025-08-13T00:20:04.308852085Z" level=info msg="RemovePodSandbox for \"085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc\"" Aug 13 00:20:04.309002 containerd[2020]: time="2025-08-13T00:20:04.308958189Z" level=info msg="Forcibly stopping sandbox \"085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc\"" Aug 13 00:20:04.479684 containerd[2020]: 2025-08-13 00:20:04.388 [WARNING][5953] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--36-k8s-goldmane--768f4c5c69--nvbpr-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"49aa71c9-5b84-402e-9316-844c45ada5f3", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 19, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-36", ContainerID:"f44df99cab72c4351a0fb7e02f5e6d4238cf4fe379c81393ff8ee963a20a92c3", Pod:"goldmane-768f4c5c69-nvbpr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.99.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali19e00db4553", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:04.479684 containerd[2020]: 2025-08-13 00:20:04.390 [INFO][5953] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc" Aug 13 00:20:04.479684 containerd[2020]: 2025-08-13 00:20:04.390 [INFO][5953] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc" iface="eth0" netns="" Aug 13 00:20:04.479684 containerd[2020]: 2025-08-13 00:20:04.390 [INFO][5953] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc" Aug 13 00:20:04.479684 containerd[2020]: 2025-08-13 00:20:04.390 [INFO][5953] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc" Aug 13 00:20:04.479684 containerd[2020]: 2025-08-13 00:20:04.452 [INFO][5962] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc" HandleID="k8s-pod-network.085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc" Workload="ip--172--31--31--36-k8s-goldmane--768f4c5c69--nvbpr-eth0" Aug 13 00:20:04.479684 containerd[2020]: 2025-08-13 00:20:04.452 [INFO][5962] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:04.479684 containerd[2020]: 2025-08-13 00:20:04.452 [INFO][5962] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:20:04.479684 containerd[2020]: 2025-08-13 00:20:04.468 [WARNING][5962] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc" HandleID="k8s-pod-network.085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc" Workload="ip--172--31--31--36-k8s-goldmane--768f4c5c69--nvbpr-eth0" Aug 13 00:20:04.479684 containerd[2020]: 2025-08-13 00:20:04.468 [INFO][5962] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc" HandleID="k8s-pod-network.085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc" Workload="ip--172--31--31--36-k8s-goldmane--768f4c5c69--nvbpr-eth0" Aug 13 00:20:04.479684 containerd[2020]: 2025-08-13 00:20:04.471 [INFO][5962] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:20:04.479684 containerd[2020]: 2025-08-13 00:20:04.475 [INFO][5953] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc" Aug 13 00:20:04.479684 containerd[2020]: time="2025-08-13T00:20:04.478800466Z" level=info msg="TearDown network for sandbox \"085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc\" successfully" Aug 13 00:20:04.487950 containerd[2020]: time="2025-08-13T00:20:04.487681354Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:20:04.487950 containerd[2020]: time="2025-08-13T00:20:04.487799314Z" level=info msg="RemovePodSandbox \"085f35fbab8272e68fa5e7f0eff9feb3c769a482472b8eb5183ded4592b213cc\" returns successfully" Aug 13 00:20:04.488523 containerd[2020]: time="2025-08-13T00:20:04.488402014Z" level=info msg="StopPodSandbox for \"b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a\"" Aug 13 00:20:04.682564 containerd[2020]: 2025-08-13 00:20:04.572 [WARNING][5976] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--36-k8s-csi--node--driver--vh7v8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7cfa030b-4e22-4159-b794-1031c8aae80f", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 19, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-36", ContainerID:"085ce7edac71bf65772c75e5eb7ee8b1f4056346a89c8927f7f74811a4d42319", Pod:"csi-node-driver-vh7v8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4521660e031", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:04.682564 containerd[2020]: 2025-08-13 00:20:04.573 [INFO][5976] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a" Aug 13 00:20:04.682564 containerd[2020]: 2025-08-13 00:20:04.573 [INFO][5976] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a" iface="eth0" netns="" Aug 13 00:20:04.682564 containerd[2020]: 2025-08-13 00:20:04.573 [INFO][5976] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a" Aug 13 00:20:04.682564 containerd[2020]: 2025-08-13 00:20:04.573 [INFO][5976] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a" Aug 13 00:20:04.682564 containerd[2020]: 2025-08-13 00:20:04.643 [INFO][5983] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a" HandleID="k8s-pod-network.b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a" Workload="ip--172--31--31--36-k8s-csi--node--driver--vh7v8-eth0" Aug 13 00:20:04.682564 containerd[2020]: 2025-08-13 00:20:04.644 [INFO][5983] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:04.682564 containerd[2020]: 2025-08-13 00:20:04.644 [INFO][5983] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:20:04.682564 containerd[2020]: 2025-08-13 00:20:04.671 [WARNING][5983] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a" HandleID="k8s-pod-network.b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a" Workload="ip--172--31--31--36-k8s-csi--node--driver--vh7v8-eth0" Aug 13 00:20:04.682564 containerd[2020]: 2025-08-13 00:20:04.671 [INFO][5983] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a" HandleID="k8s-pod-network.b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a" Workload="ip--172--31--31--36-k8s-csi--node--driver--vh7v8-eth0" Aug 13 00:20:04.682564 containerd[2020]: 2025-08-13 00:20:04.675 [INFO][5983] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:20:04.682564 containerd[2020]: 2025-08-13 00:20:04.679 [INFO][5976] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a" Aug 13 00:20:04.683501 containerd[2020]: time="2025-08-13T00:20:04.682603523Z" level=info msg="TearDown network for sandbox \"b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a\" successfully" Aug 13 00:20:04.683501 containerd[2020]: time="2025-08-13T00:20:04.682641599Z" level=info msg="StopPodSandbox for \"b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a\" returns successfully" Aug 13 00:20:04.683501 containerd[2020]: time="2025-08-13T00:20:04.683312183Z" level=info msg="RemovePodSandbox for \"b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a\"" Aug 13 00:20:04.683687 containerd[2020]: time="2025-08-13T00:20:04.683360663Z" level=info msg="Forcibly stopping sandbox \"b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a\"" Aug 13 00:20:04.890055 containerd[2020]: 2025-08-13 00:20:04.786 [WARNING][5998] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--36-k8s-csi--node--driver--vh7v8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7cfa030b-4e22-4159-b794-1031c8aae80f", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 19, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-36", ContainerID:"085ce7edac71bf65772c75e5eb7ee8b1f4056346a89c8927f7f74811a4d42319", Pod:"csi-node-driver-vh7v8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4521660e031", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:04.890055 containerd[2020]: 2025-08-13 00:20:04.787 [INFO][5998] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a" Aug 13 00:20:04.890055 containerd[2020]: 2025-08-13 00:20:04.787 [INFO][5998] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a" iface="eth0" netns="" Aug 13 00:20:04.890055 containerd[2020]: 2025-08-13 00:20:04.787 [INFO][5998] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a" Aug 13 00:20:04.890055 containerd[2020]: 2025-08-13 00:20:04.789 [INFO][5998] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a" Aug 13 00:20:04.890055 containerd[2020]: 2025-08-13 00:20:04.858 [INFO][6005] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a" HandleID="k8s-pod-network.b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a" Workload="ip--172--31--31--36-k8s-csi--node--driver--vh7v8-eth0" Aug 13 00:20:04.890055 containerd[2020]: 2025-08-13 00:20:04.860 [INFO][6005] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:04.890055 containerd[2020]: 2025-08-13 00:20:04.860 [INFO][6005] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:20:04.890055 containerd[2020]: 2025-08-13 00:20:04.878 [WARNING][6005] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a" HandleID="k8s-pod-network.b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a" Workload="ip--172--31--31--36-k8s-csi--node--driver--vh7v8-eth0" Aug 13 00:20:04.890055 containerd[2020]: 2025-08-13 00:20:04.878 [INFO][6005] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a" HandleID="k8s-pod-network.b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a" Workload="ip--172--31--31--36-k8s-csi--node--driver--vh7v8-eth0" Aug 13 00:20:04.890055 containerd[2020]: 2025-08-13 00:20:04.883 [INFO][6005] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:20:04.890055 containerd[2020]: 2025-08-13 00:20:04.886 [INFO][5998] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a" Aug 13 00:20:04.892679 containerd[2020]: time="2025-08-13T00:20:04.890367192Z" level=info msg="TearDown network for sandbox \"b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a\" successfully" Aug 13 00:20:04.899850 containerd[2020]: time="2025-08-13T00:20:04.899738892Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:20:04.900212 containerd[2020]: time="2025-08-13T00:20:04.900162036Z" level=info msg="RemovePodSandbox \"b7e1905ee52e65f0464fadb039373f3cb0be7f994172f5ea07cc80c45099445a\" returns successfully" Aug 13 00:20:04.901440 containerd[2020]: time="2025-08-13T00:20:04.901386180Z" level=info msg="StopPodSandbox for \"cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25\"" Aug 13 00:20:05.158359 containerd[2020]: 2025-08-13 00:20:05.031 [WARNING][6023] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25" WorkloadEndpoint="ip--172--31--31--36-k8s-whisker--dc6d9647--98mdm-eth0" Aug 13 00:20:05.158359 containerd[2020]: 2025-08-13 00:20:05.032 [INFO][6023] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25" Aug 13 00:20:05.158359 containerd[2020]: 2025-08-13 00:20:05.032 [INFO][6023] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25" iface="eth0" netns="" Aug 13 00:20:05.158359 containerd[2020]: 2025-08-13 00:20:05.032 [INFO][6023] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25" Aug 13 00:20:05.158359 containerd[2020]: 2025-08-13 00:20:05.032 [INFO][6023] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25" Aug 13 00:20:05.158359 containerd[2020]: 2025-08-13 00:20:05.118 [INFO][6030] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25" HandleID="k8s-pod-network.cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25" Workload="ip--172--31--31--36-k8s-whisker--dc6d9647--98mdm-eth0" Aug 13 00:20:05.158359 containerd[2020]: 2025-08-13 00:20:05.119 [INFO][6030] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:05.158359 containerd[2020]: 2025-08-13 00:20:05.119 [INFO][6030] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:20:05.158359 containerd[2020]: 2025-08-13 00:20:05.143 [WARNING][6030] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25" HandleID="k8s-pod-network.cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25" Workload="ip--172--31--31--36-k8s-whisker--dc6d9647--98mdm-eth0" Aug 13 00:20:05.158359 containerd[2020]: 2025-08-13 00:20:05.143 [INFO][6030] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25" HandleID="k8s-pod-network.cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25" Workload="ip--172--31--31--36-k8s-whisker--dc6d9647--98mdm-eth0" Aug 13 00:20:05.158359 containerd[2020]: 2025-08-13 00:20:05.147 [INFO][6030] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:20:05.158359 containerd[2020]: 2025-08-13 00:20:05.151 [INFO][6023] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25" Aug 13 00:20:05.159942 containerd[2020]: time="2025-08-13T00:20:05.159641302Z" level=info msg="TearDown network for sandbox \"cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25\" successfully" Aug 13 00:20:05.160173 containerd[2020]: time="2025-08-13T00:20:05.160045042Z" level=info msg="StopPodSandbox for \"cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25\" returns successfully" Aug 13 00:20:05.161985 containerd[2020]: time="2025-08-13T00:20:05.161863294Z" level=info msg="RemovePodSandbox for \"cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25\"" Aug 13 00:20:05.161985 containerd[2020]: time="2025-08-13T00:20:05.161930542Z" level=info msg="Forcibly stopping sandbox \"cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25\"" Aug 13 00:20:05.410632 containerd[2020]: 2025-08-13 00:20:05.269 [WARNING][6044] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25" WorkloadEndpoint="ip--172--31--31--36-k8s-whisker--dc6d9647--98mdm-eth0" Aug 13 00:20:05.410632 containerd[2020]: 2025-08-13 00:20:05.270 [INFO][6044] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25" Aug 13 00:20:05.410632 containerd[2020]: 2025-08-13 00:20:05.270 [INFO][6044] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25" iface="eth0" netns="" Aug 13 00:20:05.410632 containerd[2020]: 2025-08-13 00:20:05.270 [INFO][6044] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25" Aug 13 00:20:05.410632 containerd[2020]: 2025-08-13 00:20:05.270 [INFO][6044] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25" Aug 13 00:20:05.410632 containerd[2020]: 2025-08-13 00:20:05.352 [INFO][6051] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25" HandleID="k8s-pod-network.cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25" Workload="ip--172--31--31--36-k8s-whisker--dc6d9647--98mdm-eth0" Aug 13 00:20:05.410632 containerd[2020]: 2025-08-13 00:20:05.353 [INFO][6051] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:05.410632 containerd[2020]: 2025-08-13 00:20:05.353 [INFO][6051] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:20:05.410632 containerd[2020]: 2025-08-13 00:20:05.385 [WARNING][6051] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25" HandleID="k8s-pod-network.cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25" Workload="ip--172--31--31--36-k8s-whisker--dc6d9647--98mdm-eth0" Aug 13 00:20:05.410632 containerd[2020]: 2025-08-13 00:20:05.386 [INFO][6051] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25" HandleID="k8s-pod-network.cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25" Workload="ip--172--31--31--36-k8s-whisker--dc6d9647--98mdm-eth0" Aug 13 00:20:05.410632 containerd[2020]: 2025-08-13 00:20:05.393 [INFO][6051] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:20:05.410632 containerd[2020]: 2025-08-13 00:20:05.400 [INFO][6044] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25" Aug 13 00:20:05.410632 containerd[2020]: time="2025-08-13T00:20:05.409935767Z" level=info msg="TearDown network for sandbox \"cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25\" successfully" Aug 13 00:20:05.421593 containerd[2020]: time="2025-08-13T00:20:05.421109639Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:20:05.421593 containerd[2020]: time="2025-08-13T00:20:05.421211663Z" level=info msg="RemovePodSandbox \"cf23902098a323db595f3954846845e8a231f128a9541c8fd04be6705e34aa25\" returns successfully" Aug 13 00:20:05.615528 ntpd[1986]: Listen normally on 8 vxlan.calico 192.168.99.64:123 Aug 13 00:20:05.616671 ntpd[1986]: 13 Aug 00:20:05 ntpd[1986]: Listen normally on 8 vxlan.calico 192.168.99.64:123 Aug 13 00:20:05.616671 ntpd[1986]: 13 Aug 00:20:05 ntpd[1986]: Listen normally on 9 calia8c0aa0d297 [fe80::ecee:eeff:feee:eeee%4]:123 Aug 13 00:20:05.616671 ntpd[1986]: 13 Aug 00:20:05 ntpd[1986]: Listen normally on 10 vxlan.calico [fe80::643e:6aff:fed5:977a%5]:123 Aug 13 00:20:05.616671 ntpd[1986]: 13 Aug 00:20:05 ntpd[1986]: Listen normally on 11 cali5deeb3c7584 [fe80::ecee:eeff:feee:eeee%8]:123 Aug 13 00:20:05.616671 ntpd[1986]: 13 Aug 00:20:05 ntpd[1986]: Listen normally on 12 calid2a0434ef12 [fe80::ecee:eeff:feee:eeee%9]:123 Aug 13 00:20:05.616671 ntpd[1986]: 13 Aug 00:20:05 ntpd[1986]: Listen normally on 13 cali8beee0ad0d0 [fe80::ecee:eeff:feee:eeee%10]:123 Aug 13 00:20:05.616671 ntpd[1986]: 13 Aug 00:20:05 ntpd[1986]: Listen normally on 14 cali4521660e031 [fe80::ecee:eeff:feee:eeee%11]:123 Aug 13 00:20:05.616671 ntpd[1986]: 13 Aug 00:20:05 ntpd[1986]: Listen normally on 15 cali2d5175f09c9 [fe80::ecee:eeff:feee:eeee%12]:123 Aug 13 00:20:05.616671 ntpd[1986]: 13 Aug 00:20:05 ntpd[1986]: Listen normally on 16 cali19e00db4553 [fe80::ecee:eeff:feee:eeee%13]:123 Aug 13 00:20:05.615702 ntpd[1986]: Listen normally on 9 calia8c0aa0d297 [fe80::ecee:eeff:feee:eeee%4]:123 Aug 13 00:20:05.618217 ntpd[1986]: 13 Aug 00:20:05 ntpd[1986]: Listen normally on 17 calif77e2de8181 [fe80::ecee:eeff:feee:eeee%14]:123 Aug 13 00:20:05.616110 ntpd[1986]: Listen normally on 10 vxlan.calico [fe80::643e:6aff:fed5:977a%5]:123 Aug 13 00:20:05.616237 ntpd[1986]: Listen normally on 11 cali5deeb3c7584 [fe80::ecee:eeff:feee:eeee%8]:123 Aug 13 00:20:05.616325 ntpd[1986]: Listen normally on 12 calid2a0434ef12 [fe80::ecee:eeff:feee:eeee%9]:123 Aug 
13 00:20:05.616403 ntpd[1986]: Listen normally on 13 cali8beee0ad0d0 [fe80::ecee:eeff:feee:eeee%10]:123 Aug 13 00:20:05.616522 ntpd[1986]: Listen normally on 14 cali4521660e031 [fe80::ecee:eeff:feee:eeee%11]:123 Aug 13 00:20:05.616606 ntpd[1986]: Listen normally on 15 cali2d5175f09c9 [fe80::ecee:eeff:feee:eeee%12]:123 Aug 13 00:20:05.616678 ntpd[1986]: Listen normally on 16 cali19e00db4553 [fe80::ecee:eeff:feee:eeee%13]:123 Aug 13 00:20:05.616753 ntpd[1986]: Listen normally on 17 calif77e2de8181 [fe80::ecee:eeff:feee:eeee%14]:123 Aug 13 00:20:06.340370 containerd[2020]: time="2025-08-13T00:20:06.340285728Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:06.342312 containerd[2020]: time="2025-08-13T00:20:06.342230520Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Aug 13 00:20:06.345083 containerd[2020]: time="2025-08-13T00:20:06.344939532Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:06.350731 containerd[2020]: time="2025-08-13T00:20:06.350629704Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:06.352735 containerd[2020]: time="2025-08-13T00:20:06.352542912Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 3.561431874s" Aug 13 00:20:06.352735 containerd[2020]: time="2025-08-13T00:20:06.352605468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Aug 13 00:20:06.356439 containerd[2020]: time="2025-08-13T00:20:06.355622184Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 00:20:06.357050 containerd[2020]: time="2025-08-13T00:20:06.356819208Z" level=info msg="CreateContainer within sandbox \"b3720483090f6aae8f24555784e616c397f2d402558a49b884736d6da792ed17\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 00:20:06.414382 containerd[2020]: time="2025-08-13T00:20:06.413966016Z" level=info msg="CreateContainer within sandbox \"b3720483090f6aae8f24555784e616c397f2d402558a49b884736d6da792ed17\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5e335b02598dc242fd1ebad907951ba2e82fa65519763b9436cd09ee44594868\"" Aug 13 00:20:06.415761 containerd[2020]: time="2025-08-13T00:20:06.415679904Z" level=info msg="StartContainer for \"5e335b02598dc242fd1ebad907951ba2e82fa65519763b9436cd09ee44594868\"" Aug 13 00:20:06.518803 systemd[1]: Started cri-containerd-5e335b02598dc242fd1ebad907951ba2e82fa65519763b9436cd09ee44594868.scope - libcontainer container 5e335b02598dc242fd1ebad907951ba2e82fa65519763b9436cd09ee44594868. 
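
[Editorial note, not part of the captured log] The PullImage → CreateContainer → StartContainer sequence above is driven by the kubelet through containerd's CRI plugin. A minimal standalone sketch of the same lifecycle against containerd's Go client follows; it is an illustration only — the container ID and snapshot name are invented placeholders, the socket path is the conventional default, and the CRI plugin adds pod-sandbox plumbing that these few calls do not reproduce.

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Connect to containerd; the CRI plugin keeps Kubernetes
	// containers under the "k8s.io" namespace.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Corresponds to the PullImage / "Pulled image ... with image id" lines.
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/apiserver:v3.30.2",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// Corresponds to "CreateContainer within sandbox ... returns container id".
	container, err := client.NewContainer(ctx, "calico-apiserver-demo",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("calico-apiserver-demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// Corresponds to "StartContainer ... returns successfully"; systemd's
	// matching view is the "Started cri-containerd-<id>.scope" unit above.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	log.Printf("started container %s", container.ID())
}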
Aug 13 00:20:06.599893 containerd[2020]: time="2025-08-13T00:20:06.598883893Z" level=info msg="StartContainer for \"5e335b02598dc242fd1ebad907951ba2e82fa65519763b9436cd09ee44594868\" returns successfully" Aug 13 00:20:07.109306 kubelet[3342]: I0813 00:20:07.109128 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-bd6797c4b-4bgrn" podStartSLOduration=34.956581121 podStartE2EDuration="44.109080167s" podCreationTimestamp="2025-08-13 00:19:23 +0000 UTC" firstStartedPulling="2025-08-13 00:19:57.201666254 +0000 UTC m=+56.184179236" lastFinishedPulling="2025-08-13 00:20:06.354165276 +0000 UTC m=+65.336678282" observedRunningTime="2025-08-13 00:20:07.106220735 +0000 UTC m=+66.088733753" watchObservedRunningTime="2025-08-13 00:20:07.109080167 +0000 UTC m=+66.091593161" Aug 13 00:20:07.110275 kubelet[3342]: I0813 00:20:07.109586 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-864598d977-rp5hw" podStartSLOduration=5.940956029 podStartE2EDuration="15.109547987s" podCreationTimestamp="2025-08-13 00:19:52 +0000 UTC" firstStartedPulling="2025-08-13 00:19:53.62182266 +0000 UTC m=+52.604335654" lastFinishedPulling="2025-08-13 00:20:02.790414414 +0000 UTC m=+61.772927612" observedRunningTime="2025-08-13 00:20:04.068989664 +0000 UTC m=+63.051502682" watchObservedRunningTime="2025-08-13 00:20:07.109547987 +0000 UTC m=+66.092060993" Aug 13 00:20:08.083273 kubelet[3342]: I0813 00:20:08.083221 3342 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:20:08.935660 systemd[1]: Started sshd@10-172.31.31.36:22-139.178.89.65:60114.service - OpenSSH per-connection server daemon (139.178.89.65:60114). Aug 13 00:20:09.126388 sshd[6117]: Accepted publickey for core from 139.178.89.65 port 60114 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:20:09.131842 sshd[6117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:20:09.143569 systemd-logind[1993]: New session 11 of user core. Aug 13 00:20:09.151781 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 13 00:20:09.588185 sshd[6117]: pam_unix(sshd:session): session closed for user core Aug 13 00:20:09.602927 systemd[1]: sshd@10-172.31.31.36:22-139.178.89.65:60114.service: Deactivated successfully. Aug 13 00:20:09.617777 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 00:20:09.625501 systemd-logind[1993]: Session 11 logged out. Waiting for processes to exit. Aug 13 00:20:09.633172 systemd-logind[1993]: Removed session 11. 
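
The pod_startup_latency_tracker entries above report two durations per pod. Reading the monotonic m=+ offsets (seconds since kubelet start), podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to equal the end-to-end duration minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A quick arithmetic check against the calico-apiserver-bd6797c4b-4bgrn numbers, assuming that reading; this is an observation about these figures, not a statement of the kubelet's exact definition:

    # Rough check of the pod_startup_latency_tracker line above.
    e2e           = 44.109080167        # podStartE2EDuration, seconds
    first_pulling = 56.184179236        # firstStartedPulling m=+ offset
    last_pulled   = 65.336678282        # lastFinishedPulling m=+ offset
    slo = e2e - (last_pulled - first_pulling)
    print(round(slo, 9))                # 34.956581121, matching the log
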
Aug 13 00:20:11.190556 containerd[2020]: time="2025-08-13T00:20:11.190445080Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:11.192966 containerd[2020]: time="2025-08-13T00:20:11.192387832Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Aug 13 00:20:11.195185 containerd[2020]: time="2025-08-13T00:20:11.195115720Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:11.200976 containerd[2020]: time="2025-08-13T00:20:11.200826424Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:11.203067 containerd[2020]: time="2025-08-13T00:20:11.202747024Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 4.847065032s" Aug 13 00:20:11.203067 containerd[2020]: time="2025-08-13T00:20:11.202824856Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Aug 13 00:20:11.208190 containerd[2020]: time="2025-08-13T00:20:11.207821848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 00:20:11.237518 containerd[2020]: time="2025-08-13T00:20:11.234605788Z" level=info msg="CreateContainer within sandbox \"30bbab45f41add62867ae1f4588d64e8da01048eabc77d51a492890fde348b4d\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 13 00:20:11.275602 containerd[2020]: time="2025-08-13T00:20:11.275514460Z" level=info msg="CreateContainer within sandbox \"30bbab45f41add62867ae1f4588d64e8da01048eabc77d51a492890fde348b4d\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"282f7ca2efbaad91f7d9fc5d17fb5aba71594eb9b4f0f6a19e2536a9db61fbb4\"" Aug 13 00:20:11.276929 containerd[2020]: time="2025-08-13T00:20:11.276845092Z" level=info msg="StartContainer for \"282f7ca2efbaad91f7d9fc5d17fb5aba71594eb9b4f0f6a19e2536a9db61fbb4\"" Aug 13 00:20:11.355908 systemd[1]: Started cri-containerd-282f7ca2efbaad91f7d9fc5d17fb5aba71594eb9b4f0f6a19e2536a9db61fbb4.scope - libcontainer container 282f7ca2efbaad91f7d9fc5d17fb5aba71594eb9b4f0f6a19e2536a9db61fbb4. 
Aug 13 00:20:11.470160 containerd[2020]: time="2025-08-13T00:20:11.469844429Z" level=info msg="StartContainer for \"282f7ca2efbaad91f7d9fc5d17fb5aba71594eb9b4f0f6a19e2536a9db61fbb4\" returns successfully" Aug 13 00:20:12.151971 kubelet[3342]: I0813 00:20:12.151841 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7689d867cd-nh2hh" podStartSLOduration=28.554647806 podStartE2EDuration="40.151808404s" podCreationTimestamp="2025-08-13 00:19:32 +0000 UTC" firstStartedPulling="2025-08-13 00:19:59.608099502 +0000 UTC m=+58.590612496" lastFinishedPulling="2025-08-13 00:20:11.205260076 +0000 UTC m=+70.187773094" observedRunningTime="2025-08-13 00:20:12.148998412 +0000 UTC m=+71.131511418" watchObservedRunningTime="2025-08-13 00:20:12.151808404 +0000 UTC m=+71.134321386" Aug 13 00:20:12.707690 containerd[2020]: time="2025-08-13T00:20:12.707615587Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:12.709969 containerd[2020]: time="2025-08-13T00:20:12.709895251Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Aug 13 00:20:12.712450 containerd[2020]: time="2025-08-13T00:20:12.712348855Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:12.718598 containerd[2020]: time="2025-08-13T00:20:12.718500355Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:12.720263 containerd[2020]: time="2025-08-13T00:20:12.720201679Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.512312403s" Aug 13 00:20:12.721037 containerd[2020]: time="2025-08-13T00:20:12.720498055Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Aug 13 00:20:12.722991 containerd[2020]: time="2025-08-13T00:20:12.722907115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Aug 13 00:20:12.726032 containerd[2020]: time="2025-08-13T00:20:12.725963815Z" level=info msg="CreateContainer within sandbox \"085ce7edac71bf65772c75e5eb7ee8b1f4056346a89c8927f7f74811a4d42319\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 00:20:12.776543 containerd[2020]: time="2025-08-13T00:20:12.776388415Z" level=info msg="CreateContainer within sandbox \"085ce7edac71bf65772c75e5eb7ee8b1f4056346a89c8927f7f74811a4d42319\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"3501f7c983de6bfbe1881e7c99c11dfea990af1f5a93714912c851ca13a3d94f\"" Aug 13 00:20:12.778365 containerd[2020]: time="2025-08-13T00:20:12.777431396Z" level=info msg="StartContainer for \"3501f7c983de6bfbe1881e7c99c11dfea990af1f5a93714912c851ca13a3d94f\"" Aug 13 00:20:12.852957 systemd[1]: Started cri-containerd-3501f7c983de6bfbe1881e7c99c11dfea990af1f5a93714912c851ca13a3d94f.scope - libcontainer container 
3501f7c983de6bfbe1881e7c99c11dfea990af1f5a93714912c851ca13a3d94f. Aug 13 00:20:12.932309 containerd[2020]: time="2025-08-13T00:20:12.932185472Z" level=info msg="StartContainer for \"3501f7c983de6bfbe1881e7c99c11dfea990af1f5a93714912c851ca13a3d94f\" returns successfully" Aug 13 00:20:14.633799 systemd[1]: Started sshd@11-172.31.31.36:22-139.178.89.65:60118.service - OpenSSH per-connection server daemon (139.178.89.65:60118). Aug 13 00:20:14.855215 sshd[6248]: Accepted publickey for core from 139.178.89.65 port 60118 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:20:14.859800 sshd[6248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:20:14.878140 systemd-logind[1993]: New session 12 of user core. Aug 13 00:20:14.885044 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 13 00:20:15.249014 sshd[6248]: pam_unix(sshd:session): session closed for user core Aug 13 00:20:15.258373 systemd[1]: sshd@11-172.31.31.36:22-139.178.89.65:60118.service: Deactivated successfully. Aug 13 00:20:15.265515 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 00:20:15.271710 systemd-logind[1993]: Session 12 logged out. Waiting for processes to exit. Aug 13 00:20:15.299151 systemd[1]: Started sshd@12-172.31.31.36:22-139.178.89.65:60132.service - OpenSSH per-connection server daemon (139.178.89.65:60132). Aug 13 00:20:15.303162 systemd-logind[1993]: Removed session 12. Aug 13 00:20:15.516329 sshd[6262]: Accepted publickey for core from 139.178.89.65 port 60132 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:20:15.527076 sshd[6262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:20:15.543923 systemd-logind[1993]: New session 13 of user core. Aug 13 00:20:15.551221 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 13 00:20:15.928838 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4184585211.mount: Deactivated successfully. Aug 13 00:20:16.063758 sshd[6262]: pam_unix(sshd:session): session closed for user core Aug 13 00:20:16.078493 systemd[1]: sshd@12-172.31.31.36:22-139.178.89.65:60132.service: Deactivated successfully. Aug 13 00:20:16.087559 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 00:20:16.100802 systemd-logind[1993]: Session 13 logged out. Waiting for processes to exit. Aug 13 00:20:16.141929 systemd[1]: Started sshd@13-172.31.31.36:22-139.178.89.65:60142.service - OpenSSH per-connection server daemon (139.178.89.65:60142). Aug 13 00:20:16.145965 systemd-logind[1993]: Removed session 13. Aug 13 00:20:16.384967 sshd[6284]: Accepted publickey for core from 139.178.89.65 port 60142 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:20:16.389892 sshd[6284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:20:16.410269 systemd-logind[1993]: New session 14 of user core. Aug 13 00:20:16.416904 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 13 00:20:16.792428 sshd[6284]: pam_unix(sshd:session): session closed for user core Aug 13 00:20:16.810368 systemd[1]: sshd@13-172.31.31.36:22-139.178.89.65:60142.service: Deactivated successfully. Aug 13 00:20:16.818272 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 00:20:16.823783 systemd-logind[1993]: Session 14 logged out. Waiting for processes to exit. Aug 13 00:20:16.829427 systemd-logind[1993]: Removed session 14. 
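
Each SSH connection above runs as its own socket-activated unit ("OpenSSH per-connection server daemon"), and the instance name encodes a connection counter plus the local and remote endpoints, e.g. sshd@13-172.31.31.36:22-139.178.89.65:60142.service. A small illustrative parser for that naming, assuming the format stays counter-local:port-remote:port:

    # Hypothetical helper: split a per-connection sshd unit name from the
    # log above into its counter, local endpoint, and remote endpoint.
    import re

    unit = "sshd@13-172.31.31.36:22-139.178.89.65:60142.service"
    m = re.fullmatch(r"sshd@(\d+)-([\d.]+):(\d+)-([\d.]+):(\d+)\.service", unit)
    if m:
        seq, lip, lport, rip, rport = m.groups()
        print(f"connection #{seq}: {rip}:{rport} -> {lip}:{lport}")
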
Aug 13 00:20:17.283220 containerd[2020]: time="2025-08-13T00:20:17.283128574Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:17.285608 containerd[2020]: time="2025-08-13T00:20:17.285513862Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Aug 13 00:20:17.288408 containerd[2020]: time="2025-08-13T00:20:17.288300718Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:17.294531 containerd[2020]: time="2025-08-13T00:20:17.294371950Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:17.296712 containerd[2020]: time="2025-08-13T00:20:17.296411962Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 4.573424147s" Aug 13 00:20:17.296712 containerd[2020]: time="2025-08-13T00:20:17.296528686Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Aug 13 00:20:17.308663 containerd[2020]: time="2025-08-13T00:20:17.308565718Z" level=info msg="CreateContainer within sandbox \"f44df99cab72c4351a0fb7e02f5e6d4238cf4fe379c81393ff8ee963a20a92c3\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Aug 13 00:20:17.316080 containerd[2020]: time="2025-08-13T00:20:17.316014502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 00:20:17.351410 containerd[2020]: time="2025-08-13T00:20:17.351229630Z" level=info msg="CreateContainer within sandbox \"f44df99cab72c4351a0fb7e02f5e6d4238cf4fe379c81393ff8ee963a20a92c3\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"a2385f8f72d8396cd7543eef11022664b45768234a32c82781069272bdab287d\"" Aug 13 00:20:17.354566 containerd[2020]: time="2025-08-13T00:20:17.353991046Z" level=info msg="StartContainer for \"a2385f8f72d8396cd7543eef11022664b45768234a32c82781069272bdab287d\"" Aug 13 00:20:17.463880 systemd[1]: Started cri-containerd-a2385f8f72d8396cd7543eef11022664b45768234a32c82781069272bdab287d.scope - libcontainer container a2385f8f72d8396cd7543eef11022664b45768234a32c82781069272bdab287d. 
Aug 13 00:20:17.543266 containerd[2020]: time="2025-08-13T00:20:17.542937947Z" level=info msg="StartContainer for \"a2385f8f72d8396cd7543eef11022664b45768234a32c82781069272bdab287d\" returns successfully" Aug 13 00:20:17.650075 containerd[2020]: time="2025-08-13T00:20:17.649996284Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:17.656475 containerd[2020]: time="2025-08-13T00:20:17.656361336Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Aug 13 00:20:17.664388 containerd[2020]: time="2025-08-13T00:20:17.664303128Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 348.00089ms" Aug 13 00:20:17.664852 containerd[2020]: time="2025-08-13T00:20:17.664569624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Aug 13 00:20:17.669287 containerd[2020]: time="2025-08-13T00:20:17.669178488Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 00:20:17.673390 containerd[2020]: time="2025-08-13T00:20:17.673283592Z" level=info msg="CreateContainer within sandbox \"321aece02ec29d9620191d549ed851d5fc1e299a25433cd3acb1a69bf893ad8f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 00:20:17.720183 containerd[2020]: time="2025-08-13T00:20:17.720068424Z" level=info msg="CreateContainer within sandbox \"321aece02ec29d9620191d549ed851d5fc1e299a25433cd3acb1a69bf893ad8f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7249b7f7b8e2efc578f20b62a3a739b17ec951b9bb5d95b732b7ad6fe722d56b\"" Aug 13 00:20:17.724501 containerd[2020]: time="2025-08-13T00:20:17.724352700Z" level=info msg="StartContainer for \"7249b7f7b8e2efc578f20b62a3a739b17ec951b9bb5d95b732b7ad6fe722d56b\"" Aug 13 00:20:17.790812 systemd[1]: Started cri-containerd-7249b7f7b8e2efc578f20b62a3a739b17ec951b9bb5d95b732b7ad6fe722d56b.scope - libcontainer container 7249b7f7b8e2efc578f20b62a3a739b17ec951b9bb5d95b732b7ad6fe722d56b. 
Aug 13 00:20:17.870599 containerd[2020]: time="2025-08-13T00:20:17.870277765Z" level=info msg="StartContainer for \"7249b7f7b8e2efc578f20b62a3a739b17ec951b9bb5d95b732b7ad6fe722d56b\" returns successfully" Aug 13 00:20:18.195880 kubelet[3342]: I0813 00:20:18.194426 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-nvbpr" podStartSLOduration=30.938638173 podStartE2EDuration="47.194266942s" podCreationTimestamp="2025-08-13 00:19:31 +0000 UTC" firstStartedPulling="2025-08-13 00:20:01.043719233 +0000 UTC m=+60.026232227" lastFinishedPulling="2025-08-13 00:20:17.299347906 +0000 UTC m=+76.281860996" observedRunningTime="2025-08-13 00:20:18.190179934 +0000 UTC m=+77.172692952" watchObservedRunningTime="2025-08-13 00:20:18.194266942 +0000 UTC m=+77.176779936" Aug 13 00:20:19.183904 kubelet[3342]: I0813 00:20:19.183335 3342 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:20:19.736296 containerd[2020]: time="2025-08-13T00:20:19.736207706Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:19.741604 containerd[2020]: time="2025-08-13T00:20:19.741518438Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Aug 13 00:20:19.745505 containerd[2020]: time="2025-08-13T00:20:19.745371206Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:19.763174 containerd[2020]: time="2025-08-13T00:20:19.762213050Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:19.767427 containerd[2020]: time="2025-08-13T00:20:19.767353058Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 2.098094818s" Aug 13 00:20:19.767427 containerd[2020]: time="2025-08-13T00:20:19.767427878Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Aug 13 00:20:19.781889 containerd[2020]: time="2025-08-13T00:20:19.781753586Z" level=info msg="CreateContainer within sandbox \"085ce7edac71bf65772c75e5eb7ee8b1f4056346a89c8927f7f74811a4d42319\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 13 00:20:19.832177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4177726032.mount: Deactivated successfully. 
Aug 13 00:20:19.842757 containerd[2020]: time="2025-08-13T00:20:19.842626479Z" level=info msg="CreateContainer within sandbox \"085ce7edac71bf65772c75e5eb7ee8b1f4056346a89c8927f7f74811a4d42319\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"33975b78ad969c7f1be511cb04dde59a1d86d389f9257dc4a97f9855e35de85f\"" Aug 13 00:20:19.846881 containerd[2020]: time="2025-08-13T00:20:19.845828487Z" level=info msg="StartContainer for \"33975b78ad969c7f1be511cb04dde59a1d86d389f9257dc4a97f9855e35de85f\"" Aug 13 00:20:19.940030 systemd[1]: Started cri-containerd-33975b78ad969c7f1be511cb04dde59a1d86d389f9257dc4a97f9855e35de85f.scope - libcontainer container 33975b78ad969c7f1be511cb04dde59a1d86d389f9257dc4a97f9855e35de85f. Aug 13 00:20:20.026623 containerd[2020]: time="2025-08-13T00:20:20.026234700Z" level=info msg="StartContainer for \"33975b78ad969c7f1be511cb04dde59a1d86d389f9257dc4a97f9855e35de85f\" returns successfully" Aug 13 00:20:20.192963 kubelet[3342]: I0813 00:20:20.192152 3342 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:20:20.231770 kubelet[3342]: I0813 00:20:20.230893 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-bd6797c4b-r7mhx" podStartSLOduration=41.641565119 podStartE2EDuration="57.230862877s" podCreationTimestamp="2025-08-13 00:19:23 +0000 UTC" firstStartedPulling="2025-08-13 00:20:02.076748694 +0000 UTC m=+61.059261688" lastFinishedPulling="2025-08-13 00:20:17.66604638 +0000 UTC m=+76.648559446" observedRunningTime="2025-08-13 00:20:18.246357815 +0000 UTC m=+77.228870869" watchObservedRunningTime="2025-08-13 00:20:20.230862877 +0000 UTC m=+79.213375907" Aug 13 00:20:20.516868 kubelet[3342]: I0813 00:20:20.516830 3342 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 13 00:20:20.517405 kubelet[3342]: I0813 00:20:20.517109 3342 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 13 00:20:20.525793 kubelet[3342]: I0813 00:20:20.525692 3342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-vh7v8" podStartSLOduration=28.563574619 podStartE2EDuration="48.52566431s" podCreationTimestamp="2025-08-13 00:19:32 +0000 UTC" firstStartedPulling="2025-08-13 00:19:59.811235035 +0000 UTC m=+58.793748029" lastFinishedPulling="2025-08-13 00:20:19.773324738 +0000 UTC m=+78.755837720" observedRunningTime="2025-08-13 00:20:20.233067073 +0000 UTC m=+79.215580079" watchObservedRunningTime="2025-08-13 00:20:20.52566431 +0000 UTC m=+79.508177304" Aug 13 00:20:21.835033 systemd[1]: Started sshd@14-172.31.31.36:22-139.178.89.65:55268.service - OpenSSH per-connection server daemon (139.178.89.65:55268). Aug 13 00:20:22.031346 sshd[6507]: Accepted publickey for core from 139.178.89.65 port 55268 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:20:22.034643 sshd[6507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:20:22.045999 systemd-logind[1993]: New session 15 of user core. Aug 13 00:20:22.050825 systemd[1]: Started session-15.scope - Session 15 of User core. 
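
The csi_plugin.go lines above show the kubelet first validating and then registering the csi.tigera.io driver over its UNIX socket at /var/lib/kubelet/plugins/csi.tigera.io/csi.sock. A minimal sketch that lists candidate driver sockets under that tree; the path layout here is taken from the log, and whether a given deployment also uses a separate registry directory for discovery is an assumption not shown in these lines:

    # Illustrative scan for CSI driver sockets like the csi.tigera.io
    # endpoint the kubelet validated above.
    from pathlib import Path

    for sock in Path("/var/lib/kubelet/plugins").rglob("*.sock"):
        print("candidate CSI endpoint:", sock)
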
Aug 13 00:20:22.323822 sshd[6507]: pam_unix(sshd:session): session closed for user core Aug 13 00:20:22.336742 systemd[1]: sshd@14-172.31.31.36:22-139.178.89.65:55268.service: Deactivated successfully. Aug 13 00:20:22.345190 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 00:20:22.351628 systemd-logind[1993]: Session 15 logged out. Waiting for processes to exit. Aug 13 00:20:22.357086 systemd-logind[1993]: Removed session 15. Aug 13 00:20:27.371064 systemd[1]: Started sshd@15-172.31.31.36:22-139.178.89.65:55274.service - OpenSSH per-connection server daemon (139.178.89.65:55274). Aug 13 00:20:27.547491 sshd[6550]: Accepted publickey for core from 139.178.89.65 port 55274 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:20:27.550751 sshd[6550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:20:27.559041 systemd-logind[1993]: New session 16 of user core. Aug 13 00:20:27.565765 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 13 00:20:27.829222 sshd[6550]: pam_unix(sshd:session): session closed for user core Aug 13 00:20:27.837762 systemd[1]: sshd@15-172.31.31.36:22-139.178.89.65:55274.service: Deactivated successfully. Aug 13 00:20:27.843938 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 00:20:27.845939 systemd-logind[1993]: Session 16 logged out. Waiting for processes to exit. Aug 13 00:20:27.848587 systemd-logind[1993]: Removed session 16. Aug 13 00:20:32.875675 systemd[1]: Started sshd@16-172.31.31.36:22-139.178.89.65:44438.service - OpenSSH per-connection server daemon (139.178.89.65:44438). Aug 13 00:20:33.075065 sshd[6565]: Accepted publickey for core from 139.178.89.65 port 44438 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:20:33.079102 sshd[6565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:20:33.095593 systemd-logind[1993]: New session 17 of user core. Aug 13 00:20:33.106821 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 13 00:20:33.465038 sshd[6565]: pam_unix(sshd:session): session closed for user core Aug 13 00:20:33.474582 systemd[1]: sshd@16-172.31.31.36:22-139.178.89.65:44438.service: Deactivated successfully. Aug 13 00:20:33.479791 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 00:20:33.485970 systemd-logind[1993]: Session 17 logged out. Waiting for processes to exit. Aug 13 00:20:33.491897 systemd-logind[1993]: Removed session 17. Aug 13 00:20:35.822238 kubelet[3342]: I0813 00:20:35.821305 3342 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:20:38.510198 systemd[1]: Started sshd@17-172.31.31.36:22-139.178.89.65:44446.service - OpenSSH per-connection server daemon (139.178.89.65:44446). Aug 13 00:20:38.724937 sshd[6588]: Accepted publickey for core from 139.178.89.65 port 44446 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:20:38.729348 sshd[6588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:20:38.748441 systemd-logind[1993]: New session 18 of user core. Aug 13 00:20:38.753968 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 13 00:20:39.074764 sshd[6588]: pam_unix(sshd:session): session closed for user core Aug 13 00:20:39.082338 systemd-logind[1993]: Session 18 logged out. Waiting for processes to exit. Aug 13 00:20:39.084107 systemd[1]: sshd@17-172.31.31.36:22-139.178.89.65:44446.service: Deactivated successfully. 
Aug 13 00:20:39.091119 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 00:20:39.094354 systemd-logind[1993]: Removed session 18. Aug 13 00:20:39.124108 systemd[1]: Started sshd@18-172.31.31.36:22-139.178.89.65:60916.service - OpenSSH per-connection server daemon (139.178.89.65:60916). Aug 13 00:20:39.321372 sshd[6603]: Accepted publickey for core from 139.178.89.65 port 60916 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:20:39.324706 sshd[6603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:20:39.336090 systemd-logind[1993]: New session 19 of user core. Aug 13 00:20:39.341816 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 13 00:20:39.980925 sshd[6603]: pam_unix(sshd:session): session closed for user core Aug 13 00:20:39.989007 systemd[1]: sshd@18-172.31.31.36:22-139.178.89.65:60916.service: Deactivated successfully. Aug 13 00:20:39.994325 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 00:20:39.996847 systemd-logind[1993]: Session 19 logged out. Waiting for processes to exit. Aug 13 00:20:40.000212 systemd-logind[1993]: Removed session 19. Aug 13 00:20:40.020159 systemd[1]: Started sshd@19-172.31.31.36:22-139.178.89.65:60924.service - OpenSSH per-connection server daemon (139.178.89.65:60924). Aug 13 00:20:40.214172 sshd[6614]: Accepted publickey for core from 139.178.89.65 port 60924 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:20:40.217425 sshd[6614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:20:40.228000 systemd-logind[1993]: New session 20 of user core. Aug 13 00:20:40.232835 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 13 00:20:41.312812 sshd[6614]: pam_unix(sshd:session): session closed for user core Aug 13 00:20:41.327907 systemd-logind[1993]: Session 20 logged out. Waiting for processes to exit. Aug 13 00:20:41.329394 systemd[1]: sshd@19-172.31.31.36:22-139.178.89.65:60924.service: Deactivated successfully. Aug 13 00:20:41.337135 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 00:20:41.369539 systemd[1]: Started sshd@20-172.31.31.36:22-139.178.89.65:60932.service - OpenSSH per-connection server daemon (139.178.89.65:60932). Aug 13 00:20:41.374320 systemd-logind[1993]: Removed session 20. Aug 13 00:20:41.557626 sshd[6634]: Accepted publickey for core from 139.178.89.65 port 60932 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:20:41.561045 sshd[6634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:20:41.575008 systemd-logind[1993]: New session 21 of user core. Aug 13 00:20:41.581813 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 13 00:20:42.098213 sshd[6634]: pam_unix(sshd:session): session closed for user core Aug 13 00:20:42.107503 systemd[1]: sshd@20-172.31.31.36:22-139.178.89.65:60932.service: Deactivated successfully. Aug 13 00:20:42.115021 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 00:20:42.117867 systemd-logind[1993]: Session 21 logged out. Waiting for processes to exit. Aug 13 00:20:42.149126 systemd[1]: Started sshd@21-172.31.31.36:22-139.178.89.65:60942.service - OpenSSH per-connection server daemon (139.178.89.65:60942). Aug 13 00:20:42.171619 systemd-logind[1993]: Removed session 21. 
Aug 13 00:20:42.337877 sshd[6656]: Accepted publickey for core from 139.178.89.65 port 60942 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:20:42.340652 sshd[6656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:20:42.349762 systemd-logind[1993]: New session 22 of user core. Aug 13 00:20:42.355806 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 13 00:20:42.638593 sshd[6656]: pam_unix(sshd:session): session closed for user core Aug 13 00:20:42.646089 systemd[1]: sshd@21-172.31.31.36:22-139.178.89.65:60942.service: Deactivated successfully. Aug 13 00:20:42.651322 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 00:20:42.654302 systemd-logind[1993]: Session 22 logged out. Waiting for processes to exit. Aug 13 00:20:42.656564 systemd-logind[1993]: Removed session 22. Aug 13 00:20:47.680029 systemd[1]: Started sshd@22-172.31.31.36:22-139.178.89.65:60954.service - OpenSSH per-connection server daemon (139.178.89.65:60954). Aug 13 00:20:47.861688 sshd[6698]: Accepted publickey for core from 139.178.89.65 port 60954 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:20:47.865180 sshd[6698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:20:47.875179 systemd-logind[1993]: New session 23 of user core. Aug 13 00:20:47.880769 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 13 00:20:48.124072 sshd[6698]: pam_unix(sshd:session): session closed for user core Aug 13 00:20:48.131157 systemd[1]: sshd@22-172.31.31.36:22-139.178.89.65:60954.service: Deactivated successfully. Aug 13 00:20:48.135802 systemd[1]: session-23.scope: Deactivated successfully. Aug 13 00:20:48.138246 systemd-logind[1993]: Session 23 logged out. Waiting for processes to exit. Aug 13 00:20:48.142087 systemd-logind[1993]: Removed session 23. Aug 13 00:20:53.166273 systemd[1]: Started sshd@23-172.31.31.36:22-139.178.89.65:58550.service - OpenSSH per-connection server daemon (139.178.89.65:58550). Aug 13 00:20:53.357424 sshd[6737]: Accepted publickey for core from 139.178.89.65 port 58550 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:20:53.361331 sshd[6737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:20:53.372299 systemd-logind[1993]: New session 24 of user core. Aug 13 00:20:53.381174 systemd[1]: Started session-24.scope - Session 24 of User core. Aug 13 00:20:53.661175 sshd[6737]: pam_unix(sshd:session): session closed for user core Aug 13 00:20:53.669531 systemd-logind[1993]: Session 24 logged out. Waiting for processes to exit. Aug 13 00:20:53.670797 systemd[1]: sshd@23-172.31.31.36:22-139.178.89.65:58550.service: Deactivated successfully. Aug 13 00:20:53.674946 systemd[1]: session-24.scope: Deactivated successfully. Aug 13 00:20:53.681280 systemd-logind[1993]: Removed session 24. Aug 13 00:20:58.701054 systemd[1]: Started sshd@24-172.31.31.36:22-139.178.89.65:58558.service - OpenSSH per-connection server daemon (139.178.89.65:58558). Aug 13 00:20:58.880111 sshd[6773]: Accepted publickey for core from 139.178.89.65 port 58558 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:20:58.885108 sshd[6773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:20:58.896943 systemd-logind[1993]: New session 25 of user core. Aug 13 00:20:58.903787 systemd[1]: Started session-25.scope - Session 25 of User core. 
Aug 13 00:20:59.157821 sshd[6773]: pam_unix(sshd:session): session closed for user core Aug 13 00:20:59.165363 systemd[1]: sshd@24-172.31.31.36:22-139.178.89.65:58558.service: Deactivated successfully. Aug 13 00:20:59.170604 systemd[1]: session-25.scope: Deactivated successfully. Aug 13 00:20:59.173244 systemd-logind[1993]: Session 25 logged out. Waiting for processes to exit. Aug 13 00:20:59.176224 systemd-logind[1993]: Removed session 25. Aug 13 00:21:04.199286 systemd[1]: Started sshd@25-172.31.31.36:22-139.178.89.65:33228.service - OpenSSH per-connection server daemon (139.178.89.65:33228). Aug 13 00:21:04.400541 sshd[6788]: Accepted publickey for core from 139.178.89.65 port 33228 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:21:04.405753 sshd[6788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:21:04.420945 systemd-logind[1993]: New session 26 of user core. Aug 13 00:21:04.428962 systemd[1]: Started session-26.scope - Session 26 of User core. Aug 13 00:21:04.786764 sshd[6788]: pam_unix(sshd:session): session closed for user core Aug 13 00:21:04.799437 systemd[1]: sshd@25-172.31.31.36:22-139.178.89.65:33228.service: Deactivated successfully. Aug 13 00:21:04.811754 systemd[1]: session-26.scope: Deactivated successfully. Aug 13 00:21:04.814704 systemd-logind[1993]: Session 26 logged out. Waiting for processes to exit. Aug 13 00:21:04.818585 systemd-logind[1993]: Removed session 26. Aug 13 00:21:05.430570 containerd[2020]: time="2025-08-13T00:21:05.430435521Z" level=info msg="StopPodSandbox for \"4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe\"" Aug 13 00:21:05.593519 containerd[2020]: 2025-08-13 00:21:05.507 [WARNING][6808] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--36-k8s-calico--apiserver--bd6797c4b--r7mhx-eth0", GenerateName:"calico-apiserver-bd6797c4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"1d377c06-c55d-4e39-863b-173de05fa641", ResourceVersion:"1210", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 19, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bd6797c4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-36", ContainerID:"321aece02ec29d9620191d549ed851d5fc1e299a25433cd3acb1a69bf893ad8f", Pod:"calico-apiserver-bd6797c4b-r7mhx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif77e2de8181", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:21:05.593519 containerd[2020]: 2025-08-13 00:21:05.508 [INFO][6808] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe" Aug 13 00:21:05.593519 containerd[2020]: 2025-08-13 00:21:05.508 [INFO][6808] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe" iface="eth0" netns="" Aug 13 00:21:05.593519 containerd[2020]: 2025-08-13 00:21:05.508 [INFO][6808] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe" Aug 13 00:21:05.593519 containerd[2020]: 2025-08-13 00:21:05.508 [INFO][6808] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe" Aug 13 00:21:05.593519 containerd[2020]: 2025-08-13 00:21:05.564 [INFO][6815] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe" HandleID="k8s-pod-network.4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe" Workload="ip--172--31--31--36-k8s-calico--apiserver--bd6797c4b--r7mhx-eth0" Aug 13 00:21:05.593519 containerd[2020]: 2025-08-13 00:21:05.564 [INFO][6815] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:21:05.593519 containerd[2020]: 2025-08-13 00:21:05.565 [INFO][6815] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:21:05.593519 containerd[2020]: 2025-08-13 00:21:05.578 [WARNING][6815] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe" HandleID="k8s-pod-network.4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe" Workload="ip--172--31--31--36-k8s-calico--apiserver--bd6797c4b--r7mhx-eth0" Aug 13 00:21:05.593519 containerd[2020]: 2025-08-13 00:21:05.579 [INFO][6815] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe" HandleID="k8s-pod-network.4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe" Workload="ip--172--31--31--36-k8s-calico--apiserver--bd6797c4b--r7mhx-eth0" Aug 13 00:21:05.593519 containerd[2020]: 2025-08-13 00:21:05.582 [INFO][6815] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:21:05.593519 containerd[2020]: 2025-08-13 00:21:05.585 [INFO][6808] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe" Aug 13 00:21:05.593519 containerd[2020]: time="2025-08-13T00:21:05.592260358Z" level=info msg="TearDown network for sandbox \"4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe\" successfully" Aug 13 00:21:05.593519 containerd[2020]: time="2025-08-13T00:21:05.592299718Z" level=info msg="StopPodSandbox for \"4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe\" returns successfully" Aug 13 00:21:05.596622 containerd[2020]: time="2025-08-13T00:21:05.595292026Z" level=info msg="RemovePodSandbox for \"4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe\"" Aug 13 00:21:05.596622 containerd[2020]: time="2025-08-13T00:21:05.595388230Z" level=info msg="Forcibly stopping sandbox \"4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe\"" Aug 13 00:21:05.775408 containerd[2020]: 2025-08-13 00:21:05.701 [WARNING][6829] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--36-k8s-calico--apiserver--bd6797c4b--r7mhx-eth0", GenerateName:"calico-apiserver-bd6797c4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"1d377c06-c55d-4e39-863b-173de05fa641", ResourceVersion:"1210", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 19, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bd6797c4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-36", ContainerID:"321aece02ec29d9620191d549ed851d5fc1e299a25433cd3acb1a69bf893ad8f", Pod:"calico-apiserver-bd6797c4b-r7mhx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif77e2de8181", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:21:05.775408 containerd[2020]: 2025-08-13 00:21:05.701 [INFO][6829] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe" Aug 13 00:21:05.775408 containerd[2020]: 2025-08-13 00:21:05.702 [INFO][6829] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe" iface="eth0" netns="" Aug 13 00:21:05.775408 containerd[2020]: 2025-08-13 00:21:05.702 [INFO][6829] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe" Aug 13 00:21:05.775408 containerd[2020]: 2025-08-13 00:21:05.702 [INFO][6829] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe" Aug 13 00:21:05.775408 containerd[2020]: 2025-08-13 00:21:05.748 [INFO][6836] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe" HandleID="k8s-pod-network.4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe" Workload="ip--172--31--31--36-k8s-calico--apiserver--bd6797c4b--r7mhx-eth0" Aug 13 00:21:05.775408 containerd[2020]: 2025-08-13 00:21:05.749 [INFO][6836] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:21:05.775408 containerd[2020]: 2025-08-13 00:21:05.749 [INFO][6836] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:21:05.775408 containerd[2020]: 2025-08-13 00:21:05.763 [WARNING][6836] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe" HandleID="k8s-pod-network.4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe" Workload="ip--172--31--31--36-k8s-calico--apiserver--bd6797c4b--r7mhx-eth0" Aug 13 00:21:05.775408 containerd[2020]: 2025-08-13 00:21:05.763 [INFO][6836] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe" HandleID="k8s-pod-network.4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe" Workload="ip--172--31--31--36-k8s-calico--apiserver--bd6797c4b--r7mhx-eth0" Aug 13 00:21:05.775408 containerd[2020]: 2025-08-13 00:21:05.768 [INFO][6836] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:21:05.775408 containerd[2020]: 2025-08-13 00:21:05.770 [INFO][6829] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe" Aug 13 00:21:05.775408 containerd[2020]: time="2025-08-13T00:21:05.775153211Z" level=info msg="TearDown network for sandbox \"4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe\" successfully" Aug 13 00:21:05.787028 containerd[2020]: time="2025-08-13T00:21:05.786355175Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:21:05.787028 containerd[2020]: time="2025-08-13T00:21:05.786591635Z" level=info msg="RemovePodSandbox \"4f265ec92f9338d07b5d57433d32cd94709a69db9a4437260ab6e94cf6f139fe\" returns successfully" Aug 13 00:21:09.835008 systemd[1]: Started sshd@26-172.31.31.36:22-139.178.89.65:40826.service - OpenSSH per-connection server daemon (139.178.89.65:40826). Aug 13 00:21:10.053516 sshd[6845]: Accepted publickey for core from 139.178.89.65 port 40826 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:21:10.058303 sshd[6845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:21:10.070585 systemd-logind[1993]: New session 27 of user core. Aug 13 00:21:10.079835 systemd[1]: Started session-27.scope - Session 27 of User core. Aug 13 00:21:10.432816 sshd[6845]: pam_unix(sshd:session): session closed for user core Aug 13 00:21:10.442515 systemd[1]: sshd@26-172.31.31.36:22-139.178.89.65:40826.service: Deactivated successfully. Aug 13 00:21:10.447573 systemd[1]: session-27.scope: Deactivated successfully. Aug 13 00:21:10.454580 systemd-logind[1993]: Session 27 logged out. Waiting for processes to exit. Aug 13 00:21:10.458567 systemd-logind[1993]: Removed session 27. Aug 13 00:21:15.478222 systemd[1]: Started sshd@27-172.31.31.36:22-139.178.89.65:40832.service - OpenSSH per-connection server daemon (139.178.89.65:40832). Aug 13 00:21:15.669633 sshd[6877]: Accepted publickey for core from 139.178.89.65 port 40832 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:21:15.674828 sshd[6877]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:21:15.691831 systemd-logind[1993]: New session 28 of user core. Aug 13 00:21:15.700817 systemd[1]: Started session-28.scope - Session 28 of User core. Aug 13 00:21:16.008028 sshd[6877]: pam_unix(sshd:session): session closed for user core Aug 13 00:21:16.017055 systemd-logind[1993]: Session 28 logged out. Waiting for processes to exit. 
Aug 13 00:21:16.021525 systemd[1]: sshd@27-172.31.31.36:22-139.178.89.65:40832.service: Deactivated successfully. Aug 13 00:21:16.028197 systemd[1]: session-28.scope: Deactivated successfully. Aug 13 00:21:16.032969 systemd-logind[1993]: Removed session 28. Aug 13 00:21:30.715410 systemd[1]: cri-containerd-f3e9916a5110d1d348ff295f47a2afac5933998c7aa9410e214aa387349d3cf0.scope: Deactivated successfully. Aug 13 00:21:30.718270 systemd[1]: cri-containerd-f3e9916a5110d1d348ff295f47a2afac5933998c7aa9410e214aa387349d3cf0.scope: Consumed 22.485s CPU time. Aug 13 00:21:30.767316 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f3e9916a5110d1d348ff295f47a2afac5933998c7aa9410e214aa387349d3cf0-rootfs.mount: Deactivated successfully. Aug 13 00:21:30.801044 containerd[2020]: time="2025-08-13T00:21:30.765629687Z" level=info msg="shim disconnected" id=f3e9916a5110d1d348ff295f47a2afac5933998c7aa9410e214aa387349d3cf0 namespace=k8s.io Aug 13 00:21:30.801044 containerd[2020]: time="2025-08-13T00:21:30.801001655Z" level=warning msg="cleaning up after shim disconnected" id=f3e9916a5110d1d348ff295f47a2afac5933998c7aa9410e214aa387349d3cf0 namespace=k8s.io Aug 13 00:21:30.801044 containerd[2020]: time="2025-08-13T00:21:30.801035135Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:21:30.823970 containerd[2020]: time="2025-08-13T00:21:30.823792271Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:21:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Aug 13 00:21:30.883437 systemd[1]: cri-containerd-1b90c1c9247a120895fed52a05a0b9e128c5ae386e0a60c93fd3a94d1f1d2356.scope: Deactivated successfully. Aug 13 00:21:30.884229 systemd[1]: cri-containerd-1b90c1c9247a120895fed52a05a0b9e128c5ae386e0a60c93fd3a94d1f1d2356.scope: Consumed 6.709s CPU time, 20.1M memory peak, 0B memory swap peak. Aug 13 00:21:30.940321 containerd[2020]: time="2025-08-13T00:21:30.939801648Z" level=info msg="shim disconnected" id=1b90c1c9247a120895fed52a05a0b9e128c5ae386e0a60c93fd3a94d1f1d2356 namespace=k8s.io Aug 13 00:21:30.940321 containerd[2020]: time="2025-08-13T00:21:30.939906300Z" level=warning msg="cleaning up after shim disconnected" id=1b90c1c9247a120895fed52a05a0b9e128c5ae386e0a60c93fd3a94d1f1d2356 namespace=k8s.io Aug 13 00:21:30.940321 containerd[2020]: time="2025-08-13T00:21:30.939929556Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:21:30.944768 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b90c1c9247a120895fed52a05a0b9e128c5ae386e0a60c93fd3a94d1f1d2356-rootfs.mount: Deactivated successfully. 
Aug 13 00:21:31.452359 kubelet[3342]: I0813 00:21:31.451832 3342 scope.go:117] "RemoveContainer" containerID="1b90c1c9247a120895fed52a05a0b9e128c5ae386e0a60c93fd3a94d1f1d2356" Aug 13 00:21:31.458003 containerd[2020]: time="2025-08-13T00:21:31.457659838Z" level=info msg="CreateContainer within sandbox \"a40ff0fb44bd3e4e1db7648838c5d364068cec127a3e57956dbee3558b5b84f0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Aug 13 00:21:31.458640 kubelet[3342]: I0813 00:21:31.458599 3342 scope.go:117] "RemoveContainer" containerID="f3e9916a5110d1d348ff295f47a2afac5933998c7aa9410e214aa387349d3cf0" Aug 13 00:21:31.465769 containerd[2020]: time="2025-08-13T00:21:31.465702610Z" level=info msg="CreateContainer within sandbox \"31fac44eaf62ab5d64327501d0e38373a3777805199ca4a12c0368f3ad068400\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Aug 13 00:21:31.519441 containerd[2020]: time="2025-08-13T00:21:31.518610347Z" level=info msg="CreateContainer within sandbox \"a40ff0fb44bd3e4e1db7648838c5d364068cec127a3e57956dbee3558b5b84f0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"d1d3efc21f26a51f25963f1139a3aaceb9d9b919fc8da10810317622972429e7\"" Aug 13 00:21:31.520320 containerd[2020]: time="2025-08-13T00:21:31.520242767Z" level=info msg="StartContainer for \"d1d3efc21f26a51f25963f1139a3aaceb9d9b919fc8da10810317622972429e7\"" Aug 13 00:21:31.526289 containerd[2020]: time="2025-08-13T00:21:31.525944279Z" level=info msg="CreateContainer within sandbox \"31fac44eaf62ab5d64327501d0e38373a3777805199ca4a12c0368f3ad068400\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"44b21f3c9d6fe3a4355ffece41b990ff9ae6cea29176023a186969f5b6447992\"" Aug 13 00:21:31.527670 containerd[2020]: time="2025-08-13T00:21:31.527204627Z" level=info msg="StartContainer for \"44b21f3c9d6fe3a4355ffece41b990ff9ae6cea29176023a186969f5b6447992\"" Aug 13 00:21:31.595842 systemd[1]: Started cri-containerd-d1d3efc21f26a51f25963f1139a3aaceb9d9b919fc8da10810317622972429e7.scope - libcontainer container d1d3efc21f26a51f25963f1139a3aaceb9d9b919fc8da10810317622972429e7. Aug 13 00:21:31.612802 systemd[1]: Started cri-containerd-44b21f3c9d6fe3a4355ffece41b990ff9ae6cea29176023a186969f5b6447992.scope - libcontainer container 44b21f3c9d6fe3a4355ffece41b990ff9ae6cea29176023a186969f5b6447992. Aug 13 00:21:31.698132 containerd[2020]: time="2025-08-13T00:21:31.697490460Z" level=info msg="StartContainer for \"44b21f3c9d6fe3a4355ffece41b990ff9ae6cea29176023a186969f5b6447992\" returns successfully" Aug 13 00:21:31.713244 containerd[2020]: time="2025-08-13T00:21:31.712509240Z" level=info msg="StartContainer for \"d1d3efc21f26a51f25963f1139a3aaceb9d9b919fc8da10810317622972429e7\" returns successfully" Aug 13 00:21:34.622259 kubelet[3342]: E0813 00:21:34.622042 3342 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-36?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Aug 13 00:21:35.237493 systemd[1]: cri-containerd-355ee3411dc2f2ad86d025e85d1dd6ad192a2a53c1b362b63e4b525463e3a99e.scope: Deactivated successfully. Aug 13 00:21:35.238150 systemd[1]: cri-containerd-355ee3411dc2f2ad86d025e85d1dd6ad192a2a53c1b362b63e4b525463e3a99e.scope: Consumed 6.416s CPU time, 15.6M memory peak, 0B memory swap peak. 
Aug 13 00:21:35.283331 containerd[2020]: time="2025-08-13T00:21:35.283090741Z" level=info msg="shim disconnected" id=355ee3411dc2f2ad86d025e85d1dd6ad192a2a53c1b362b63e4b525463e3a99e namespace=k8s.io Aug 13 00:21:35.283331 containerd[2020]: time="2025-08-13T00:21:35.283253509Z" level=warning msg="cleaning up after shim disconnected" id=355ee3411dc2f2ad86d025e85d1dd6ad192a2a53c1b362b63e4b525463e3a99e namespace=k8s.io Aug 13 00:21:35.283331 containerd[2020]: time="2025-08-13T00:21:35.283279501Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:21:35.290845 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-355ee3411dc2f2ad86d025e85d1dd6ad192a2a53c1b362b63e4b525463e3a99e-rootfs.mount: Deactivated successfully. Aug 13 00:21:35.492231 kubelet[3342]: I0813 00:21:35.491627 3342 scope.go:117] "RemoveContainer" containerID="355ee3411dc2f2ad86d025e85d1dd6ad192a2a53c1b362b63e4b525463e3a99e" Aug 13 00:21:35.496701 containerd[2020]: time="2025-08-13T00:21:35.496586798Z" level=info msg="CreateContainer within sandbox \"6e49144cc71a649638660f72915de845f481ff16542cfc88820ec321d1058c3b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Aug 13 00:21:35.528329 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount149624801.mount: Deactivated successfully. Aug 13 00:21:35.533249 containerd[2020]: time="2025-08-13T00:21:35.533165307Z" level=info msg="CreateContainer within sandbox \"6e49144cc71a649638660f72915de845f481ff16542cfc88820ec321d1058c3b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"cca6c93439a0d61d0ce9d3325003a36fc283f994e5777eced7571253d3802204\"" Aug 13 00:21:35.534615 containerd[2020]: time="2025-08-13T00:21:35.534121443Z" level=info msg="StartContainer for \"cca6c93439a0d61d0ce9d3325003a36fc283f994e5777eced7571253d3802204\"" Aug 13 00:21:35.607832 systemd[1]: Started cri-containerd-cca6c93439a0d61d0ce9d3325003a36fc283f994e5777eced7571253d3802204.scope - libcontainer container cca6c93439a0d61d0ce9d3325003a36fc283f994e5777eced7571253d3802204. Aug 13 00:21:35.681412 containerd[2020]: time="2025-08-13T00:21:35.681238371Z" level=info msg="StartContainer for \"cca6c93439a0d61d0ce9d3325003a36fc283f994e5777eced7571253d3802204\" returns successfully" Aug 13 00:21:36.286184 systemd[1]: run-containerd-runc-k8s.io-cca6c93439a0d61d0ce9d3325003a36fc283f994e5777eced7571253d3802204-runc.jWFaVx.mount: Deactivated successfully. Aug 13 00:21:43.254813 systemd[1]: cri-containerd-44b21f3c9d6fe3a4355ffece41b990ff9ae6cea29176023a186969f5b6447992.scope: Deactivated successfully. Aug 13 00:21:43.293311 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-44b21f3c9d6fe3a4355ffece41b990ff9ae6cea29176023a186969f5b6447992-rootfs.mount: Deactivated successfully. 
Aug 13 00:21:43.305743 containerd[2020]: time="2025-08-13T00:21:43.305433333Z" level=info msg="shim disconnected" id=44b21f3c9d6fe3a4355ffece41b990ff9ae6cea29176023a186969f5b6447992 namespace=k8s.io Aug 13 00:21:43.305743 containerd[2020]: time="2025-08-13T00:21:43.305664093Z" level=warning msg="cleaning up after shim disconnected" id=44b21f3c9d6fe3a4355ffece41b990ff9ae6cea29176023a186969f5b6447992 namespace=k8s.io Aug 13 00:21:43.305743 containerd[2020]: time="2025-08-13T00:21:43.305687133Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:21:43.524282 kubelet[3342]: I0813 00:21:43.523646 3342 scope.go:117] "RemoveContainer" containerID="f3e9916a5110d1d348ff295f47a2afac5933998c7aa9410e214aa387349d3cf0" Aug 13 00:21:43.524282 kubelet[3342]: I0813 00:21:43.524040 3342 scope.go:117] "RemoveContainer" containerID="44b21f3c9d6fe3a4355ffece41b990ff9ae6cea29176023a186969f5b6447992" Aug 13 00:21:43.524282 kubelet[3342]: E0813 00:21:43.524259 3342 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-747864d56d-88hvl_tigera-operator(15c55c70-51de-427f-a71b-7ced83a2b08b)\"" pod="tigera-operator/tigera-operator-747864d56d-88hvl" podUID="15c55c70-51de-427f-a71b-7ced83a2b08b" Aug 13 00:21:43.526808 containerd[2020]: time="2025-08-13T00:21:43.526742158Z" level=info msg="RemoveContainer for \"f3e9916a5110d1d348ff295f47a2afac5933998c7aa9410e214aa387349d3cf0\"" Aug 13 00:21:43.534195 containerd[2020]: time="2025-08-13T00:21:43.534028510Z" level=info msg="RemoveContainer for \"f3e9916a5110d1d348ff295f47a2afac5933998c7aa9410e214aa387349d3cf0\" returns successfully" Aug 13 00:21:44.623798 kubelet[3342]: E0813 00:21:44.623579 3342 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-36?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
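
The pod_workers.go error above shows the first CrashLoopBackOff delay for the tigera-operator container: "back-off 10s restarting failed container". The kubelet doubles this delay on each subsequent crash up to a cap (commonly five minutes); a sketch of that schedule under the assumption base=10s, cap=300s:

    # Assumed backoff schedule: start at 10s, double per crash, cap at 300s.
    base, cap = 10, 300
    delay, schedule = base, []
    for _ in range(7):
        schedule.append(delay)
        delay = min(delay * 2, cap)
    print(schedule)    # [10, 20, 40, 80, 160, 300, 300]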