May 14 18:13:47.803089 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 14 18:13:47.803112 kernel: Linux version 6.12.20-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Wed May 14 16:42:23 -00 2025
May 14 18:13:47.803123 kernel: KASLR enabled
May 14 18:13:47.803129 kernel: efi: EFI v2.7 by EDK II
May 14 18:13:47.803134 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb228018 ACPI 2.0=0xdb9b8018 RNG=0xdb9b8a18 MEMRESERVE=0xdb221f18
May 14 18:13:47.803140 kernel: random: crng init done
May 14 18:13:47.803147 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
May 14 18:13:47.803152 kernel: secureboot: Secure boot enabled
May 14 18:13:47.803164 kernel: ACPI: Early table checksum verification disabled
May 14 18:13:47.803171 kernel: ACPI: RSDP 0x00000000DB9B8018 000024 (v02 BOCHS )
May 14 18:13:47.803177 kernel: ACPI: XSDT 0x00000000DB9B8F18 000064 (v01 BOCHS BXPC 00000001 01000013)
May 14 18:13:47.803183 kernel: ACPI: FACP 0x00000000DB9B8B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:13:47.803189 kernel: ACPI: DSDT 0x00000000DB904018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:13:47.803195 kernel: ACPI: APIC 0x00000000DB9B8C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:13:47.803202 kernel: ACPI: PPTT 0x00000000DB9B8098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:13:47.803209 kernel: ACPI: GTDT 0x00000000DB9B8818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:13:47.803215 kernel: ACPI: MCFG 0x00000000DB9B8A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:13:47.803222 kernel: ACPI: SPCR 0x00000000DB9B8918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:13:47.803228 kernel: ACPI: DBG2 0x00000000DB9B8998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:13:47.803234 kernel: ACPI: IORT 0x00000000DB9B8198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:13:47.803240 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 14 18:13:47.803246 kernel: ACPI: Use ACPI SPCR as default console: Yes
May 14 18:13:47.803252 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 14 18:13:47.803258 kernel: NODE_DATA(0) allocated [mem 0xdc737dc0-0xdc73efff]
May 14 18:13:47.803264 kernel: Zone ranges:
May 14 18:13:47.803271 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 14 18:13:47.803277 kernel: DMA32 empty
May 14 18:13:47.803283 kernel: Normal empty
May 14 18:13:47.803289 kernel: Device empty
May 14 18:13:47.803295 kernel: Movable zone start for each node
May 14 18:13:47.803301 kernel: Early memory node ranges
May 14 18:13:47.803307 kernel: node 0: [mem 0x0000000040000000-0x00000000dbb4ffff]
May 14 18:13:47.803313 kernel: node 0: [mem 0x00000000dbb50000-0x00000000dbe7ffff]
May 14 18:13:47.803319 kernel: node 0: [mem 0x00000000dbe80000-0x00000000dbe9ffff]
May 14 18:13:47.803325 kernel: node 0: [mem 0x00000000dbea0000-0x00000000dbedffff]
May 14 18:13:47.803331 kernel: node 0: [mem 0x00000000dbee0000-0x00000000dbf1ffff]
May 14 18:13:47.803337 kernel: node 0: [mem 0x00000000dbf20000-0x00000000dbf6ffff]
May 14 18:13:47.803344 kernel: node 0: [mem 0x00000000dbf70000-0x00000000dcbfffff]
May 14 18:13:47.803350 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff]
May 14 18:13:47.803357 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 14 18:13:47.803365 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 14 18:13:47.803371 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 14 18:13:47.803378 kernel: psci: probing for conduit method from ACPI.
May 14 18:13:47.803384 kernel: psci: PSCIv1.1 detected in firmware.
May 14 18:13:47.803392 kernel: psci: Using standard PSCI v0.2 function IDs
May 14 18:13:47.803398 kernel: psci: Trusted OS migration not required
May 14 18:13:47.803405 kernel: psci: SMC Calling Convention v1.1
May 14 18:13:47.803411 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 14 18:13:47.803418 kernel: percpu: Embedded 33 pages/cpu s98136 r8192 d28840 u135168
May 14 18:13:47.803424 kernel: pcpu-alloc: s98136 r8192 d28840 u135168 alloc=33*4096
May 14 18:13:47.803431 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 14 18:13:47.803437 kernel: Detected PIPT I-cache on CPU0
May 14 18:13:47.803444 kernel: CPU features: detected: GIC system register CPU interface
May 14 18:13:47.803451 kernel: CPU features: detected: Spectre-v4
May 14 18:13:47.803458 kernel: CPU features: detected: Spectre-BHB
May 14 18:13:47.803464 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 14 18:13:47.803471 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 14 18:13:47.803477 kernel: CPU features: detected: ARM erratum 1418040
May 14 18:13:47.803484 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 14 18:13:47.803490 kernel: alternatives: applying boot alternatives
May 14 18:13:47.803498 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=fb5d39925446c9958629410eadbe2d2aa0566996d55f4385bdd8a5ce4ad5f562
May 14 18:13:47.803504 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 14 18:13:47.803511 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 14 18:13:47.803517 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 14 18:13:47.803525 kernel: Fallback order for Node 0: 0
May 14 18:13:47.803531 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
May 14 18:13:47.803537 kernel: Policy zone: DMA
May 14 18:13:47.803544 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 14 18:13:47.803550 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
May 14 18:13:47.803556 kernel: software IO TLB: area num 4.
May 14 18:13:47.803563 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
May 14 18:13:47.803570 kernel: software IO TLB: mapped [mem 0x00000000db504000-0x00000000db904000] (4MB)
May 14 18:13:47.803576 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 14 18:13:47.803583 kernel: rcu: Preemptible hierarchical RCU implementation.
May 14 18:13:47.803590 kernel: rcu: RCU event tracing is enabled.
May 14 18:13:47.803596 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 14 18:13:47.803604 kernel: Trampoline variant of Tasks RCU enabled.
May 14 18:13:47.803611 kernel: Tracing variant of Tasks RCU enabled.
May 14 18:13:47.803617 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 14 18:13:47.803624 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 14 18:13:47.803630 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 14 18:13:47.803637 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 14 18:13:47.803643 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 14 18:13:47.803649 kernel: GICv3: 256 SPIs implemented
May 14 18:13:47.803656 kernel: GICv3: 0 Extended SPIs implemented
May 14 18:13:47.803662 kernel: Root IRQ handler: gic_handle_irq
May 14 18:13:47.803669 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 14 18:13:47.803677 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
May 14 18:13:47.803683 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 14 18:13:47.803690 kernel: ITS [mem 0x08080000-0x0809ffff]
May 14 18:13:47.803696 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400e0000 (indirect, esz 8, psz 64K, shr 1)
May 14 18:13:47.803703 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400f0000 (flat, esz 8, psz 64K, shr 1)
May 14 18:13:47.803710 kernel: GICv3: using LPI property table @0x0000000040100000
May 14 18:13:47.803716 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040110000
May 14 18:13:47.803723 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 14 18:13:47.803729 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 14 18:13:47.803736 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 14 18:13:47.803743 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 14 18:13:47.803749 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 14 18:13:47.803758 kernel: arm-pv: using stolen time PV
May 14 18:13:47.803764 kernel: Console: colour dummy device 80x25
May 14 18:13:47.803771 kernel: ACPI: Core revision 20240827
May 14 18:13:47.803792 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 14 18:13:47.803799 kernel: pid_max: default: 32768 minimum: 301
May 14 18:13:47.803806 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 14 18:13:47.803813 kernel: landlock: Up and running.
May 14 18:13:47.803819 kernel: SELinux: Initializing.
May 14 18:13:47.803826 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 18:13:47.803834 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 18:13:47.803840 kernel: rcu: Hierarchical SRCU implementation.
May 14 18:13:47.803847 kernel: rcu: Max phase no-delay instances is 400.
May 14 18:13:47.803854 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 14 18:13:47.803860 kernel: Remapping and enabling EFI services.
May 14 18:13:47.803867 kernel: smp: Bringing up secondary CPUs ...
May 14 18:13:47.803873 kernel: Detected PIPT I-cache on CPU1
May 14 18:13:47.803880 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 14 18:13:47.803887 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040120000
May 14 18:13:47.803895 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 14 18:13:47.803906 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 14 18:13:47.803913 kernel: Detected PIPT I-cache on CPU2
May 14 18:13:47.803921 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 14 18:13:47.803928 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040130000
May 14 18:13:47.803942 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 14 18:13:47.803962 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 14 18:13:47.803969 kernel: Detected PIPT I-cache on CPU3
May 14 18:13:47.803976 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 14 18:13:47.803985 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040140000
May 14 18:13:47.803992 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 14 18:13:47.803999 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 14 18:13:47.804006 kernel: smp: Brought up 1 node, 4 CPUs
May 14 18:13:47.804013 kernel: SMP: Total of 4 processors activated.
May 14 18:13:47.804019 kernel: CPU: All CPU(s) started at EL1
May 14 18:13:47.804026 kernel: CPU features: detected: 32-bit EL0 Support
May 14 18:13:47.804033 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 14 18:13:47.804042 kernel: CPU features: detected: Common not Private translations
May 14 18:13:47.804049 kernel: CPU features: detected: CRC32 instructions
May 14 18:13:47.804055 kernel: CPU features: detected: Enhanced Virtualization Traps
May 14 18:13:47.804062 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 14 18:13:47.804069 kernel: CPU features: detected: LSE atomic instructions
May 14 18:13:47.804076 kernel: CPU features: detected: Privileged Access Never
May 14 18:13:47.804083 kernel: CPU features: detected: RAS Extension Support
May 14 18:13:47.804090 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 14 18:13:47.804097 kernel: alternatives: applying system-wide alternatives
May 14 18:13:47.804104 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
May 14 18:13:47.804114 kernel: Memory: 2438884K/2572288K available (11072K kernel code, 2276K rwdata, 8928K rodata, 39424K init, 1034K bss, 127636K reserved, 0K cma-reserved)
May 14 18:13:47.804122 kernel: devtmpfs: initialized
May 14 18:13:47.804129 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 14 18:13:47.804137 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 14 18:13:47.804144 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 14 18:13:47.804151 kernel: 0 pages in range for non-PLT usage
May 14 18:13:47.804158 kernel: 508544 pages in range for PLT usage
May 14 18:13:47.804165 kernel: pinctrl core: initialized pinctrl subsystem
May 14 18:13:47.804172 kernel: SMBIOS 3.0.0 present.
May 14 18:13:47.804181 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 14 18:13:47.804188 kernel: DMI: Memory slots populated: 1/1
May 14 18:13:47.804196 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 14 18:13:47.804204 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 14 18:13:47.804211 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 14 18:13:47.804219 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 14 18:13:47.804226 kernel: audit: initializing netlink subsys (disabled)
May 14 18:13:47.804233 kernel: audit: type=2000 audit(0.037:1): state=initialized audit_enabled=0 res=1
May 14 18:13:47.804242 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 14 18:13:47.804250 kernel: cpuidle: using governor menu
May 14 18:13:47.804257 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 14 18:13:47.804265 kernel: ASID allocator initialised with 32768 entries
May 14 18:13:47.804272 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 14 18:13:47.804279 kernel: Serial: AMBA PL011 UART driver
May 14 18:13:47.804286 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 14 18:13:47.804294 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 14 18:13:47.804301 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 14 18:13:47.804310 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 14 18:13:47.804317 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 14 18:13:47.804325 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 14 18:13:47.804332 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 14 18:13:47.804340 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 14 18:13:47.804347 kernel: ACPI: Added _OSI(Module Device)
May 14 18:13:47.804354 kernel: ACPI: Added _OSI(Processor Device)
May 14 18:13:47.804362 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 14 18:13:47.804369 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 14 18:13:47.804377 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 14 18:13:47.804386 kernel: ACPI: Interpreter enabled
May 14 18:13:47.804393 kernel: ACPI: Using GIC for interrupt routing
May 14 18:13:47.804401 kernel: ACPI: MCFG table detected, 1 entries
May 14 18:13:47.804408 kernel: ACPI: CPU0 has been hot-added
May 14 18:13:47.804415 kernel: ACPI: CPU1 has been hot-added
May 14 18:13:47.804423 kernel: ACPI: CPU2 has been hot-added
May 14 18:13:47.804430 kernel: ACPI: CPU3 has been hot-added
May 14 18:13:47.804437 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 14 18:13:47.804444 kernel: printk: legacy console [ttyAMA0] enabled
May 14 18:13:47.804453 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 14 18:13:47.804582 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 14 18:13:47.804649 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 14 18:13:47.804712 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 14 18:13:47.804769 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 14 18:13:47.804825 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 14 18:13:47.804834 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 14 18:13:47.804843 kernel: PCI host bridge to bus 0000:00
May 14 18:13:47.804908 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 14 18:13:47.805031 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 14 18:13:47.805090 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 14 18:13:47.805143 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 14 18:13:47.805218 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
May 14 18:13:47.805291 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
May 14 18:13:47.805352 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
May 14 18:13:47.805411 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
May 14 18:13:47.805470 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
May 14 18:13:47.805529 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
May 14 18:13:47.805588 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
May 14 18:13:47.805647 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
May 14 18:13:47.805702 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 14 18:13:47.805754 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 14 18:13:47.805806 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 14 18:13:47.805816 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 14 18:13:47.805823 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 14 18:13:47.805829 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 14 18:13:47.805836 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 14 18:13:47.805843 kernel: iommu: Default domain type: Translated
May 14 18:13:47.805851 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 14 18:13:47.805858 kernel: efivars: Registered efivars operations
May 14 18:13:47.805865 kernel: vgaarb: loaded
May 14 18:13:47.805872 kernel: clocksource: Switched to clocksource arch_sys_counter
May 14 18:13:47.805879 kernel: VFS: Disk quotas dquot_6.6.0
May 14 18:13:47.805886 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 14 18:13:47.805893 kernel: pnp: PnP ACPI init
May 14 18:13:47.805979 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 14 18:13:47.805991 kernel: pnp: PnP ACPI: found 1 devices
May 14 18:13:47.806000 kernel: NET: Registered PF_INET protocol family
May 14 18:13:47.806008 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 14 18:13:47.806015 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 14 18:13:47.806022 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 14 18:13:47.806029 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 14 18:13:47.806036 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 14 18:13:47.806042 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 14 18:13:47.806050 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 18:13:47.806056 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 18:13:47.806065 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 14 18:13:47.806071 kernel: PCI: CLS 0 bytes, default 64
May 14 18:13:47.806078 kernel: kvm [1]: HYP mode not available
May 14 18:13:47.806085 kernel: Initialise system trusted keyrings
May 14 18:13:47.806092 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 14 18:13:47.806099 kernel: Key type asymmetric registered
May 14 18:13:47.806105 kernel: Asymmetric key parser 'x509' registered
May 14 18:13:47.806112 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 14 18:13:47.806119 kernel: io scheduler mq-deadline registered
May 14 18:13:47.806127 kernel: io scheduler kyber registered
May 14 18:13:47.806135 kernel: io scheduler bfq registered
May 14 18:13:47.806142 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 14 18:13:47.806148 kernel: ACPI: button: Power Button [PWRB]
May 14 18:13:47.806156 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 14 18:13:47.806219 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 14 18:13:47.806229 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 14 18:13:47.806236 kernel: thunder_xcv, ver 1.0
May 14 18:13:47.806242 kernel: thunder_bgx, ver 1.0
May 14 18:13:47.806251 kernel: nicpf, ver 1.0
May 14 18:13:47.806258 kernel: nicvf, ver 1.0
May 14 18:13:47.806324 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 14 18:13:47.806380 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-14T18:13:47 UTC (1747246427)
May 14 18:13:47.806390 kernel: hid: raw HID events driver (C) Jiri Kosina
May 14 18:13:47.806396 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
May 14 18:13:47.806403 kernel: watchdog: NMI not fully supported
May 14 18:13:47.806410 kernel: watchdog: Hard watchdog permanently disabled
May 14 18:13:47.806418 kernel: NET: Registered PF_INET6 protocol family
May 14 18:13:47.806425 kernel: Segment Routing with IPv6
May 14 18:13:47.806432 kernel: In-situ OAM (IOAM) with IPv6
May 14 18:13:47.806439 kernel: NET: Registered PF_PACKET protocol family
May 14 18:13:47.806446 kernel: Key type dns_resolver registered
May 14 18:13:47.806452 kernel: registered taskstats version 1
May 14 18:13:47.806459 kernel: Loading compiled-in X.509 certificates
May 14 18:13:47.806466 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.20-flatcar: c0c250ba312a1bb9bceb2432c486db6e5999df1a'
May 14 18:13:47.806473 kernel: Demotion targets for Node 0: null
May 14 18:13:47.806481 kernel: Key type .fscrypt registered
May 14 18:13:47.806488 kernel: Key type fscrypt-provisioning registered
May 14 18:13:47.806495 kernel: ima: No TPM chip found, activating TPM-bypass!
May 14 18:13:47.806502 kernel: ima: Allocated hash algorithm: sha1
May 14 18:13:47.806509 kernel: ima: No architecture policies found
May 14 18:13:47.806516 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 14 18:13:47.806522 kernel: clk: Disabling unused clocks
May 14 18:13:47.806529 kernel: PM: genpd: Disabling unused power domains
May 14 18:13:47.806536 kernel: Warning: unable to open an initial console.
May 14 18:13:47.806544 kernel: Freeing unused kernel memory: 39424K
May 14 18:13:47.806551 kernel: Run /init as init process
May 14 18:13:47.806558 kernel: with arguments:
May 14 18:13:47.806565 kernel: /init
May 14 18:13:47.806572 kernel: with environment:
May 14 18:13:47.806578 kernel: HOME=/
May 14 18:13:47.806585 kernel: TERM=linux
May 14 18:13:47.806592 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 14 18:13:47.806599 systemd[1]: Successfully made /usr/ read-only.
May 14 18:13:47.806610 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 14 18:13:47.806618 systemd[1]: Detected virtualization kvm.
May 14 18:13:47.806625 systemd[1]: Detected architecture arm64.
May 14 18:13:47.806632 systemd[1]: Running in initrd.
May 14 18:13:47.806639 systemd[1]: No hostname configured, using default hostname.
May 14 18:13:47.806647 systemd[1]: Hostname set to .
May 14 18:13:47.806654 systemd[1]: Initializing machine ID from VM UUID.
May 14 18:13:47.806662 systemd[1]: Queued start job for default target initrd.target.
May 14 18:13:47.806669 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 18:13:47.806677 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 18:13:47.806685 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 14 18:13:47.806692 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 14 18:13:47.806700 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 14 18:13:47.806708 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 14 18:13:47.806718 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 14 18:13:47.806725 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 14 18:13:47.806732 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 18:13:47.806740 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 14 18:13:47.806748 systemd[1]: Reached target paths.target - Path Units.
May 14 18:13:47.806755 systemd[1]: Reached target slices.target - Slice Units.
May 14 18:13:47.806762 systemd[1]: Reached target swap.target - Swaps.
May 14 18:13:47.806770 systemd[1]: Reached target timers.target - Timer Units.
May 14 18:13:47.806779 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 14 18:13:47.806786 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 14 18:13:47.806794 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 14 18:13:47.806801 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 14 18:13:47.806808 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 14 18:13:47.806816 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 14 18:13:47.806823 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 18:13:47.806831 systemd[1]: Reached target sockets.target - Socket Units.
May 14 18:13:47.806839 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 14 18:13:47.806847 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 14 18:13:47.806854 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 14 18:13:47.806862 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 14 18:13:47.806869 systemd[1]: Starting systemd-fsck-usr.service...
May 14 18:13:47.806877 systemd[1]: Starting systemd-journald.service - Journal Service...
May 14 18:13:47.806884 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 14 18:13:47.806891 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 18:13:47.806899 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 18:13:47.806908 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 14 18:13:47.806915 systemd[1]: Finished systemd-fsck-usr.service.
May 14 18:13:47.806923 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 14 18:13:47.806963 systemd-journald[244]: Collecting audit messages is disabled.
May 14 18:13:47.806986 systemd-journald[244]: Journal started
May 14 18:13:47.807006 systemd-journald[244]: Runtime Journal (/run/log/journal/3e4d824e241b407c88084edd3ca9f30a) is 6M, max 48.5M, 42.4M free.
May 14 18:13:47.810294 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 14 18:13:47.797051 systemd-modules-load[246]: Inserted module 'overlay'
May 14 18:13:47.812413 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 18:13:47.813914 systemd-modules-load[246]: Inserted module 'br_netfilter'
May 14 18:13:47.815262 kernel: Bridge firewalling registered
May 14 18:13:47.815278 systemd[1]: Started systemd-journald.service - Journal Service.
May 14 18:13:47.816525 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 14 18:13:47.817841 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 14 18:13:47.822199 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 18:13:47.823917 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 18:13:47.827089 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 14 18:13:47.837690 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 14 18:13:47.846983 systemd-tmpfiles[273]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 14 18:13:47.848163 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 18:13:47.850661 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 18:13:47.854144 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 18:13:47.855571 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 18:13:47.860482 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 14 18:13:47.862861 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 18:13:47.895847 dracut-cmdline[289]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=fb5d39925446c9958629410eadbe2d2aa0566996d55f4385bdd8a5ce4ad5f562
May 14 18:13:47.911588 systemd-resolved[290]: Positive Trust Anchors:
May 14 18:13:47.911608 systemd-resolved[290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 14 18:13:47.911639 systemd-resolved[290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 14 18:13:47.916496 systemd-resolved[290]: Defaulting to hostname 'linux'.
May 14 18:13:47.917484 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 14 18:13:47.921078 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 14 18:13:47.975978 kernel: SCSI subsystem initialized
May 14 18:13:47.981967 kernel: Loading iSCSI transport class v2.0-870.
May 14 18:13:47.989997 kernel: iscsi: registered transport (tcp)
May 14 18:13:48.004164 kernel: iscsi: registered transport (qla4xxx)
May 14 18:13:48.004186 kernel: QLogic iSCSI HBA Driver
May 14 18:13:48.029616 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 14 18:13:48.051732 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 14 18:13:48.054673 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 14 18:13:48.101017 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 14 18:13:48.103731 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 14 18:13:48.191980 kernel: raid6: neonx8 gen() 13498 MB/s
May 14 18:13:48.208967 kernel: raid6: neonx4 gen() 12862 MB/s
May 14 18:13:48.225966 kernel: raid6: neonx2 gen() 11478 MB/s
May 14 18:13:48.242968 kernel: raid6: neonx1 gen() 10066 MB/s
May 14 18:13:48.259964 kernel: raid6: int64x8 gen() 6695 MB/s
May 14 18:13:48.276963 kernel: raid6: int64x4 gen() 7261 MB/s
May 14 18:13:48.293964 kernel: raid6: int64x2 gen() 6001 MB/s
May 14 18:13:48.310970 kernel: raid6: int64x1 gen() 5047 MB/s
May 14 18:13:48.311002 kernel: raid6: using algorithm neonx8 gen() 13498 MB/s
May 14 18:13:48.327970 kernel: raid6: .... xor() 12033 MB/s, rmw enabled
May 14 18:13:48.327988 kernel: raid6: using neon recovery algorithm
May 14 18:13:48.334141 kernel: xor: measuring software checksum speed
May 14 18:13:48.334174 kernel: 8regs : 21641 MB/sec
May 14 18:13:48.334186 kernel: 32regs : 21699 MB/sec
May 14 18:13:48.335131 kernel: arm64_neon : 28118 MB/sec
May 14 18:13:48.335144 kernel: xor: using function: arm64_neon (28118 MB/sec)
May 14 18:13:48.396985 kernel: Btrfs loaded, zoned=no, fsverity=no
May 14 18:13:48.404665 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 14 18:13:48.407743 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 18:13:48.444050 systemd-udevd[499]: Using default interface naming scheme 'v255'.
May 14 18:13:48.453737 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 18:13:48.459617 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 14 18:13:48.485413 dracut-pre-trigger[511]: rd.md=0: removing MD RAID activation
May 14 18:13:48.514639 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 14 18:13:48.518096 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 14 18:13:48.573729 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 18:13:48.578351 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 14 18:13:48.634984 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 14 18:13:48.656399 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 14 18:13:48.656516 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 14 18:13:48.656527 kernel: GPT:9289727 != 19775487
May 14 18:13:48.656536 kernel: GPT:Alternate GPT header not at the end of the disk.
May 14 18:13:48.656545 kernel: GPT:9289727 != 19775487
May 14 18:13:48.656553 kernel: GPT: Use GNU Parted to correct GPT errors.
May 14 18:13:48.656561 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 18:13:48.650514 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 18:13:48.650584 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 18:13:48.660066 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 14 18:13:48.666117 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 18:13:48.690625 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 14 18:13:48.692116 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 14 18:13:48.695292 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 18:13:48.705575 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 14 18:13:48.714123 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 14 18:13:48.720248 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 14 18:13:48.721235 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 14 18:13:48.723921 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 14 18:13:48.727035 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 18:13:48.730116 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 14 18:13:48.732735 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 14 18:13:48.734666 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 14 18:13:48.753040 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 14 18:13:48.755524 disk-uuid[592]: Primary Header is updated.
May 14 18:13:48.755524 disk-uuid[592]: Secondary Entries is updated.
May 14 18:13:48.755524 disk-uuid[592]: Secondary Header is updated.
May 14 18:13:48.759990 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 18:13:49.767288 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 18:13:49.769040 disk-uuid[600]: The operation has completed successfully.
May 14 18:13:49.770098 kernel: block device autoloading is deprecated and will be removed.
May 14 18:13:49.794016 systemd[1]: disk-uuid.service: Deactivated successfully.
May 14 18:13:49.794113 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 14 18:13:49.823249 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 14 18:13:49.845331 sh[613]: Success
May 14 18:13:49.860007 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 14 18:13:49.860055 kernel: device-mapper: uevent: version 1.0.3
May 14 18:13:49.861216 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 14 18:13:49.873980 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
May 14 18:13:49.913208 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 14 18:13:49.915756 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 14 18:13:49.934551 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 14 18:13:49.941272 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 14 18:13:49.941310 kernel: BTRFS: device fsid e21bbf34-4c71-4257-bd6f-908a2b81e5ab devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (625)
May 14 18:13:49.942559 kernel: BTRFS info (device dm-0): first mount of filesystem e21bbf34-4c71-4257-bd6f-908a2b81e5ab
May 14 18:13:49.942575 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 14 18:13:49.943968 kernel: BTRFS info (device dm-0): using free-space-tree
May 14 18:13:49.947425 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 14 18:13:49.948742 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 14 18:13:49.950080 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 14 18:13:49.950854 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 14 18:13:49.954297 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 14 18:13:49.981992 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (656)
May 14 18:13:49.984305 kernel: BTRFS info (device vda6): first mount of filesystem 6d47052f-e956-47a0-903a-525ae08a05f2
May 14 18:13:49.984338 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 14 18:13:49.984987 kernel: BTRFS info (device vda6): using free-space-tree
May 14 18:13:49.991972 kernel: BTRFS info (device vda6): last unmount of filesystem 6d47052f-e956-47a0-903a-525ae08a05f2
May 14 18:13:49.992182 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 14 18:13:49.994768 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 14 18:13:50.061882 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 14 18:13:50.067807 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 14 18:13:50.113904 systemd-networkd[797]: lo: Link UP
May 14 18:13:50.114617 systemd-networkd[797]: lo: Gained carrier
May 14 18:13:50.115387 systemd-networkd[797]: Enumeration completed
May 14 18:13:50.115498 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 14 18:13:50.116565 systemd[1]: Reached target network.target - Network.
May 14 18:13:50.118187 systemd-networkd[797]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 18:13:50.118191 systemd-networkd[797]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 14 18:13:50.119914 systemd-networkd[797]: eth0: Link UP
May 14 18:13:50.119917 systemd-networkd[797]: eth0: Gained carrier
May 14 18:13:50.119935 systemd-networkd[797]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 18:13:50.134811 ignition[702]: Ignition 2.21.0
May 14 18:13:50.134829 ignition[702]: Stage: fetch-offline
May 14 18:13:50.134866 ignition[702]: no configs at "/usr/lib/ignition/base.d"
May 14 18:13:50.134874 ignition[702]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 18:13:50.135100 ignition[702]: parsed url from cmdline: ""
May 14 18:13:50.135104 ignition[702]: no config URL provided
May 14 18:13:50.135109 ignition[702]: reading system config file "/usr/lib/ignition/user.ign"
May 14 18:13:50.135116 ignition[702]: no config at "/usr/lib/ignition/user.ign"
May 14 18:13:50.139845 systemd-networkd[797]: eth0: DHCPv4 address 10.0.0.119/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 14 18:13:50.135135 ignition[702]: op(1): [started] loading QEMU firmware config module
May 14 18:13:50.135140 ignition[702]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 14 18:13:50.143671 ignition[702]: op(1): [finished] loading QEMU firmware config module
May 14 18:13:50.143697 ignition[702]: QEMU firmware config was not found. Ignoring...
May 14 18:13:50.182845 ignition[702]: parsing config with SHA512: 0f2b3bcd560f5e9b3f1cab9895ba15a03cb714a0bf452152d66a742831f8d6e13c9fb2256adf48e32e77de8e5f1d02587d7c33c33b219b7b927400ac561a770e
May 14 18:13:50.188238 unknown[702]: fetched base config from "system"
May 14 18:13:50.188253 unknown[702]: fetched user config from "qemu"
May 14 18:13:50.188803 ignition[702]: fetch-offline: fetch-offline passed
May 14 18:13:50.190826 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 14 18:13:50.188864 ignition[702]: Ignition finished successfully
May 14 18:13:50.192304 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 14 18:13:50.193105 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 14 18:13:50.220090 ignition[811]: Ignition 2.21.0
May 14 18:13:50.220106 ignition[811]: Stage: kargs
May 14 18:13:50.221985 ignition[811]: no configs at "/usr/lib/ignition/base.d"
May 14 18:13:50.222276 ignition[811]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 18:13:50.223140 ignition[811]: kargs: kargs passed
May 14 18:13:50.223194 ignition[811]: Ignition finished successfully
May 14 18:13:50.227056 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 14 18:13:50.229181 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 14 18:13:50.255297 ignition[819]: Ignition 2.21.0
May 14 18:13:50.255316 ignition[819]: Stage: disks
May 14 18:13:50.255472 ignition[819]: no configs at "/usr/lib/ignition/base.d"
May 14 18:13:50.255481 ignition[819]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 18:13:50.256734 ignition[819]: disks: disks passed
May 14 18:13:50.256804 ignition[819]: Ignition finished successfully
May 14 18:13:50.260025 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 14 18:13:50.261979 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 14 18:13:50.264115 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 14 18:13:50.265373 systemd[1]: Reached target local-fs.target - Local File Systems.
May 14 18:13:50.267318 systemd[1]: Reached target sysinit.target - System Initialization.
May 14 18:13:50.269065 systemd[1]: Reached target basic.target - Basic System.
May 14 18:13:50.271788 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 14 18:13:50.299450 systemd-resolved[290]: Detected conflict on linux IN A 10.0.0.119
May 14 18:13:50.299466 systemd-resolved[290]: Hostname conflict, changing published hostname from 'linux' to 'linux10'.
May 14 18:13:50.302449 systemd-fsck[828]: ROOT: clean, 15/553520 files, 52789/553472 blocks
May 14 18:13:50.304423 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 14 18:13:50.308065 systemd[1]: Mounting sysroot.mount - /sysroot...
May 14 18:13:50.376978 kernel: EXT4-fs (vda9): mounted filesystem a9c1ea72-ce96-48c1-8c16-d7102e51beed r/w with ordered data mode. Quota mode: none.
May 14 18:13:50.377781 systemd[1]: Mounted sysroot.mount - /sysroot.
May 14 18:13:50.379151 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 14 18:13:50.386756 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 14 18:13:50.388863 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 14 18:13:50.390040 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 14 18:13:50.390118 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 14 18:13:50.390151 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 14 18:13:50.401396 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 14 18:13:50.406143 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 14 18:13:50.411715 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (836)
May 14 18:13:50.411738 kernel: BTRFS info (device vda6): first mount of filesystem 6d47052f-e956-47a0-903a-525ae08a05f2
May 14 18:13:50.411749 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 14 18:13:50.411758 kernel: BTRFS info (device vda6): using free-space-tree
May 14 18:13:50.415684 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 14 18:13:50.457332 initrd-setup-root[861]: cut: /sysroot/etc/passwd: No such file or directory
May 14 18:13:50.461429 initrd-setup-root[868]: cut: /sysroot/etc/group: No such file or directory
May 14 18:13:50.465039 initrd-setup-root[875]: cut: /sysroot/etc/shadow: No such file or directory
May 14 18:13:50.468815 initrd-setup-root[882]: cut: /sysroot/etc/gshadow: No such file or directory
May 14 18:13:50.567124 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 14 18:13:50.569343 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 14 18:13:50.571215 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 14 18:13:50.587998 kernel: BTRFS info (device vda6): last unmount of filesystem 6d47052f-e956-47a0-903a-525ae08a05f2
May 14 18:13:50.603033 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 14 18:13:50.611035 ignition[951]: INFO : Ignition 2.21.0
May 14 18:13:50.611035 ignition[951]: INFO : Stage: mount
May 14 18:13:50.614070 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 18:13:50.614070 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 18:13:50.616485 ignition[951]: INFO : mount: mount passed
May 14 18:13:50.616485 ignition[951]: INFO : Ignition finished successfully
May 14 18:13:50.617001 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 14 18:13:50.620410 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 14 18:13:50.940606 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 14 18:13:50.942835 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 14 18:13:50.969845 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (964)
May 14 18:13:50.969896 kernel: BTRFS info (device vda6): first mount of filesystem 6d47052f-e956-47a0-903a-525ae08a05f2
May 14 18:13:50.969969 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 14 18:13:50.971155 kernel: BTRFS info (device vda6): using free-space-tree
May 14 18:13:50.975061 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 14 18:13:51.008602 ignition[981]: INFO : Ignition 2.21.0
May 14 18:13:51.008602 ignition[981]: INFO : Stage: files
May 14 18:13:51.010992 ignition[981]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 18:13:51.010992 ignition[981]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 18:13:51.010992 ignition[981]: DEBUG : files: compiled without relabeling support, skipping
May 14 18:13:51.016111 ignition[981]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 14 18:13:51.016111 ignition[981]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 14 18:13:51.019637 ignition[981]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 14 18:13:51.021088 ignition[981]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 14 18:13:51.021088 ignition[981]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 14 18:13:51.020370 unknown[981]: wrote ssh authorized keys file for user: core
May 14 18:13:51.024937 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 14 18:13:51.024937 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 14 18:13:51.154428 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 14 18:13:51.496201 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 14 18:13:51.498347 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 14 18:13:51.498347 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 14 18:13:51.498347 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 14 18:13:51.498347 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 14 18:13:51.498347 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 14 18:13:51.498347 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 14 18:13:51.498347 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 14 18:13:51.498347 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 14 18:13:51.513748 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 14 18:13:51.513748 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 14 18:13:51.513748 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 14 18:13:51.513748 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 14 18:13:51.513748 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 14 18:13:51.513748 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
May 14 18:13:51.799104 systemd-networkd[797]: eth0: Gained IPv6LL
May 14 18:13:51.855887 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 14 18:13:52.146519 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 14 18:13:52.149281 ignition[981]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 14 18:13:52.149281 ignition[981]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 14 18:13:52.157108 ignition[981]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 14 18:13:52.157108 ignition[981]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 14 18:13:52.157108 ignition[981]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
May 14 18:13:52.162098 ignition[981]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 14 18:13:52.162098 ignition[981]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 14 18:13:52.162098 ignition[981]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
May 14 18:13:52.162098 ignition[981]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
May 14 18:13:52.180823 ignition[981]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 14 18:13:52.185449 ignition[981]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 14 18:13:52.188127 ignition[981]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
May 14 18:13:52.188127 ignition[981]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
May 14 18:13:52.188127 ignition[981]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
May 14 18:13:52.188127 ignition[981]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
May 14 18:13:52.188127 ignition[981]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 14 18:13:52.188127 ignition[981]: INFO : files: files passed
May 14 18:13:52.188127 ignition[981]: INFO : Ignition finished successfully
May 14 18:13:52.188757 systemd[1]: Finished ignition-files.service - Ignition (files).
May 14 18:13:52.192028 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 14 18:13:52.195202 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 14 18:13:52.208514 systemd[1]: ignition-quench.service: Deactivated successfully.
May 14 18:13:52.208628 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 14 18:13:52.212315 initrd-setup-root-after-ignition[1009]: grep: /sysroot/oem/oem-release: No such file or directory
May 14 18:13:52.214062 initrd-setup-root-after-ignition[1011]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 14 18:13:52.214062 initrd-setup-root-after-ignition[1011]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 14 18:13:52.218449 initrd-setup-root-after-ignition[1016]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 14 18:13:52.214909 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 14 18:13:52.217574 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 14 18:13:52.220482 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 14 18:13:52.266092 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 14 18:13:52.266237 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 14 18:13:52.268531 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 14 18:13:52.270441 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 14 18:13:52.272278 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 14 18:13:52.273236 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 14 18:13:52.302029 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 14 18:13:52.305316 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 14 18:13:52.330963 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 14 18:13:52.332351 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 18:13:52.334398 systemd[1]: Stopped target timers.target - Timer Units.
May 14 18:13:52.335999 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 14 18:13:52.336157 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 14 18:13:52.338906 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 14 18:13:52.341057 systemd[1]: Stopped target basic.target - Basic System.
May 14 18:13:52.342826 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 14 18:13:52.344515 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 14 18:13:52.346377 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 14 18:13:52.348106 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 14 18:13:52.349828 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 14 18:13:52.351754 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 14 18:13:52.353897 systemd[1]: Stopped target sysinit.target - System Initialization.
May 14 18:13:52.355990 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 14 18:13:52.357733 systemd[1]: Stopped target swap.target - Swaps.
May 14 18:13:52.359227 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 14 18:13:52.359372 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 14 18:13:52.361483 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 14 18:13:52.363293 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 18:13:52.365141 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 14 18:13:52.365390 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 18:13:52.367347 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 14 18:13:52.367490 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 14 18:13:52.370244 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 14 18:13:52.370387 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 14 18:13:52.372197 systemd[1]: Stopped target paths.target - Path Units.
May 14 18:13:52.373619 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 14 18:13:52.373755 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 18:13:52.375907 systemd[1]: Stopped target slices.target - Slice Units.
May 14 18:13:52.377780 systemd[1]: Stopped target sockets.target - Socket Units.
May 14 18:13:52.379297 systemd[1]: iscsid.socket: Deactivated successfully.
May 14 18:13:52.379405 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 14 18:13:52.380906 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 14 18:13:52.381034 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 14 18:13:52.383217 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 14 18:13:52.383352 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 14 18:13:52.385067 systemd[1]: ignition-files.service: Deactivated successfully.
May 14 18:13:52.385179 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 14 18:13:52.387698 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 14 18:13:52.388879 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 14 18:13:52.389049 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 18:13:52.391796 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 14 18:13:52.393926 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 14 18:13:52.394087 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 18:13:52.396330 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 14 18:13:52.396446 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 14 18:13:52.401819 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 14 18:13:52.402115 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 14 18:13:52.411464 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 14 18:13:52.418657 ignition[1038]: INFO : Ignition 2.21.0
May 14 18:13:52.418657 ignition[1038]: INFO : Stage: umount
May 14 18:13:52.421411 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 18:13:52.421411 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 18:13:52.421411 ignition[1038]: INFO : umount: umount passed
May 14 18:13:52.421411 ignition[1038]: INFO : Ignition finished successfully
May 14 18:13:52.422343 systemd[1]: ignition-mount.service: Deactivated successfully.
May 14 18:13:52.423985 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 14 18:13:52.429132 systemd[1]: Stopped target network.target - Network.
May 14 18:13:52.430486 systemd[1]: ignition-disks.service: Deactivated successfully.
May 14 18:13:52.430561 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 14 18:13:52.432329 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 14 18:13:52.432383 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 14 18:13:52.434192 systemd[1]: ignition-setup.service: Deactivated successfully.
May 14 18:13:52.434250 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 14 18:13:52.435527 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 14 18:13:52.435572 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 14 18:13:52.437245 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 14 18:13:52.438909 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 14 18:13:52.441103 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 14 18:13:52.441207 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 14 18:13:52.442982 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 14 18:13:52.443074 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 14 18:13:52.449040 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 14 18:13:52.449151 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 14 18:13:52.455226 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 14 18:13:52.455483 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 14 18:13:52.455584 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 14 18:13:52.460342 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 14 18:13:52.461054 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 14 18:13:52.462822 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 14 18:13:52.462864 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 14 18:13:52.466017 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 14 18:13:52.467136 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 14 18:13:52.467230 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 14 18:13:52.469433 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 14 18:13:52.469486 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 14 18:13:52.472241 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 14 18:13:52.472287 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 14 18:13:52.474374 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 14 18:13:52.474424 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 18:13:52.477637 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 18:13:52.479567 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 14 18:13:52.479629 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 14 18:13:52.494987 systemd[1]: network-cleanup.service: Deactivated successfully.
May 14 18:13:52.495130 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 14 18:13:52.497218 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 14 18:13:52.497366 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 18:13:52.501262 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 14 18:13:52.501332 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 14 18:13:52.503345 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 14 18:13:52.503381 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 18:13:52.505131 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 14 18:13:52.505186 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 14 18:13:52.507797 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 14 18:13:52.507856 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 14 18:13:52.510441 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 14 18:13:52.510501 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 18:13:52.514086 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 14 18:13:52.515169 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 14 18:13:52.515232 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 14 18:13:52.518479 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 14 18:13:52.518525 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 18:13:52.521969 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 18:13:52.522020 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 18:13:52.526337 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
May 14 18:13:52.526388 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 14 18:13:52.526421 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 14 18:13:52.535835 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 14 18:13:52.535991 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 14 18:13:52.538346 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 14 18:13:52.541222 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 14 18:13:52.567412 systemd[1]: Switching root.
May 14 18:13:52.610903 systemd-journald[244]: Journal stopped
May 14 18:13:53.378853 systemd-journald[244]: Received SIGTERM from PID 1 (systemd).
May 14 18:13:53.378906 kernel: SELinux: policy capability network_peer_controls=1
May 14 18:13:53.378929 kernel: SELinux: policy capability open_perms=1
May 14 18:13:53.378945 kernel: SELinux: policy capability extended_socket_class=1
May 14 18:13:53.378987 kernel: SELinux: policy capability always_check_network=0
May 14 18:13:53.378998 kernel: SELinux: policy capability cgroup_seclabel=1
May 14 18:13:53.379013 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 14 18:13:53.379024 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 14 18:13:53.379035 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 14 18:13:53.379045 kernel: SELinux: policy capability userspace_initial_context=0
May 14 18:13:53.379054 kernel: audit: type=1403 audit(1747246432.751:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 14 18:13:53.379066 systemd[1]: Successfully loaded SELinux policy in 31.439ms.
May 14 18:13:53.379087 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.339ms.
May 14 18:13:53.379099 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 14 18:13:53.379112 systemd[1]: Detected virtualization kvm.
May 14 18:13:53.379122 systemd[1]: Detected architecture arm64.
May 14 18:13:53.379136 systemd[1]: Detected first boot.
May 14 18:13:53.379147 systemd[1]: Initializing machine ID from VM UUID.
May 14 18:13:53.379158 zram_generator::config[1084]: No configuration found.
May 14 18:13:53.379169 kernel: NET: Registered PF_VSOCK protocol family
May 14 18:13:53.379179 systemd[1]: Populated /etc with preset unit settings.
May 14 18:13:53.379190 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 14 18:13:53.379201 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 14 18:13:53.379213 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 14 18:13:53.379223 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 14 18:13:53.379234 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 14 18:13:53.379244 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 14 18:13:53.379254 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 14 18:13:53.379265 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 14 18:13:53.379275 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 14 18:13:53.379286 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 14 18:13:53.379298 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 14 18:13:53.379308 systemd[1]: Created slice user.slice - User and Session Slice.
May 14 18:13:53.379318 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 18:13:53.379329 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 18:13:53.379339 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 14 18:13:53.379352 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 14 18:13:53.379363 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 14 18:13:53.379375 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 14 18:13:53.379386 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 14 18:13:53.379398 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 18:13:53.379408 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 14 18:13:53.379418 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 14 18:13:53.379429 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 14 18:13:53.379439 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 14 18:13:53.379450 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 14 18:13:53.379460 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 18:13:53.379470 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 14 18:13:53.379483 systemd[1]: Reached target slices.target - Slice Units.
May 14 18:13:53.379494 systemd[1]: Reached target swap.target - Swaps.
May 14 18:13:53.379504 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 14 18:13:53.379514 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 14 18:13:53.379525 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 14 18:13:53.379535 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 14 18:13:53.379545 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 14 18:13:53.379556 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 18:13:53.379566 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 14 18:13:53.379582 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 14 18:13:53.379597 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 14 18:13:53.379607 systemd[1]: Mounting media.mount - External Media Directory...
May 14 18:13:53.379618 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 14 18:13:53.379629 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 14 18:13:53.379640 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 14 18:13:53.379651 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 14 18:13:53.379662 systemd[1]: Reached target machines.target - Containers.
May 14 18:13:53.379673 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 14 18:13:53.379684 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 18:13:53.379698 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 14 18:13:53.379710 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 14 18:13:53.379721 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 18:13:53.379731 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 14 18:13:53.379741 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 18:13:53.379751 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 14 18:13:53.379763 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 18:13:53.379775 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 14 18:13:53.379785 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 14 18:13:53.379796 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 14 18:13:53.379806 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 14 18:13:53.379818 systemd[1]: Stopped systemd-fsck-usr.service.
May 14 18:13:53.379833 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 18:13:53.379843 systemd[1]: Starting systemd-journald.service - Journal Service...
May 14 18:13:53.379853 kernel: fuse: init (API version 7.41)
May 14 18:13:53.379865 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 14 18:13:53.379875 kernel: loop: module loaded
May 14 18:13:53.379885 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 14 18:13:53.379896 kernel: ACPI: bus type drm_connector registered
May 14 18:13:53.379905 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 14 18:13:53.379920 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 14 18:13:53.379932 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 14 18:13:53.379944 systemd[1]: verity-setup.service: Deactivated successfully.
May 14 18:13:53.379964 systemd[1]: Stopped verity-setup.service.
May 14 18:13:53.379976 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 14 18:13:53.379986 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 14 18:13:53.379996 systemd[1]: Mounted media.mount - External Media Directory.
May 14 18:13:53.380007 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 14 18:13:53.380017 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 14 18:13:53.380030 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 14 18:13:53.380042 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 14 18:13:53.380053 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 18:13:53.380092 systemd-journald[1156]: Collecting audit messages is disabled.
May 14 18:13:53.380115 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 14 18:13:53.380128 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 14 18:13:53.380139 systemd-journald[1156]: Journal started
May 14 18:13:53.380161 systemd-journald[1156]: Runtime Journal (/run/log/journal/3e4d824e241b407c88084edd3ca9f30a) is 6M, max 48.5M, 42.4M free.
May 14 18:13:53.154224 systemd[1]: Queued start job for default target multi-user.target.
May 14 18:13:53.174976 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 14 18:13:53.175365 systemd[1]: systemd-journald.service: Deactivated successfully.
May 14 18:13:53.383504 systemd[1]: Started systemd-journald.service - Journal Service.
May 14 18:13:53.384489 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 18:13:53.384718 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 18:13:53.386281 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 14 18:13:53.386460 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 14 18:13:53.389349 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 18:13:53.389534 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 18:13:53.391174 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 14 18:13:53.391339 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 14 18:13:53.392765 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 18:13:53.392962 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 18:13:53.394390 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 14 18:13:53.396048 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 14 18:13:53.397633 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 14 18:13:53.399417 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 14 18:13:53.412974 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 18:13:53.416400 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 14 18:13:53.418899 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 14 18:13:53.421185 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 14 18:13:53.422465 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 14 18:13:53.422508 systemd[1]: Reached target local-fs.target - Local File Systems.
May 14 18:13:53.424555 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 14 18:13:53.433102 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 14 18:13:53.435035 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 18:13:53.436701 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 14 18:13:53.439020 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 14 18:13:53.440223 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 14 18:13:53.444102 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 14 18:13:53.445264 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 14 18:13:53.446364 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 18:13:53.446646 systemd-journald[1156]: Time spent on flushing to /var/log/journal/3e4d824e241b407c88084edd3ca9f30a is 26.399ms for 885 entries.
May 14 18:13:53.446646 systemd-journald[1156]: System Journal (/var/log/journal/3e4d824e241b407c88084edd3ca9f30a) is 8M, max 195.6M, 187.6M free.
May 14 18:13:53.478393 systemd-journald[1156]: Received client request to flush runtime journal.
May 14 18:13:53.478441 kernel: loop0: detected capacity change from 0 to 189592
May 14 18:13:53.450228 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 14 18:13:53.453774 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 14 18:13:53.456252 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 14 18:13:53.457283 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 14 18:13:53.480287 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 14 18:13:53.482406 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 14 18:13:53.484626 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 14 18:13:53.490247 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 14 18:13:53.498283 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 14 18:13:53.504359 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 18:13:53.505979 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 14 18:13:53.508994 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 14 18:13:53.517996 kernel: loop1: detected capacity change from 0 to 138376
May 14 18:13:53.526430 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 14 18:13:53.543032 systemd-tmpfiles[1216]: ACLs are not supported, ignoring.
May 14 18:13:53.543047 systemd-tmpfiles[1216]: ACLs are not supported, ignoring.
May 14 18:13:53.547679 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 18:13:53.552989 kernel: loop2: detected capacity change from 0 to 107312
May 14 18:13:53.574565 kernel: loop3: detected capacity change from 0 to 189592
May 14 18:13:53.581010 kernel: loop4: detected capacity change from 0 to 138376
May 14 18:13:53.588068 kernel: loop5: detected capacity change from 0 to 107312
May 14 18:13:53.591405 (sd-merge)[1223]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 14 18:13:53.592031 (sd-merge)[1223]: Merged extensions into '/usr'.
May 14 18:13:53.595504 systemd[1]: Reload requested from client PID 1201 ('systemd-sysext') (unit systemd-sysext.service)...
May 14 18:13:53.595525 systemd[1]: Reloading...
May 14 18:13:53.658060 zram_generator::config[1249]: No configuration found.
May 14 18:13:53.741963 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 18:13:53.743658 ldconfig[1196]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 14 18:13:53.805155 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 14 18:13:53.805516 systemd[1]: Reloading finished in 209 ms.
May 14 18:13:53.830664 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 14 18:13:53.832528 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 14 18:13:53.851813 systemd[1]: Starting ensure-sysext.service...
May 14 18:13:53.854079 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 14 18:13:53.881257 systemd[1]: Reload requested from client PID 1283 ('systemctl') (unit ensure-sysext.service)...
May 14 18:13:53.881273 systemd[1]: Reloading...
May 14 18:13:53.888134 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 14 18:13:53.888171 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 14 18:13:53.888416 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 14 18:13:53.888612 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 14 18:13:53.889290 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 14 18:13:53.889502 systemd-tmpfiles[1284]: ACLs are not supported, ignoring.
May 14 18:13:53.889552 systemd-tmpfiles[1284]: ACLs are not supported, ignoring.
May 14 18:13:53.892363 systemd-tmpfiles[1284]: Detected autofs mount point /boot during canonicalization of boot.
May 14 18:13:53.892376 systemd-tmpfiles[1284]: Skipping /boot
May 14 18:13:53.902475 systemd-tmpfiles[1284]: Detected autofs mount point /boot during canonicalization of boot.
May 14 18:13:53.902493 systemd-tmpfiles[1284]: Skipping /boot
May 14 18:13:53.937968 zram_generator::config[1314]: No configuration found.
May 14 18:13:54.010211 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 18:13:54.074296 systemd[1]: Reloading finished in 192 ms.
May 14 18:13:54.094849 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 14 18:13:54.101889 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 18:13:54.109273 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 14 18:13:54.112272 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 14 18:13:54.123978 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 14 18:13:54.127841 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 18:13:54.130804 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 18:13:54.140111 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 14 18:13:54.149890 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 14 18:13:54.154651 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 14 18:13:54.157681 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 18:13:54.166556 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 18:13:54.170272 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 18:13:54.177358 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 18:13:54.181249 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 18:13:54.181470 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 18:13:54.186138 systemd-udevd[1353]: Using default interface naming scheme 'v255'.
May 14 18:13:54.188458 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 14 18:13:54.193583 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 14 18:13:54.196033 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 18:13:54.196296 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 18:13:54.198575 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 18:13:54.198764 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 18:13:54.200988 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 18:13:54.201168 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 18:13:54.203272 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 14 18:13:54.212271 augenrules[1382]: No rules
May 14 18:13:54.214169 systemd[1]: audit-rules.service: Deactivated successfully.
May 14 18:13:54.214511 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 14 18:13:54.216220 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 14 18:13:54.219028 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 18:13:54.221122 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 18:13:54.226359 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 18:13:54.230367 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 18:13:54.231585 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 18:13:54.231726 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 18:13:54.231825 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 14 18:13:54.235508 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 18:13:54.241193 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 14 18:13:54.243837 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 18:13:54.244033 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 18:13:54.248003 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 18:13:54.248678 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 18:13:54.263664 systemd[1]: Finished ensure-sysext.service.
May 14 18:13:54.270397 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 14 18:13:54.271486 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 18:13:54.275138 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 18:13:54.278160 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 14 18:13:54.281886 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 18:13:54.284197 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 18:13:54.284250 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 18:13:54.290729 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 14 18:13:54.297206 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 14 18:13:54.298479 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 14 18:13:54.299258 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 18:13:54.300024 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 18:13:54.302483 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 18:13:54.303890 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 18:13:54.305168 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 14 18:13:54.305361 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 14 18:13:54.306882 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 18:13:54.308266 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 18:13:54.320751 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
May 14 18:13:54.325851 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 14 18:13:54.325933 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 14 18:13:54.339550 augenrules[1426]: /sbin/augenrules: No change May 14 18:13:54.349586 augenrules[1462]: No rules May 14 18:13:54.352258 systemd[1]: audit-rules.service: Deactivated successfully. May 14 18:13:54.352471 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 14 18:13:54.365683 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 14 18:13:54.369323 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 14 18:13:54.398228 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 14 18:13:54.422937 systemd-resolved[1351]: Positive Trust Anchors: May 14 18:13:54.423279 systemd-resolved[1351]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 14 18:13:54.423372 systemd-resolved[1351]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 14 18:13:54.436548 systemd-resolved[1351]: Defaulting to hostname 'linux'. May 14 18:13:54.438808 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 14 18:13:54.440201 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 14 18:13:54.442921 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 14 18:13:54.444353 systemd[1]: Reached target sysinit.target - System Initialization. 
May 14 18:13:54.445621 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 14 18:13:54.447019 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 14 18:13:54.448673 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 14 18:13:54.450399 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 14 18:13:54.450438 systemd[1]: Reached target paths.target - Path Units. May 14 18:13:54.451452 systemd[1]: Reached target time-set.target - System Time Set. May 14 18:13:54.452511 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 14 18:13:54.452702 systemd-networkd[1432]: lo: Link UP May 14 18:13:54.452706 systemd-networkd[1432]: lo: Gained carrier May 14 18:13:54.453808 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 14 18:13:54.455126 systemd[1]: Reached target timers.target - Timer Units. May 14 18:13:54.455340 systemd-networkd[1432]: Enumeration completed May 14 18:13:54.457498 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 14 18:13:54.459355 systemd-networkd[1432]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 18:13:54.459365 systemd-networkd[1432]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 18:13:54.459928 systemd-networkd[1432]: eth0: Link UP May 14 18:13:54.460080 systemd-networkd[1432]: eth0: Gained carrier May 14 18:13:54.460101 systemd-networkd[1432]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 18:13:54.460125 systemd[1]: Starting docker.socket - Docker Socket for the API... 
May 14 18:13:54.463937 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 14 18:13:54.465127 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 14 18:13:54.466467 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 14 18:13:54.470041 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 14 18:13:54.471610 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 14 18:13:54.473595 systemd[1]: Started systemd-networkd.service - Network Configuration. May 14 18:13:54.474887 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 14 18:13:54.476007 systemd-networkd[1432]: eth0: DHCPv4 address 10.0.0.119/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 14 18:13:54.476167 systemd[1]: Reached target network.target - Network. May 14 18:13:54.476547 systemd-timesyncd[1433]: Network configuration changed, trying to establish connection. May 14 18:13:54.477093 systemd[1]: Reached target sockets.target - Socket Units. May 14 18:13:54.477174 systemd-timesyncd[1433]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 14 18:13:54.477219 systemd-timesyncd[1433]: Initial clock synchronization to Wed 2025-05-14 18:13:54.596719 UTC. May 14 18:13:54.478109 systemd[1]: Reached target basic.target - Basic System. May 14 18:13:54.478859 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 14 18:13:54.478890 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 14 18:13:54.481159 systemd[1]: Starting containerd.service - containerd container runtime... May 14 18:13:54.486189 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 14 18:13:54.493673 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
May 14 18:13:54.496054 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 14 18:13:54.512351 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 14 18:13:54.513554 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 14 18:13:54.521617 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 14 18:13:54.524907 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 14 18:13:54.527362 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 14 18:13:54.529789 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 14 18:13:54.538685 systemd[1]: Starting systemd-logind.service - User Login Management... May 14 18:13:54.541042 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 14 18:13:54.544229 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 14 18:13:54.544419 jq[1490]: false May 14 18:13:54.546387 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 14 18:13:54.546946 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 14 18:13:54.549445 systemd[1]: Starting update-engine.service - Update Engine... May 14 18:13:54.552345 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 14 18:13:54.556049 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 14 18:13:54.557619 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
May 14 18:13:54.557824 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 14 18:13:54.558500 extend-filesystems[1491]: Found loop3 May 14 18:13:54.559434 extend-filesystems[1491]: Found loop4 May 14 18:13:54.559434 extend-filesystems[1491]: Found loop5 May 14 18:13:54.559434 extend-filesystems[1491]: Found vda May 14 18:13:54.559434 extend-filesystems[1491]: Found vda1 May 14 18:13:54.559434 extend-filesystems[1491]: Found vda2 May 14 18:13:54.559434 extend-filesystems[1491]: Found vda3 May 14 18:13:54.559434 extend-filesystems[1491]: Found usr May 14 18:13:54.559434 extend-filesystems[1491]: Found vda4 May 14 18:13:54.559434 extend-filesystems[1491]: Found vda6 May 14 18:13:54.559434 extend-filesystems[1491]: Found vda7 May 14 18:13:54.559434 extend-filesystems[1491]: Found vda9 May 14 18:13:54.559434 extend-filesystems[1491]: Checking size of /dev/vda9 May 14 18:13:54.561191 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 14 18:13:54.561378 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 14 18:13:54.576644 jq[1505]: true May 14 18:13:54.583879 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 18:13:54.585623 systemd[1]: motdgen.service: Deactivated successfully. May 14 18:13:54.587062 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 14 18:13:54.618237 extend-filesystems[1491]: Resized partition /dev/vda9 May 14 18:13:54.625072 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
May 14 18:13:54.629165 jq[1521]: true May 14 18:13:54.629351 (ntainerd)[1523]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 14 18:13:54.636102 tar[1510]: linux-arm64/helm May 14 18:13:54.642634 extend-filesystems[1531]: resize2fs 1.47.2 (1-Jan-2025) May 14 18:13:54.646968 dbus-daemon[1486]: [system] SELinux support is enabled May 14 18:13:54.648666 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 14 18:13:54.652045 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 14 18:13:54.666896 systemd-logind[1500]: Watching system buttons on /dev/input/event0 (Power Button) May 14 18:13:54.670863 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 14 18:13:54.670919 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 14 18:13:54.671922 systemd-logind[1500]: New seat seat0. May 14 18:13:54.672381 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 14 18:13:54.672410 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 14 18:13:54.678399 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 14 18:13:54.675806 systemd[1]: Started systemd-logind.service - User Login Management. May 14 18:13:54.683274 update_engine[1504]: I20250514 18:13:54.682736 1504 main.cc:92] Flatcar Update Engine starting May 14 18:13:54.695552 systemd[1]: Started update-engine.service - Update Engine. 
May 14 18:13:54.698967 extend-filesystems[1531]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 14 18:13:54.698967 extend-filesystems[1531]: old_desc_blocks = 1, new_desc_blocks = 1 May 14 18:13:54.698967 extend-filesystems[1531]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 14 18:13:54.708078 extend-filesystems[1491]: Resized filesystem in /dev/vda9 May 14 18:13:54.700897 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 14 18:13:54.709121 update_engine[1504]: I20250514 18:13:54.698571 1504 update_check_scheduler.cc:74] Next update check in 11m35s May 14 18:13:54.708697 systemd[1]: extend-filesystems.service: Deactivated successfully. May 14 18:13:54.708915 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 14 18:13:54.716238 bash[1553]: Updated "/home/core/.ssh/authorized_keys" May 14 18:13:54.733708 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 18:13:54.735826 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 14 18:13:54.746410 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
May 14 18:13:54.781731 locksmithd[1552]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 14 18:13:54.959412 containerd[1523]: time="2025-05-14T18:13:54Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 14 18:13:54.963482 containerd[1523]: time="2025-05-14T18:13:54.963433200Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 14 18:13:54.974080 containerd[1523]: time="2025-05-14T18:13:54.974016640Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.08µs" May 14 18:13:54.974080 containerd[1523]: time="2025-05-14T18:13:54.974070360Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 14 18:13:54.974232 containerd[1523]: time="2025-05-14T18:13:54.974094120Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 14 18:13:54.974312 containerd[1523]: time="2025-05-14T18:13:54.974286040Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 14 18:13:54.974312 containerd[1523]: time="2025-05-14T18:13:54.974310360Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 14 18:13:54.974356 containerd[1523]: time="2025-05-14T18:13:54.974339080Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 14 18:13:54.974418 containerd[1523]: time="2025-05-14T18:13:54.974390520Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 14 18:13:54.974418 containerd[1523]: time="2025-05-14T18:13:54.974401920Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 14 18:13:54.974679 containerd[1523]: time="2025-05-14T18:13:54.974653200Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 14 18:13:54.974679 containerd[1523]: time="2025-05-14T18:13:54.974677680Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 14 18:13:54.974723 containerd[1523]: time="2025-05-14T18:13:54.974690640Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 14 18:13:54.974723 containerd[1523]: time="2025-05-14T18:13:54.974698840Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 14 18:13:54.974810 containerd[1523]: time="2025-05-14T18:13:54.974784160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 14 18:13:54.975067 containerd[1523]: time="2025-05-14T18:13:54.975045240Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 14 18:13:54.975108 containerd[1523]: time="2025-05-14T18:13:54.975083080Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 14 18:13:54.975108 containerd[1523]: time="2025-05-14T18:13:54.975094000Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 14 18:13:54.975155 containerd[1523]: time="2025-05-14T18:13:54.975137400Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 14 18:13:54.975365 containerd[1523]: time="2025-05-14T18:13:54.975346120Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 14 18:13:54.975435 containerd[1523]: time="2025-05-14T18:13:54.975419520Z" level=info msg="metadata content store policy set" policy=shared May 14 18:13:54.979233 containerd[1523]: time="2025-05-14T18:13:54.979192120Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 14 18:13:54.979320 containerd[1523]: time="2025-05-14T18:13:54.979257600Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 14 18:13:54.979320 containerd[1523]: time="2025-05-14T18:13:54.979273320Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 14 18:13:54.979320 containerd[1523]: time="2025-05-14T18:13:54.979287840Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 14 18:13:54.979320 containerd[1523]: time="2025-05-14T18:13:54.979302360Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 14 18:13:54.979320 containerd[1523]: time="2025-05-14T18:13:54.979316040Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 14 18:13:54.979532 containerd[1523]: time="2025-05-14T18:13:54.979328480Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 14 18:13:54.979532 containerd[1523]: time="2025-05-14T18:13:54.979341440Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 14 18:13:54.979532 containerd[1523]: time="2025-05-14T18:13:54.979353680Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 14 18:13:54.979532 containerd[1523]: time="2025-05-14T18:13:54.979365320Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 14 18:13:54.979532 containerd[1523]: time="2025-05-14T18:13:54.979375520Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 14 18:13:54.979532 containerd[1523]: time="2025-05-14T18:13:54.979389320Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 14 18:13:54.979532 containerd[1523]: time="2025-05-14T18:13:54.979527440Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 14 18:13:54.979652 containerd[1523]: time="2025-05-14T18:13:54.979548840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 14 18:13:54.979652 containerd[1523]: time="2025-05-14T18:13:54.979564360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 14 18:13:54.979652 containerd[1523]: time="2025-05-14T18:13:54.979575800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 14 18:13:54.979652 containerd[1523]: time="2025-05-14T18:13:54.979599840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 14 18:13:54.979652 containerd[1523]: time="2025-05-14T18:13:54.979611080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 14 18:13:54.979652 containerd[1523]: time="2025-05-14T18:13:54.979622280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 14 18:13:54.979652 containerd[1523]: time="2025-05-14T18:13:54.979632760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 14 
18:13:54.979652 containerd[1523]: time="2025-05-14T18:13:54.979650640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 14 18:13:54.979796 containerd[1523]: time="2025-05-14T18:13:54.979661840Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 14 18:13:54.979796 containerd[1523]: time="2025-05-14T18:13:54.979673520Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 14 18:13:54.980096 containerd[1523]: time="2025-05-14T18:13:54.980076040Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 14 18:13:54.980151 containerd[1523]: time="2025-05-14T18:13:54.980101280Z" level=info msg="Start snapshots syncer" May 14 18:13:54.980151 containerd[1523]: time="2025-05-14T18:13:54.980134800Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 14 18:13:54.980509 containerd[1523]: time="2025-05-14T18:13:54.980371760Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 14 18:13:54.980509 containerd[1523]: time="2025-05-14T18:13:54.980444280Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 14 18:13:54.980620 containerd[1523]: time="2025-05-14T18:13:54.980519040Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 14 18:13:54.980717 containerd[1523]: time="2025-05-14T18:13:54.980634600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 14 18:13:54.980717 containerd[1523]: time="2025-05-14T18:13:54.980666600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 14 18:13:54.980717 containerd[1523]: time="2025-05-14T18:13:54.980702320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 14 18:13:54.980717 containerd[1523]: time="2025-05-14T18:13:54.980714840Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 14 18:13:54.980856 containerd[1523]: time="2025-05-14T18:13:54.980727560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 14 18:13:54.980856 containerd[1523]: time="2025-05-14T18:13:54.980739040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 14 18:13:54.980856 containerd[1523]: time="2025-05-14T18:13:54.980749960Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 14 18:13:54.980856 containerd[1523]: time="2025-05-14T18:13:54.980777080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 14 18:13:54.980856 containerd[1523]: time="2025-05-14T18:13:54.980790920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 14 18:13:54.980856 containerd[1523]: time="2025-05-14T18:13:54.980802280Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 14 18:13:54.980856 containerd[1523]: time="2025-05-14T18:13:54.980838240Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 18:13:54.980856 containerd[1523]: time="2025-05-14T18:13:54.980852640Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 18:13:54.981021 containerd[1523]: time="2025-05-14T18:13:54.980861160Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 18:13:54.981021 containerd[1523]: time="2025-05-14T18:13:54.980870680Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 18:13:54.981021 containerd[1523]: time="2025-05-14T18:13:54.980877840Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 14 18:13:54.981021 containerd[1523]: time="2025-05-14T18:13:54.980886480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 14 18:13:54.981021 containerd[1523]: time="2025-05-14T18:13:54.980896960Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 14 18:13:54.981213 containerd[1523]: time="2025-05-14T18:13:54.981139560Z" level=info msg="runtime interface created" May 14 18:13:54.981213 containerd[1523]: time="2025-05-14T18:13:54.981153280Z" level=info msg="created NRI interface" May 14 18:13:54.981213 containerd[1523]: time="2025-05-14T18:13:54.981164400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 14 18:13:54.981213 containerd[1523]: time="2025-05-14T18:13:54.981180160Z" level=info msg="Connect containerd service" May 14 18:13:54.981213 containerd[1523]: time="2025-05-14T18:13:54.981210360Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 14 18:13:54.983015 
containerd[1523]: time="2025-05-14T18:13:54.982967360Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 18:13:55.038485 tar[1510]: linux-arm64/LICENSE May 14 18:13:55.038485 tar[1510]: linux-arm64/README.md May 14 18:13:55.052828 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 14 18:13:55.125082 containerd[1523]: time="2025-05-14T18:13:55.125039794Z" level=info msg="Start subscribing containerd event" May 14 18:13:55.126967 containerd[1523]: time="2025-05-14T18:13:55.126014386Z" level=info msg="Start recovering state" May 14 18:13:55.126967 containerd[1523]: time="2025-05-14T18:13:55.126126417Z" level=info msg="Start event monitor" May 14 18:13:55.126967 containerd[1523]: time="2025-05-14T18:13:55.126142134Z" level=info msg="Start cni network conf syncer for default" May 14 18:13:55.126967 containerd[1523]: time="2025-05-14T18:13:55.126152128Z" level=info msg="Start streaming server" May 14 18:13:55.126967 containerd[1523]: time="2025-05-14T18:13:55.126161155Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 14 18:13:55.126967 containerd[1523]: time="2025-05-14T18:13:55.126171552Z" level=info msg="runtime interface starting up..." May 14 18:13:55.126967 containerd[1523]: time="2025-05-14T18:13:55.126177073Z" level=info msg="starting plugins..." May 14 18:13:55.126967 containerd[1523]: time="2025-05-14T18:13:55.126191339Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 14 18:13:55.126967 containerd[1523]: time="2025-05-14T18:13:55.125703600Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 14 18:13:55.126967 containerd[1523]: time="2025-05-14T18:13:55.126341816Z" level=info msg=serving... 
address=/run/containerd/containerd.sock May 14 18:13:55.126967 containerd[1523]: time="2025-05-14T18:13:55.126404924Z" level=info msg="containerd successfully booted in 0.167515s" May 14 18:13:55.126541 systemd[1]: Started containerd.service - containerd container runtime. May 14 18:13:55.460689 sshd_keygen[1529]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 14 18:13:55.480934 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 14 18:13:55.483781 systemd[1]: Starting issuegen.service - Generate /run/issue... May 14 18:13:55.505809 systemd[1]: issuegen.service: Deactivated successfully. May 14 18:13:55.506060 systemd[1]: Finished issuegen.service - Generate /run/issue. May 14 18:13:55.508884 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 14 18:13:55.532205 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 14 18:13:55.535265 systemd[1]: Started getty@tty1.service - Getty on tty1. May 14 18:13:55.537664 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 14 18:13:55.539251 systemd[1]: Reached target getty.target - Login Prompts. May 14 18:13:56.279444 systemd-networkd[1432]: eth0: Gained IPv6LL May 14 18:13:56.283062 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 14 18:13:56.284830 systemd[1]: Reached target network-online.target - Network is Online. May 14 18:13:56.287510 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 14 18:13:56.290090 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:13:56.307555 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 14 18:13:56.329201 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 14 18:13:56.331289 systemd[1]: coreos-metadata.service: Deactivated successfully. May 14 18:13:56.331487 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
May 14 18:13:56.335023 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 14 18:13:56.833342 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:13:56.835159 systemd[1]: Reached target multi-user.target - Multi-User System. May 14 18:13:56.837394 (kubelet)[1626]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 18:13:56.840053 systemd[1]: Startup finished in 2.137s (kernel) + 5.150s (initrd) + 4.119s (userspace) = 11.408s. May 14 18:13:57.361231 kubelet[1626]: E0514 18:13:57.361174 1626 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 18:13:57.363889 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 18:13:57.364151 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 18:13:57.368189 systemd[1]: kubelet.service: Consumed 790ms CPU time, 232.6M memory peak. May 14 18:14:00.724124 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 14 18:14:00.725621 systemd[1]: Started sshd@0-10.0.0.119:22-10.0.0.1:33992.service - OpenSSH per-connection server daemon (10.0.0.1:33992). May 14 18:14:00.810778 sshd[1641]: Accepted publickey for core from 10.0.0.1 port 33992 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo May 14 18:14:00.814384 sshd-session[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:14:00.821913 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 14 18:14:00.823067 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
May 14 18:14:00.829774 systemd-logind[1500]: New session 1 of user core. May 14 18:14:00.849634 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 14 18:14:00.853295 systemd[1]: Starting user@500.service - User Manager for UID 500... May 14 18:14:00.871253 (systemd)[1645]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 14 18:14:00.873489 systemd-logind[1500]: New session c1 of user core. May 14 18:14:01.008223 systemd[1645]: Queued start job for default target default.target. May 14 18:14:01.020012 systemd[1645]: Created slice app.slice - User Application Slice. May 14 18:14:01.020040 systemd[1645]: Reached target paths.target - Paths. May 14 18:14:01.020083 systemd[1645]: Reached target timers.target - Timers. May 14 18:14:01.021396 systemd[1645]: Starting dbus.socket - D-Bus User Message Bus Socket... May 14 18:14:01.031109 systemd[1645]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 14 18:14:01.031179 systemd[1645]: Reached target sockets.target - Sockets. May 14 18:14:01.031220 systemd[1645]: Reached target basic.target - Basic System. May 14 18:14:01.031264 systemd[1645]: Reached target default.target - Main User Target. May 14 18:14:01.031291 systemd[1645]: Startup finished in 151ms. May 14 18:14:01.031604 systemd[1]: Started user@500.service - User Manager for UID 500. May 14 18:14:01.033283 systemd[1]: Started session-1.scope - Session 1 of User core. May 14 18:14:01.093737 systemd[1]: Started sshd@1-10.0.0.119:22-10.0.0.1:33996.service - OpenSSH per-connection server daemon (10.0.0.1:33996). May 14 18:14:01.153436 sshd[1656]: Accepted publickey for core from 10.0.0.1 port 33996 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo May 14 18:14:01.154546 sshd-session[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:14:01.158740 systemd-logind[1500]: New session 2 of user core. 
May 14 18:14:01.167109 systemd[1]: Started session-2.scope - Session 2 of User core. May 14 18:14:01.218591 sshd[1658]: Connection closed by 10.0.0.1 port 33996 May 14 18:14:01.218456 sshd-session[1656]: pam_unix(sshd:session): session closed for user core May 14 18:14:01.226137 systemd[1]: sshd@1-10.0.0.119:22-10.0.0.1:33996.service: Deactivated successfully. May 14 18:14:01.227669 systemd[1]: session-2.scope: Deactivated successfully. May 14 18:14:01.228405 systemd-logind[1500]: Session 2 logged out. Waiting for processes to exit. May 14 18:14:01.230851 systemd[1]: Started sshd@2-10.0.0.119:22-10.0.0.1:34010.service - OpenSSH per-connection server daemon (10.0.0.1:34010). May 14 18:14:01.231436 systemd-logind[1500]: Removed session 2. May 14 18:14:01.280819 sshd[1664]: Accepted publickey for core from 10.0.0.1 port 34010 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo May 14 18:14:01.282230 sshd-session[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:14:01.287102 systemd-logind[1500]: New session 3 of user core. May 14 18:14:01.299108 systemd[1]: Started session-3.scope - Session 3 of User core. May 14 18:14:01.347318 sshd[1666]: Connection closed by 10.0.0.1 port 34010 May 14 18:14:01.347743 sshd-session[1664]: pam_unix(sshd:session): session closed for user core May 14 18:14:01.357861 systemd[1]: sshd@2-10.0.0.119:22-10.0.0.1:34010.service: Deactivated successfully. May 14 18:14:01.359262 systemd[1]: session-3.scope: Deactivated successfully. May 14 18:14:01.360603 systemd-logind[1500]: Session 3 logged out. Waiting for processes to exit. May 14 18:14:01.363029 systemd[1]: Started sshd@3-10.0.0.119:22-10.0.0.1:34020.service - OpenSSH per-connection server daemon (10.0.0.1:34020). May 14 18:14:01.364388 systemd-logind[1500]: Removed session 3. 
May 14 18:14:01.406085 sshd[1672]: Accepted publickey for core from 10.0.0.1 port 34020 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo May 14 18:14:01.407128 sshd-session[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:14:01.410656 systemd-logind[1500]: New session 4 of user core. May 14 18:14:01.425128 systemd[1]: Started session-4.scope - Session 4 of User core. May 14 18:14:01.475321 sshd[1674]: Connection closed by 10.0.0.1 port 34020 May 14 18:14:01.475612 sshd-session[1672]: pam_unix(sshd:session): session closed for user core May 14 18:14:01.484834 systemd[1]: sshd@3-10.0.0.119:22-10.0.0.1:34020.service: Deactivated successfully. May 14 18:14:01.486223 systemd[1]: session-4.scope: Deactivated successfully. May 14 18:14:01.486848 systemd-logind[1500]: Session 4 logged out. Waiting for processes to exit. May 14 18:14:01.488896 systemd[1]: Started sshd@4-10.0.0.119:22-10.0.0.1:34022.service - OpenSSH per-connection server daemon (10.0.0.1:34022). May 14 18:14:01.490182 systemd-logind[1500]: Removed session 4. May 14 18:14:01.537887 sshd[1680]: Accepted publickey for core from 10.0.0.1 port 34022 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo May 14 18:14:01.539363 sshd-session[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:14:01.543213 systemd-logind[1500]: New session 5 of user core. May 14 18:14:01.556153 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 14 18:14:01.614574 sudo[1683]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 14 18:14:01.614830 sudo[1683]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 18:14:01.629586 sudo[1683]: pam_unix(sudo:session): session closed for user root May 14 18:14:01.630869 sshd[1682]: Connection closed by 10.0.0.1 port 34022 May 14 18:14:01.631289 sshd-session[1680]: pam_unix(sshd:session): session closed for user core May 14 18:14:01.644873 systemd[1]: sshd@4-10.0.0.119:22-10.0.0.1:34022.service: Deactivated successfully. May 14 18:14:01.646158 systemd[1]: session-5.scope: Deactivated successfully. May 14 18:14:01.648388 systemd-logind[1500]: Session 5 logged out. Waiting for processes to exit. May 14 18:14:01.649593 systemd[1]: Started sshd@5-10.0.0.119:22-10.0.0.1:34030.service - OpenSSH per-connection server daemon (10.0.0.1:34030). May 14 18:14:01.650452 systemd-logind[1500]: Removed session 5. May 14 18:14:01.709190 sshd[1689]: Accepted publickey for core from 10.0.0.1 port 34030 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo May 14 18:14:01.710342 sshd-session[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:14:01.714799 systemd-logind[1500]: New session 6 of user core. May 14 18:14:01.726133 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 14 18:14:01.776126 sudo[1693]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 14 18:14:01.776389 sudo[1693]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 18:14:01.780624 sudo[1693]: pam_unix(sudo:session): session closed for user root May 14 18:14:01.784867 sudo[1692]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 14 18:14:01.785173 sudo[1692]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 18:14:01.793702 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 14 18:14:01.825385 augenrules[1715]: No rules May 14 18:14:01.826492 systemd[1]: audit-rules.service: Deactivated successfully. May 14 18:14:01.827997 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 14 18:14:01.829806 sudo[1692]: pam_unix(sudo:session): session closed for user root May 14 18:14:01.831010 sshd[1691]: Connection closed by 10.0.0.1 port 34030 May 14 18:14:01.831376 sshd-session[1689]: pam_unix(sshd:session): session closed for user core May 14 18:14:01.842847 systemd[1]: sshd@5-10.0.0.119:22-10.0.0.1:34030.service: Deactivated successfully. May 14 18:14:01.846121 systemd[1]: session-6.scope: Deactivated successfully. May 14 18:14:01.846679 systemd-logind[1500]: Session 6 logged out. Waiting for processes to exit. May 14 18:14:01.848924 systemd[1]: Started sshd@6-10.0.0.119:22-10.0.0.1:34042.service - OpenSSH per-connection server daemon (10.0.0.1:34042). May 14 18:14:01.849363 systemd-logind[1500]: Removed session 6. May 14 18:14:01.909932 sshd[1724]: Accepted publickey for core from 10.0.0.1 port 34042 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo May 14 18:14:01.910864 sshd-session[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:14:01.915117 systemd-logind[1500]: New session 7 of user core. 
May 14 18:14:01.926107 systemd[1]: Started session-7.scope - Session 7 of User core. May 14 18:14:01.976063 sudo[1727]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 14 18:14:01.976324 sudo[1727]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 18:14:02.338854 systemd[1]: Starting docker.service - Docker Application Container Engine... May 14 18:14:02.357228 (dockerd)[1748]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 14 18:14:02.634667 dockerd[1748]: time="2025-05-14T18:14:02.634532835Z" level=info msg="Starting up" May 14 18:14:02.635430 dockerd[1748]: time="2025-05-14T18:14:02.635407658Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 14 18:14:02.747165 dockerd[1748]: time="2025-05-14T18:14:02.747109320Z" level=info msg="Loading containers: start." May 14 18:14:02.765165 kernel: Initializing XFRM netlink socket May 14 18:14:02.971417 systemd-networkd[1432]: docker0: Link UP May 14 18:14:02.976890 dockerd[1748]: time="2025-05-14T18:14:02.976766742Z" level=info msg="Loading containers: done." May 14 18:14:02.988053 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4245660038-merged.mount: Deactivated successfully. 
May 14 18:14:02.993899 dockerd[1748]: time="2025-05-14T18:14:02.993478811Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 14 18:14:02.993899 dockerd[1748]: time="2025-05-14T18:14:02.993585757Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 14 18:14:02.993899 dockerd[1748]: time="2025-05-14T18:14:02.993704680Z" level=info msg="Initializing buildkit" May 14 18:14:03.021653 dockerd[1748]: time="2025-05-14T18:14:03.021597953Z" level=info msg="Completed buildkit initialization" May 14 18:14:03.027942 dockerd[1748]: time="2025-05-14T18:14:03.027887005Z" level=info msg="Daemon has completed initialization" May 14 18:14:03.028279 systemd[1]: Started docker.service - Docker Application Container Engine. May 14 18:14:03.028405 dockerd[1748]: time="2025-05-14T18:14:03.028207870Z" level=info msg="API listen on /run/docker.sock" May 14 18:14:03.728431 containerd[1523]: time="2025-05-14T18:14:03.728263491Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 14 18:14:04.713402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2832948300.mount: Deactivated successfully. 
May 14 18:14:05.791332 containerd[1523]: time="2025-05-14T18:14:05.791155476Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:05.791991 containerd[1523]: time="2025-05-14T18:14:05.791954276Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=25554610" May 14 18:14:05.792961 containerd[1523]: time="2025-05-14T18:14:05.792921734Z" level=info msg="ImageCreate event name:\"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:05.795860 containerd[1523]: time="2025-05-14T18:14:05.795812986Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:05.797131 containerd[1523]: time="2025-05-14T18:14:05.797001589Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"25551408\" in 2.068700375s" May 14 18:14:05.797131 containerd[1523]: time="2025-05-14T18:14:05.797034397Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\"" May 14 18:14:05.797718 containerd[1523]: time="2025-05-14T18:14:05.797529571Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 14 18:14:07.013403 containerd[1523]: time="2025-05-14T18:14:07.013350839Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:07.015684 containerd[1523]: time="2025-05-14T18:14:07.015654430Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=22458980" May 14 18:14:07.016294 containerd[1523]: time="2025-05-14T18:14:07.016246343Z" level=info msg="ImageCreate event name:\"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:07.019228 containerd[1523]: time="2025-05-14T18:14:07.019190574Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:07.020785 containerd[1523]: time="2025-05-14T18:14:07.020707704Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"23900539\" in 1.223149103s" May 14 18:14:07.020785 containerd[1523]: time="2025-05-14T18:14:07.020742504Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\"" May 14 18:14:07.021325 containerd[1523]: time="2025-05-14T18:14:07.021273487Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" May 14 18:14:07.614490 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 14 18:14:07.616158 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:14:07.745454 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 14 18:14:07.748516 (kubelet)[2024]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 18:14:07.784991 kubelet[2024]: E0514 18:14:07.784218 2024 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 18:14:07.786749 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 18:14:07.786869 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 18:14:07.789055 systemd[1]: kubelet.service: Consumed 128ms CPU time, 94.8M memory peak. May 14 18:14:08.212069 containerd[1523]: time="2025-05-14T18:14:08.211795381Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:08.212390 containerd[1523]: time="2025-05-14T18:14:08.212248882Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=17125815" May 14 18:14:08.213074 containerd[1523]: time="2025-05-14T18:14:08.213032405Z" level=info msg="ImageCreate event name:\"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:08.216129 containerd[1523]: time="2025-05-14T18:14:08.216073599Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:08.217163 containerd[1523]: time="2025-05-14T18:14:08.217114029Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id 
\"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"18567392\" in 1.195812407s" May 14 18:14:08.217163 containerd[1523]: time="2025-05-14T18:14:08.217151831Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\"" May 14 18:14:08.217742 containerd[1523]: time="2025-05-14T18:14:08.217561470Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 14 18:14:09.163628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount977782854.mount: Deactivated successfully. May 14 18:14:09.379186 containerd[1523]: time="2025-05-14T18:14:09.379141828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:09.379837 containerd[1523]: time="2025-05-14T18:14:09.379753073Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=26871919" May 14 18:14:09.380892 containerd[1523]: time="2025-05-14T18:14:09.380855041Z" level=info msg="ImageCreate event name:\"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:09.382463 containerd[1523]: time="2025-05-14T18:14:09.382395051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:09.382991 containerd[1523]: time="2025-05-14T18:14:09.382925694Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\", repo tag 
\"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"26870936\" in 1.165332363s" May 14 18:14:09.382991 containerd[1523]: time="2025-05-14T18:14:09.382972555Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\"" May 14 18:14:09.383609 containerd[1523]: time="2025-05-14T18:14:09.383426406Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 14 18:14:10.000446 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2536489222.mount: Deactivated successfully. May 14 18:14:10.722674 containerd[1523]: time="2025-05-14T18:14:10.722605004Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:10.723500 containerd[1523]: time="2025-05-14T18:14:10.723466242Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" May 14 18:14:10.724501 containerd[1523]: time="2025-05-14T18:14:10.724469443Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:10.727390 containerd[1523]: time="2025-05-14T18:14:10.727359185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:10.729002 containerd[1523]: time="2025-05-14T18:14:10.728888515Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.34543495s" May 14 18:14:10.729002 containerd[1523]: time="2025-05-14T18:14:10.728917959Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 14 18:14:10.729354 containerd[1523]: time="2025-05-14T18:14:10.729305576Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 14 18:14:11.255051 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1747464826.mount: Deactivated successfully. May 14 18:14:11.261353 containerd[1523]: time="2025-05-14T18:14:11.261293024Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 18:14:11.262255 containerd[1523]: time="2025-05-14T18:14:11.262211061Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" May 14 18:14:11.263660 containerd[1523]: time="2025-05-14T18:14:11.263607368Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 18:14:11.266126 containerd[1523]: time="2025-05-14T18:14:11.266046284Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 18:14:11.266901 containerd[1523]: time="2025-05-14T18:14:11.266767799Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 537.432861ms" May 14 18:14:11.266901 containerd[1523]: time="2025-05-14T18:14:11.266800005Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 14 18:14:11.267455 containerd[1523]: time="2025-05-14T18:14:11.267197661Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 14 18:14:11.854949 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4005277576.mount: Deactivated successfully. May 14 18:14:13.293177 containerd[1523]: time="2025-05-14T18:14:13.293126277Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:13.294169 containerd[1523]: time="2025-05-14T18:14:13.293873982Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467" May 14 18:14:13.294923 containerd[1523]: time="2025-05-14T18:14:13.294892600Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:13.297591 containerd[1523]: time="2025-05-14T18:14:13.297563114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:13.298746 containerd[1523]: time="2025-05-14T18:14:13.298693432Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size 
\"66535646\" in 2.031463528s" May 14 18:14:13.298746 containerd[1523]: time="2025-05-14T18:14:13.298726389Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" May 14 18:14:17.377573 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:14:17.377795 systemd[1]: kubelet.service: Consumed 128ms CPU time, 94.8M memory peak. May 14 18:14:17.379737 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:14:17.397097 systemd[1]: Reload requested from client PID 2175 ('systemctl') (unit session-7.scope)... May 14 18:14:17.397111 systemd[1]: Reloading... May 14 18:14:17.467250 zram_generator::config[2220]: No configuration found. May 14 18:14:17.560035 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 18:14:17.645033 systemd[1]: Reloading finished in 247 ms. May 14 18:14:17.693149 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:14:17.695316 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:14:17.697446 systemd[1]: kubelet.service: Deactivated successfully. May 14 18:14:17.698998 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:14:17.699039 systemd[1]: kubelet.service: Consumed 78ms CPU time, 82.4M memory peak. May 14 18:14:17.700432 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:14:17.830947 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 14 18:14:17.834457 (kubelet)[2266]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 18:14:17.866152 kubelet[2266]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 18:14:17.866152 kubelet[2266]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 18:14:17.866152 kubelet[2266]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 18:14:17.866474 kubelet[2266]: I0514 18:14:17.866278 2266 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 18:14:18.560388 kubelet[2266]: I0514 18:14:18.560340 2266 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 14 18:14:18.560388 kubelet[2266]: I0514 18:14:18.560374 2266 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 18:14:18.560640 kubelet[2266]: I0514 18:14:18.560611 2266 server.go:929] "Client rotation is on, will bootstrap in background" May 14 18:14:18.597435 kubelet[2266]: E0514 18:14:18.597384 2266 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.119:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" May 14 18:14:18.598423 kubelet[2266]: I0514 
18:14:18.598388 2266 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 18:14:18.607185 kubelet[2266]: I0514 18:14:18.607158 2266 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 14 18:14:18.611037 kubelet[2266]: I0514 18:14:18.611008 2266 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 14 18:14:18.611864 kubelet[2266]: I0514 18:14:18.611829 2266 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 14 18:14:18.612032 kubelet[2266]: I0514 18:14:18.612001 2266 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 18:14:18.612206 kubelet[2266]: I0514 18:14:18.612028 2266 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":
{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 14 18:14:18.612391 kubelet[2266]: I0514 18:14:18.612377 2266 topology_manager.go:138] "Creating topology manager with none policy" May 14 18:14:18.612391 kubelet[2266]: I0514 18:14:18.612389 2266 container_manager_linux.go:300] "Creating device plugin manager" May 14 18:14:18.612626 kubelet[2266]: I0514 18:14:18.612603 2266 state_mem.go:36] "Initialized new in-memory state store" May 14 18:14:18.617133 kubelet[2266]: I0514 18:14:18.617103 2266 kubelet.go:408] "Attempting to sync node with API server" May 14 18:14:18.617133 kubelet[2266]: I0514 18:14:18.617129 2266 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 18:14:18.617363 kubelet[2266]: I0514 18:14:18.617344 2266 kubelet.go:314] "Adding apiserver pod source" May 14 18:14:18.617363 kubelet[2266]: I0514 18:14:18.617358 2266 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 18:14:18.619926 kubelet[2266]: I0514 18:14:18.619894 2266 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 14 18:14:18.624039 kubelet[2266]: W0514 18:14:18.623986 2266 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.119:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused May 14 18:14:18.624074 kubelet[2266]: E0514 18:14:18.624047 2266 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.119:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" May 14 18:14:18.624361 kubelet[2266]: W0514 18:14:18.624271 2266 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused May 14 18:14:18.624361 kubelet[2266]: E0514 18:14:18.624310 2266 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" May 14 18:14:18.626172 kubelet[2266]: I0514 18:14:18.626074 2266 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 18:14:18.626946 kubelet[2266]: W0514 18:14:18.626929 2266 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 14 18:14:18.628168 kubelet[2266]: I0514 18:14:18.628135 2266 server.go:1269] "Started kubelet" May 14 18:14:18.629513 kubelet[2266]: I0514 18:14:18.629190 2266 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 18:14:18.631822 kubelet[2266]: I0514 18:14:18.631089 2266 server.go:460] "Adding debug handlers to kubelet server" May 14 18:14:18.631822 kubelet[2266]: I0514 18:14:18.631403 2266 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 18:14:18.632609 kubelet[2266]: I0514 18:14:18.632559 2266 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 18:14:18.632900 kubelet[2266]: I0514 18:14:18.632876 2266 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 18:14:18.632977 kubelet[2266]: I0514 18:14:18.632586 2266 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 18:14:18.634005 kubelet[2266]: E0514 18:14:18.633848 2266 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 18:14:18.634306 kubelet[2266]: I0514 18:14:18.634286 2266 volume_manager.go:289] "Starting Kubelet Volume Manager" May 14 18:14:18.634470 kubelet[2266]: I0514 18:14:18.634458 2266 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 14 18:14:18.634529 kubelet[2266]: W0514 18:14:18.634298 2266 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused May 14 18:14:18.634602 kubelet[2266]: E0514 18:14:18.634587 2266 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.0.0.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" May 14 18:14:18.634661 kubelet[2266]: I0514 18:14:18.634505 2266 factory.go:221] Registration of the systemd container factory successfully May 14 18:14:18.634961 kubelet[2266]: E0514 18:14:18.634910 2266 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.119:6443: connect: connection refused" interval="200ms" May 14 18:14:18.635120 kubelet[2266]: E0514 18:14:18.633287 2266 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.119:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.119:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f776595966dd9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-14 18:14:18.628107737 +0000 UTC m=+0.790983977,LastTimestamp:2025-05-14 18:14:18.628107737 +0000 UTC m=+0.790983977,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 14 18:14:18.635120 kubelet[2266]: I0514 18:14:18.634755 2266 reconciler.go:26] "Reconciler: start to sync state" May 14 18:14:18.635316 kubelet[2266]: I0514 18:14:18.635298 2266 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 18:14:18.635555 kubelet[2266]: E0514 18:14:18.635539 2266 kubelet.go:1478] "Image garbage collection failed 
once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 18:14:18.637170 kubelet[2266]: I0514 18:14:18.637149 2266 factory.go:221] Registration of the containerd container factory successfully May 14 18:14:18.646369 kubelet[2266]: I0514 18:14:18.646317 2266 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 18:14:18.647596 kubelet[2266]: I0514 18:14:18.647557 2266 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 14 18:14:18.647596 kubelet[2266]: I0514 18:14:18.647589 2266 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 18:14:18.647689 kubelet[2266]: I0514 18:14:18.647611 2266 kubelet.go:2321] "Starting kubelet main sync loop" May 14 18:14:18.647689 kubelet[2266]: E0514 18:14:18.647655 2266 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 18:14:18.650326 kubelet[2266]: I0514 18:14:18.650308 2266 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 18:14:18.650630 kubelet[2266]: I0514 18:14:18.650420 2266 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 18:14:18.650630 kubelet[2266]: I0514 18:14:18.650440 2266 state_mem.go:36] "Initialized new in-memory state store" May 14 18:14:18.651448 kubelet[2266]: W0514 18:14:18.651389 2266 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused May 14 18:14:18.651636 kubelet[2266]: E0514 18:14:18.651452 2266 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 
10.0.0.119:6443: connect: connection refused" logger="UnhandledError" May 14 18:14:18.652704 kubelet[2266]: I0514 18:14:18.652633 2266 policy_none.go:49] "None policy: Start" May 14 18:14:18.653402 kubelet[2266]: I0514 18:14:18.653367 2266 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 18:14:18.653402 kubelet[2266]: I0514 18:14:18.653401 2266 state_mem.go:35] "Initializing new in-memory state store" May 14 18:14:18.661989 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 14 18:14:18.676371 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 14 18:14:18.679845 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 14 18:14:18.689877 kubelet[2266]: I0514 18:14:18.689849 2266 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 18:14:18.690558 kubelet[2266]: I0514 18:14:18.690500 2266 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 18:14:18.690712 kubelet[2266]: I0514 18:14:18.690519 2266 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 18:14:18.691174 kubelet[2266]: I0514 18:14:18.691010 2266 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 18:14:18.692209 kubelet[2266]: E0514 18:14:18.692177 2266 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 14 18:14:18.756271 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice - libcontainer container kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice. May 14 18:14:18.780712 systemd[1]: Created slice kubepods-burstable-pode1d58980a67f6aea7c91ccf516c2800f.slice - libcontainer container kubepods-burstable-pode1d58980a67f6aea7c91ccf516c2800f.slice. 
May 14 18:14:18.792743 kubelet[2266]: I0514 18:14:18.792683 2266 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 14 18:14:18.793205 kubelet[2266]: E0514 18:14:18.793181 2266 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.119:6443/api/v1/nodes\": dial tcp 10.0.0.119:6443: connect: connection refused" node="localhost" May 14 18:14:18.798651 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice - libcontainer container kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice. May 14 18:14:18.835833 kubelet[2266]: I0514 18:14:18.835691 2266 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 18:14:18.836090 kubelet[2266]: I0514 18:14:18.835912 2266 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 18:14:18.836090 kubelet[2266]: I0514 18:14:18.835937 2266 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e1d58980a67f6aea7c91ccf516c2800f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e1d58980a67f6aea7c91ccf516c2800f\") " pod="kube-system/kube-apiserver-localhost" May 14 18:14:18.836090 kubelet[2266]: I0514 18:14:18.835989 2266 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/e1d58980a67f6aea7c91ccf516c2800f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e1d58980a67f6aea7c91ccf516c2800f\") " pod="kube-system/kube-apiserver-localhost" May 14 18:14:18.836090 kubelet[2266]: E0514 18:14:18.835985 2266 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.119:6443: connect: connection refused" interval="400ms" May 14 18:14:18.836090 kubelet[2266]: I0514 18:14:18.836007 2266 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 18:14:18.836222 kubelet[2266]: I0514 18:14:18.836046 2266 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 18:14:18.836222 kubelet[2266]: I0514 18:14:18.836077 2266 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 14 18:14:18.836222 kubelet[2266]: I0514 18:14:18.836093 2266 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e1d58980a67f6aea7c91ccf516c2800f-ca-certs\") 
pod \"kube-apiserver-localhost\" (UID: \"e1d58980a67f6aea7c91ccf516c2800f\") " pod="kube-system/kube-apiserver-localhost" May 14 18:14:18.836222 kubelet[2266]: I0514 18:14:18.836116 2266 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 18:14:18.995047 kubelet[2266]: I0514 18:14:18.995005 2266 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 14 18:14:18.995723 kubelet[2266]: E0514 18:14:18.995696 2266 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.119:6443/api/v1/nodes\": dial tcp 10.0.0.119:6443: connect: connection refused" node="localhost" May 14 18:14:19.079414 containerd[1523]: time="2025-05-14T18:14:19.079152761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}" May 14 18:14:19.098487 containerd[1523]: time="2025-05-14T18:14:19.098178255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e1d58980a67f6aea7c91ccf516c2800f,Namespace:kube-system,Attempt:0,}" May 14 18:14:19.101706 containerd[1523]: time="2025-05-14T18:14:19.101655129Z" level=info msg="connecting to shim 29643b43df029f772f177262b717ffacecc6a094558554cde7773d065905d306" address="unix:///run/containerd/s/1e0f32eeb99b293c6b0479b8d44eb1eae512d06e633ff3d842783a4bfcca0f44" namespace=k8s.io protocol=ttrpc version=3 May 14 18:14:19.102020 containerd[1523]: time="2025-05-14T18:14:19.102000397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}" May 14 18:14:19.130198 systemd[1]: Started 
cri-containerd-29643b43df029f772f177262b717ffacecc6a094558554cde7773d065905d306.scope - libcontainer container 29643b43df029f772f177262b717ffacecc6a094558554cde7773d065905d306. May 14 18:14:19.134999 containerd[1523]: time="2025-05-14T18:14:19.134145818Z" level=info msg="connecting to shim aafc5094e810aee28a40eff2e09c526afadd53aee44bcb238b76edb6b6b251a2" address="unix:///run/containerd/s/1a3d9290a14344789aeb136ec66a89465e230074253ba8a6dc2691b2df09b7c5" namespace=k8s.io protocol=ttrpc version=3 May 14 18:14:19.135128 containerd[1523]: time="2025-05-14T18:14:19.134995686Z" level=info msg="connecting to shim a1d99a006ab1708ade79f054a333e393429ec023125661c867facb91b16dd200" address="unix:///run/containerd/s/6cff88799675cd605e8d2b6d8569f8b8decefb7e0cbc7880243fd7ad2330fd79" namespace=k8s.io protocol=ttrpc version=3 May 14 18:14:19.165179 systemd[1]: Started cri-containerd-a1d99a006ab1708ade79f054a333e393429ec023125661c867facb91b16dd200.scope - libcontainer container a1d99a006ab1708ade79f054a333e393429ec023125661c867facb91b16dd200. May 14 18:14:19.168642 systemd[1]: Started cri-containerd-aafc5094e810aee28a40eff2e09c526afadd53aee44bcb238b76edb6b6b251a2.scope - libcontainer container aafc5094e810aee28a40eff2e09c526afadd53aee44bcb238b76edb6b6b251a2. 
May 14 18:14:19.190507 containerd[1523]: time="2025-05-14T18:14:19.190460092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"29643b43df029f772f177262b717ffacecc6a094558554cde7773d065905d306\"" May 14 18:14:19.197584 containerd[1523]: time="2025-05-14T18:14:19.197538037Z" level=info msg="CreateContainer within sandbox \"29643b43df029f772f177262b717ffacecc6a094558554cde7773d065905d306\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 14 18:14:19.224126 containerd[1523]: time="2025-05-14T18:14:19.224075525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1d99a006ab1708ade79f054a333e393429ec023125661c867facb91b16dd200\"" May 14 18:14:19.226442 containerd[1523]: time="2025-05-14T18:14:19.226414715Z" level=info msg="CreateContainer within sandbox \"a1d99a006ab1708ade79f054a333e393429ec023125661c867facb91b16dd200\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 14 18:14:19.237077 kubelet[2266]: E0514 18:14:19.237041 2266 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.119:6443: connect: connection refused" interval="800ms" May 14 18:14:19.250145 containerd[1523]: time="2025-05-14T18:14:19.250114222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e1d58980a67f6aea7c91ccf516c2800f,Namespace:kube-system,Attempt:0,} returns sandbox id \"aafc5094e810aee28a40eff2e09c526afadd53aee44bcb238b76edb6b6b251a2\"" May 14 18:14:19.252133 containerd[1523]: time="2025-05-14T18:14:19.252041198Z" level=info msg="CreateContainer within sandbox 
\"aafc5094e810aee28a40eff2e09c526afadd53aee44bcb238b76edb6b6b251a2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 14 18:14:19.302331 containerd[1523]: time="2025-05-14T18:14:19.302289651Z" level=info msg="Container 866f2e6ad5fbd178c778f0a1124db710a0c1c200483ca1c71e25278432f006ba: CDI devices from CRI Config.CDIDevices: []" May 14 18:14:19.305016 containerd[1523]: time="2025-05-14T18:14:19.304988572Z" level=info msg="Container c9dff21ce69a7df9346f9ba49154389b0582ff6627872723a2b0511eb3f3d6f2: CDI devices from CRI Config.CDIDevices: []" May 14 18:14:19.309113 containerd[1523]: time="2025-05-14T18:14:19.308982947Z" level=info msg="Container 181b456d44d13a4fb8501c276fc60f2e297c6f7146183f96b35e6ac496315c22: CDI devices from CRI Config.CDIDevices: []" May 14 18:14:19.312122 containerd[1523]: time="2025-05-14T18:14:19.312077855Z" level=info msg="CreateContainer within sandbox \"29643b43df029f772f177262b717ffacecc6a094558554cde7773d065905d306\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"866f2e6ad5fbd178c778f0a1124db710a0c1c200483ca1c71e25278432f006ba\"" May 14 18:14:19.314044 containerd[1523]: time="2025-05-14T18:14:19.313894576Z" level=info msg="StartContainer for \"866f2e6ad5fbd178c778f0a1124db710a0c1c200483ca1c71e25278432f006ba\"" May 14 18:14:19.315307 containerd[1523]: time="2025-05-14T18:14:19.315268796Z" level=info msg="CreateContainer within sandbox \"a1d99a006ab1708ade79f054a333e393429ec023125661c867facb91b16dd200\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c9dff21ce69a7df9346f9ba49154389b0582ff6627872723a2b0511eb3f3d6f2\"" May 14 18:14:19.315725 containerd[1523]: time="2025-05-14T18:14:19.315670553Z" level=info msg="StartContainer for \"c9dff21ce69a7df9346f9ba49154389b0582ff6627872723a2b0511eb3f3d6f2\"" May 14 18:14:19.317002 containerd[1523]: time="2025-05-14T18:14:19.316970655Z" level=info msg="connecting to shim 
866f2e6ad5fbd178c778f0a1124db710a0c1c200483ca1c71e25278432f006ba" address="unix:///run/containerd/s/1e0f32eeb99b293c6b0479b8d44eb1eae512d06e633ff3d842783a4bfcca0f44" protocol=ttrpc version=3 May 14 18:14:19.318560 containerd[1523]: time="2025-05-14T18:14:19.318522957Z" level=info msg="CreateContainer within sandbox \"aafc5094e810aee28a40eff2e09c526afadd53aee44bcb238b76edb6b6b251a2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"181b456d44d13a4fb8501c276fc60f2e297c6f7146183f96b35e6ac496315c22\"" May 14 18:14:19.319042 containerd[1523]: time="2025-05-14T18:14:19.319017381Z" level=info msg="StartContainer for \"181b456d44d13a4fb8501c276fc60f2e297c6f7146183f96b35e6ac496315c22\"" May 14 18:14:19.319361 containerd[1523]: time="2025-05-14T18:14:19.319335926Z" level=info msg="connecting to shim c9dff21ce69a7df9346f9ba49154389b0582ff6627872723a2b0511eb3f3d6f2" address="unix:///run/containerd/s/6cff88799675cd605e8d2b6d8569f8b8decefb7e0cbc7880243fd7ad2330fd79" protocol=ttrpc version=3 May 14 18:14:19.320244 containerd[1523]: time="2025-05-14T18:14:19.320187958Z" level=info msg="connecting to shim 181b456d44d13a4fb8501c276fc60f2e297c6f7146183f96b35e6ac496315c22" address="unix:///run/containerd/s/1a3d9290a14344789aeb136ec66a89465e230074253ba8a6dc2691b2df09b7c5" protocol=ttrpc version=3 May 14 18:14:19.342164 systemd[1]: Started cri-containerd-c9dff21ce69a7df9346f9ba49154389b0582ff6627872723a2b0511eb3f3d6f2.scope - libcontainer container c9dff21ce69a7df9346f9ba49154389b0582ff6627872723a2b0511eb3f3d6f2. May 14 18:14:19.347368 systemd[1]: Started cri-containerd-181b456d44d13a4fb8501c276fc60f2e297c6f7146183f96b35e6ac496315c22.scope - libcontainer container 181b456d44d13a4fb8501c276fc60f2e297c6f7146183f96b35e6ac496315c22. May 14 18:14:19.348775 systemd[1]: Started cri-containerd-866f2e6ad5fbd178c778f0a1124db710a0c1c200483ca1c71e25278432f006ba.scope - libcontainer container 866f2e6ad5fbd178c778f0a1124db710a0c1c200483ca1c71e25278432f006ba. 
May 14 18:14:19.401650 kubelet[2266]: I0514 18:14:19.401314 2266 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 14 18:14:19.402036 kubelet[2266]: E0514 18:14:19.402011 2266 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.119:6443/api/v1/nodes\": dial tcp 10.0.0.119:6443: connect: connection refused" node="localhost" May 14 18:14:19.402413 containerd[1523]: time="2025-05-14T18:14:19.402317694Z" level=info msg="StartContainer for \"866f2e6ad5fbd178c778f0a1124db710a0c1c200483ca1c71e25278432f006ba\" returns successfully" May 14 18:14:19.411565 containerd[1523]: time="2025-05-14T18:14:19.404750232Z" level=info msg="StartContainer for \"c9dff21ce69a7df9346f9ba49154389b0582ff6627872723a2b0511eb3f3d6f2\" returns successfully" May 14 18:14:19.427108 containerd[1523]: time="2025-05-14T18:14:19.420446526Z" level=info msg="StartContainer for \"181b456d44d13a4fb8501c276fc60f2e297c6f7146183f96b35e6ac496315c22\" returns successfully" May 14 18:14:19.469246 kubelet[2266]: W0514 18:14:19.464188 2266 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused May 14 18:14:19.469246 kubelet[2266]: E0514 18:14:19.464255 2266 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" May 14 18:14:19.556266 kubelet[2266]: W0514 18:14:19.556153 2266 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: 
connect: connection refused May 14 18:14:19.556266 kubelet[2266]: E0514 18:14:19.556226 2266 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" May 14 18:14:19.567026 kubelet[2266]: W0514 18:14:19.566930 2266 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.119:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused May 14 18:14:19.567141 kubelet[2266]: E0514 18:14:19.567005 2266 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.119:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" May 14 18:14:19.678980 kubelet[2266]: W0514 18:14:19.678852 2266 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused May 14 18:14:19.678980 kubelet[2266]: E0514 18:14:19.678935 2266 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" May 14 18:14:20.203829 kubelet[2266]: I0514 18:14:20.203542 2266 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 14 18:14:21.712912 
kubelet[2266]: E0514 18:14:21.712877 2266 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 14 18:14:21.775779 kubelet[2266]: I0514 18:14:21.775613 2266 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 14 18:14:21.775779 kubelet[2266]: E0514 18:14:21.775653 2266 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 14 18:14:21.785003 kubelet[2266]: E0514 18:14:21.784962 2266 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 18:14:21.885448 kubelet[2266]: E0514 18:14:21.885400 2266 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 18:14:21.986194 kubelet[2266]: E0514 18:14:21.986078 2266 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 18:14:22.086741 kubelet[2266]: E0514 18:14:22.086700 2266 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 18:14:22.186822 kubelet[2266]: E0514 18:14:22.186777 2266 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 18:14:22.287474 kubelet[2266]: E0514 18:14:22.287365 2266 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 18:14:22.387763 kubelet[2266]: E0514 18:14:22.387720 2266 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 18:14:22.621204 kubelet[2266]: I0514 18:14:22.621151 2266 apiserver.go:52] "Watching apiserver" May 14 18:14:22.635166 kubelet[2266]: I0514 18:14:22.635121 2266 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 
14 18:14:23.795865 systemd[1]: Reload requested from client PID 2541 ('systemctl') (unit session-7.scope)... May 14 18:14:23.795883 systemd[1]: Reloading... May 14 18:14:23.855086 zram_generator::config[2587]: No configuration found. May 14 18:14:23.921857 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 18:14:24.029607 systemd[1]: Reloading finished in 233 ms. May 14 18:14:24.055567 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:14:24.082510 systemd[1]: kubelet.service: Deactivated successfully. May 14 18:14:24.083501 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:14:24.083589 systemd[1]: kubelet.service: Consumed 997ms CPU time, 115.6M memory peak. May 14 18:14:24.085619 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:14:24.207206 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:14:24.221226 (kubelet)[2626]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 18:14:24.254792 kubelet[2626]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 18:14:24.254792 kubelet[2626]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 18:14:24.254792 kubelet[2626]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 18:14:24.255163 kubelet[2626]: I0514 18:14:24.254844 2626 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 18:14:24.262108 kubelet[2626]: I0514 18:14:24.262061 2626 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 14 18:14:24.262108 kubelet[2626]: I0514 18:14:24.262092 2626 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 18:14:24.262346 kubelet[2626]: I0514 18:14:24.262330 2626 server.go:929] "Client rotation is on, will bootstrap in background" May 14 18:14:24.263772 kubelet[2626]: I0514 18:14:24.263744 2626 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 14 18:14:24.265742 kubelet[2626]: I0514 18:14:24.265691 2626 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 18:14:24.269560 kubelet[2626]: I0514 18:14:24.269525 2626 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 14 18:14:24.271924 kubelet[2626]: I0514 18:14:24.271886 2626 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 14 18:14:24.272039 kubelet[2626]: I0514 18:14:24.272015 2626 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 14 18:14:24.272150 kubelet[2626]: I0514 18:14:24.272108 2626 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 18:14:24.272298 kubelet[2626]: I0514 18:14:24.272135 2626 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} May 14 18:14:24.272375 kubelet[2626]: I0514 18:14:24.272300 2626 topology_manager.go:138] "Creating topology manager with none policy" May 14 18:14:24.272375 kubelet[2626]: I0514 18:14:24.272309 2626 container_manager_linux.go:300] "Creating device plugin manager" May 14 18:14:24.272375 kubelet[2626]: I0514 18:14:24.272337 2626 state_mem.go:36] "Initialized new in-memory state store" May 14 18:14:24.272455 kubelet[2626]: I0514 18:14:24.272442 2626 kubelet.go:408] "Attempting to sync node with API server" May 14 18:14:24.272489 kubelet[2626]: I0514 18:14:24.272458 2626 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 18:14:24.272489 kubelet[2626]: I0514 18:14:24.272485 2626 kubelet.go:314] "Adding apiserver pod source" May 14 18:14:24.272528 kubelet[2626]: I0514 18:14:24.272495 2626 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 18:14:24.273600 kubelet[2626]: I0514 18:14:24.273573 2626 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 14 18:14:24.275528 kubelet[2626]: I0514 18:14:24.274479 2626 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 18:14:24.275528 kubelet[2626]: I0514 18:14:24.274985 2626 server.go:1269] "Started kubelet" May 14 18:14:24.275528 kubelet[2626]: I0514 18:14:24.275045 2626 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 18:14:24.276016 kubelet[2626]: I0514 18:14:24.275938 2626 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 18:14:24.278378 kubelet[2626]: I0514 18:14:24.278353 2626 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 18:14:24.278518 kubelet[2626]: I0514 18:14:24.277175 2626 server.go:460] "Adding debug handlers to kubelet server" May 14 18:14:24.285772 
kubelet[2626]: I0514 18:14:24.285738 2626 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 18:14:24.292351 kubelet[2626]: I0514 18:14:24.290435 2626 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 18:14:24.295722 kubelet[2626]: E0514 18:14:24.295692 2626 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 18:14:24.295969 kubelet[2626]: I0514 18:14:24.295892 2626 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 18:14:24.296124 kubelet[2626]: I0514 18:14:24.296077 2626 volume_manager.go:289] "Starting Kubelet Volume Manager" May 14 18:14:24.297031 kubelet[2626]: I0514 18:14:24.296870 2626 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 14 18:14:24.297031 kubelet[2626]: I0514 18:14:24.296936 2626 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 18:14:24.297031 kubelet[2626]: I0514 18:14:24.296989 2626 kubelet.go:2321] "Starting kubelet main sync loop" May 14 18:14:24.297146 kubelet[2626]: E0514 18:14:24.297031 2626 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 18:14:24.298424 kubelet[2626]: I0514 18:14:24.298390 2626 factory.go:221] Registration of the systemd container factory successfully May 14 18:14:24.298562 kubelet[2626]: I0514 18:14:24.298532 2626 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 18:14:24.299320 kubelet[2626]: I0514 18:14:24.299258 2626 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 14 18:14:24.299665 kubelet[2626]: 
I0514 18:14:24.299636 2626 reconciler.go:26] "Reconciler: start to sync state" May 14 18:14:24.301624 kubelet[2626]: I0514 18:14:24.301580 2626 factory.go:221] Registration of the containerd container factory successfully May 14 18:14:24.334020 kubelet[2626]: I0514 18:14:24.333869 2626 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 18:14:24.334020 kubelet[2626]: I0514 18:14:24.333888 2626 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 18:14:24.334020 kubelet[2626]: I0514 18:14:24.333910 2626 state_mem.go:36] "Initialized new in-memory state store" May 14 18:14:24.334995 kubelet[2626]: I0514 18:14:24.334939 2626 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 14 18:14:24.334995 kubelet[2626]: I0514 18:14:24.334978 2626 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 14 18:14:24.334995 kubelet[2626]: I0514 18:14:24.335006 2626 policy_none.go:49] "None policy: Start" May 14 18:14:24.335775 kubelet[2626]: I0514 18:14:24.335722 2626 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 18:14:24.335860 kubelet[2626]: I0514 18:14:24.335850 2626 state_mem.go:35] "Initializing new in-memory state store" May 14 18:14:24.336137 kubelet[2626]: I0514 18:14:24.336117 2626 state_mem.go:75] "Updated machine memory state" May 14 18:14:24.340977 kubelet[2626]: I0514 18:14:24.340932 2626 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 18:14:24.341487 kubelet[2626]: I0514 18:14:24.341469 2626 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 18:14:24.341909 kubelet[2626]: I0514 18:14:24.341491 2626 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 18:14:24.341909 kubelet[2626]: I0514 18:14:24.341779 2626 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 18:14:24.401078 kubelet[2626]: I0514 18:14:24.401012 2626 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 14 18:14:24.404619 kubelet[2626]: E0514 18:14:24.404520 2626 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 14 18:14:24.404693 kubelet[2626]: E0514 18:14:24.404652 2626 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 14 18:14:24.447283 kubelet[2626]: I0514 18:14:24.447250 2626 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 14 18:14:24.457000 kubelet[2626]: I0514 18:14:24.456390 2626 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 14 18:14:24.457000 kubelet[2626]: I0514 18:14:24.456485 2626 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 14 18:14:24.501724 kubelet[2626]: I0514 18:14:24.501682 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 18:14:24.501724 kubelet[2626]: I0514 18:14:24.501723 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 
18:14:24.501897 kubelet[2626]: I0514 18:14:24.501747 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 18:14:24.501897 kubelet[2626]: I0514 18:14:24.501766 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e1d58980a67f6aea7c91ccf516c2800f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e1d58980a67f6aea7c91ccf516c2800f\") " pod="kube-system/kube-apiserver-localhost" May 14 18:14:24.501897 kubelet[2626]: I0514 18:14:24.501782 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e1d58980a67f6aea7c91ccf516c2800f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e1d58980a67f6aea7c91ccf516c2800f\") " pod="kube-system/kube-apiserver-localhost" May 14 18:14:24.501897 kubelet[2626]: I0514 18:14:24.501797 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 18:14:24.501897 kubelet[2626]: I0514 18:14:24.501832 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e1d58980a67f6aea7c91ccf516c2800f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e1d58980a67f6aea7c91ccf516c2800f\") " pod="kube-system/kube-apiserver-localhost" May 
14 18:14:24.502046 kubelet[2626]: I0514 18:14:24.501850 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 18:14:25.273032 kubelet[2626]: I0514 18:14:25.272981 2626 apiserver.go:52] "Watching apiserver" May 14 18:14:25.299662 kubelet[2626]: I0514 18:14:25.299609 2626 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 14 18:14:25.321503 kubelet[2626]: E0514 18:14:25.321459 2626 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 14 18:14:25.322945 kubelet[2626]: E0514 18:14:25.322912 2626 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 14 18:14:25.371837 kubelet[2626]: I0514 18:14:25.371497 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.371479823 podStartE2EDuration="1.371479823s" podCreationTimestamp="2025-05-14 18:14:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:14:25.371434254 +0000 UTC m=+1.147448314" watchObservedRunningTime="2025-05-14 18:14:25.371479823 +0000 UTC m=+1.147493883" May 14 18:14:25.396816 kubelet[2626]: I0514 18:14:25.396766 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.396751649 podStartE2EDuration="2.396751649s" podCreationTimestamp="2025-05-14 18:14:23 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:14:25.387026812 +0000 UTC m=+1.163040832" watchObservedRunningTime="2025-05-14 18:14:25.396751649 +0000 UTC m=+1.172765669" May 14 18:14:25.397061 kubelet[2626]: I0514 18:14:25.397032 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.3970241030000001 podStartE2EDuration="1.397024103s" podCreationTimestamp="2025-05-14 18:14:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:14:25.396741358 +0000 UTC m=+1.172755418" watchObservedRunningTime="2025-05-14 18:14:25.397024103 +0000 UTC m=+1.173038163" May 14 18:14:28.669862 kubelet[2626]: I0514 18:14:28.669833 2626 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 14 18:14:28.670481 containerd[1523]: time="2025-05-14T18:14:28.670151055Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 14 18:14:28.671273 kubelet[2626]: I0514 18:14:28.670773 2626 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 14 18:14:28.915487 sudo[1727]: pam_unix(sudo:session): session closed for user root May 14 18:14:28.916566 sshd[1726]: Connection closed by 10.0.0.1 port 34042 May 14 18:14:28.917049 sshd-session[1724]: pam_unix(sshd:session): session closed for user core May 14 18:14:28.919866 systemd[1]: sshd@6-10.0.0.119:22-10.0.0.1:34042.service: Deactivated successfully. May 14 18:14:28.921688 systemd[1]: session-7.scope: Deactivated successfully. May 14 18:14:28.921899 systemd[1]: session-7.scope: Consumed 6.055s CPU time, 232M memory peak. May 14 18:14:28.924584 systemd-logind[1500]: Session 7 logged out. Waiting for processes to exit. 
May 14 18:14:28.926294 systemd-logind[1500]: Removed session 7. May 14 18:14:29.619913 systemd[1]: Created slice kubepods-besteffort-pod6ea4e0f2_d97a_456b_a287_53e5eda4b10a.slice - libcontainer container kubepods-besteffort-pod6ea4e0f2_d97a_456b_a287_53e5eda4b10a.slice. May 14 18:14:29.637301 kubelet[2626]: I0514 18:14:29.637254 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44qcl\" (UniqueName: \"kubernetes.io/projected/6ea4e0f2-d97a-456b-a287-53e5eda4b10a-kube-api-access-44qcl\") pod \"kube-proxy-29nw9\" (UID: \"6ea4e0f2-d97a-456b-a287-53e5eda4b10a\") " pod="kube-system/kube-proxy-29nw9" May 14 18:14:29.637490 kubelet[2626]: I0514 18:14:29.637296 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ea4e0f2-d97a-456b-a287-53e5eda4b10a-lib-modules\") pod \"kube-proxy-29nw9\" (UID: \"6ea4e0f2-d97a-456b-a287-53e5eda4b10a\") " pod="kube-system/kube-proxy-29nw9" May 14 18:14:29.637583 kubelet[2626]: I0514 18:14:29.637500 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6ea4e0f2-d97a-456b-a287-53e5eda4b10a-kube-proxy\") pod \"kube-proxy-29nw9\" (UID: \"6ea4e0f2-d97a-456b-a287-53e5eda4b10a\") " pod="kube-system/kube-proxy-29nw9" May 14 18:14:29.637583 kubelet[2626]: I0514 18:14:29.637527 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6ea4e0f2-d97a-456b-a287-53e5eda4b10a-xtables-lock\") pod \"kube-proxy-29nw9\" (UID: \"6ea4e0f2-d97a-456b-a287-53e5eda4b10a\") " pod="kube-system/kube-proxy-29nw9" May 14 18:14:29.737603 systemd[1]: Created slice kubepods-besteffort-podd14c711f_18e0_4543_be3c_8335b826f29e.slice - libcontainer container 
kubepods-besteffort-podd14c711f_18e0_4543_be3c_8335b826f29e.slice. May 14 18:14:29.737889 kubelet[2626]: I0514 18:14:29.737860 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb459\" (UniqueName: \"kubernetes.io/projected/d14c711f-18e0-4543-be3c-8335b826f29e-kube-api-access-lb459\") pod \"tigera-operator-6f6897fdc5-q77tn\" (UID: \"d14c711f-18e0-4543-be3c-8335b826f29e\") " pod="tigera-operator/tigera-operator-6f6897fdc5-q77tn" May 14 18:14:29.738149 kubelet[2626]: I0514 18:14:29.737922 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d14c711f-18e0-4543-be3c-8335b826f29e-var-lib-calico\") pod \"tigera-operator-6f6897fdc5-q77tn\" (UID: \"d14c711f-18e0-4543-be3c-8335b826f29e\") " pod="tigera-operator/tigera-operator-6f6897fdc5-q77tn" May 14 18:14:29.932918 containerd[1523]: time="2025-05-14T18:14:29.932818847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-29nw9,Uid:6ea4e0f2-d97a-456b-a287-53e5eda4b10a,Namespace:kube-system,Attempt:0,}" May 14 18:14:29.948930 containerd[1523]: time="2025-05-14T18:14:29.948882338Z" level=info msg="connecting to shim 0135c878df8bde2e6bab60d5163646e508185777d84e229de70cc0574ec1002a" address="unix:///run/containerd/s/d5f2f86a39bfb9b9bd91c77efa1ffab9f3f289e31402bfd7c03db689ed74b941" namespace=k8s.io protocol=ttrpc version=3 May 14 18:14:29.971090 systemd[1]: Started cri-containerd-0135c878df8bde2e6bab60d5163646e508185777d84e229de70cc0574ec1002a.scope - libcontainer container 0135c878df8bde2e6bab60d5163646e508185777d84e229de70cc0574ec1002a. 
May 14 18:14:29.991723 containerd[1523]: time="2025-05-14T18:14:29.991675119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-29nw9,Uid:6ea4e0f2-d97a-456b-a287-53e5eda4b10a,Namespace:kube-system,Attempt:0,} returns sandbox id \"0135c878df8bde2e6bab60d5163646e508185777d84e229de70cc0574ec1002a\"" May 14 18:14:29.995838 containerd[1523]: time="2025-05-14T18:14:29.995803796Z" level=info msg="CreateContainer within sandbox \"0135c878df8bde2e6bab60d5163646e508185777d84e229de70cc0574ec1002a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 14 18:14:30.004460 containerd[1523]: time="2025-05-14T18:14:30.004374833Z" level=info msg="Container c818e07dc9e478ba3e222c6fc2d6fa1478ca0474a10712524bc2f901840a3f67: CDI devices from CRI Config.CDIDevices: []" May 14 18:14:30.012084 containerd[1523]: time="2025-05-14T18:14:30.012047101Z" level=info msg="CreateContainer within sandbox \"0135c878df8bde2e6bab60d5163646e508185777d84e229de70cc0574ec1002a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c818e07dc9e478ba3e222c6fc2d6fa1478ca0474a10712524bc2f901840a3f67\"" May 14 18:14:30.012815 containerd[1523]: time="2025-05-14T18:14:30.012783515Z" level=info msg="StartContainer for \"c818e07dc9e478ba3e222c6fc2d6fa1478ca0474a10712524bc2f901840a3f67\"" May 14 18:14:30.015285 containerd[1523]: time="2025-05-14T18:14:30.015245517Z" level=info msg="connecting to shim c818e07dc9e478ba3e222c6fc2d6fa1478ca0474a10712524bc2f901840a3f67" address="unix:///run/containerd/s/d5f2f86a39bfb9b9bd91c77efa1ffab9f3f289e31402bfd7c03db689ed74b941" protocol=ttrpc version=3 May 14 18:14:30.035093 systemd[1]: Started cri-containerd-c818e07dc9e478ba3e222c6fc2d6fa1478ca0474a10712524bc2f901840a3f67.scope - libcontainer container c818e07dc9e478ba3e222c6fc2d6fa1478ca0474a10712524bc2f901840a3f67. 
May 14 18:14:30.041414 containerd[1523]: time="2025-05-14T18:14:30.041385677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-q77tn,Uid:d14c711f-18e0-4543-be3c-8335b826f29e,Namespace:tigera-operator,Attempt:0,}" May 14 18:14:30.056244 containerd[1523]: time="2025-05-14T18:14:30.056209967Z" level=info msg="connecting to shim b8921c49fc68fb487f1e0c2cbeca44b5aa5c324386ccf5bbcbafb6fee8e14e95" address="unix:///run/containerd/s/0ea695cb9826e93f5b5cde2da2adb0b434ac219de4ce7224ec23649eb9c235ef" namespace=k8s.io protocol=ttrpc version=3 May 14 18:14:30.070190 containerd[1523]: time="2025-05-14T18:14:30.070079511Z" level=info msg="StartContainer for \"c818e07dc9e478ba3e222c6fc2d6fa1478ca0474a10712524bc2f901840a3f67\" returns successfully" May 14 18:14:30.091707 systemd[1]: Started cri-containerd-b8921c49fc68fb487f1e0c2cbeca44b5aa5c324386ccf5bbcbafb6fee8e14e95.scope - libcontainer container b8921c49fc68fb487f1e0c2cbeca44b5aa5c324386ccf5bbcbafb6fee8e14e95. May 14 18:14:30.125282 containerd[1523]: time="2025-05-14T18:14:30.125238678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-q77tn,Uid:d14c711f-18e0-4543-be3c-8335b826f29e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b8921c49fc68fb487f1e0c2cbeca44b5aa5c324386ccf5bbcbafb6fee8e14e95\"" May 14 18:14:30.127373 containerd[1523]: time="2025-05-14T18:14:30.127347604Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 14 18:14:30.337765 kubelet[2626]: I0514 18:14:30.337624 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-29nw9" podStartSLOduration=1.337607937 podStartE2EDuration="1.337607937s" podCreationTimestamp="2025-05-14 18:14:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:14:30.337539043 +0000 UTC m=+6.113553103" watchObservedRunningTime="2025-05-14 
18:14:30.337607937 +0000 UTC m=+6.113621997" May 14 18:14:31.544285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3463828569.mount: Deactivated successfully. May 14 18:14:32.189006 containerd[1523]: time="2025-05-14T18:14:32.188795342Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:32.189372 containerd[1523]: time="2025-05-14T18:14:32.189153868Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084" May 14 18:14:32.190031 containerd[1523]: time="2025-05-14T18:14:32.190005773Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:32.192359 containerd[1523]: time="2025-05-14T18:14:32.192325604Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:32.193014 containerd[1523]: time="2025-05-14T18:14:32.192989259Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 2.065611312s" May 14 18:14:32.193057 containerd[1523]: time="2025-05-14T18:14:32.193016998Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\"" May 14 18:14:32.195772 containerd[1523]: time="2025-05-14T18:14:32.195725296Z" level=info msg="CreateContainer within sandbox \"b8921c49fc68fb487f1e0c2cbeca44b5aa5c324386ccf5bbcbafb6fee8e14e95\" for container 
&ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 14 18:14:32.204000 containerd[1523]: time="2025-05-14T18:14:32.203760849Z" level=info msg="Container c0cb1a9677117b6bb2e11153602bc08c034e090fdfd0ceff9b42e25709e07cc2: CDI devices from CRI Config.CDIDevices: []" May 14 18:14:32.208896 containerd[1523]: time="2025-05-14T18:14:32.208851701Z" level=info msg="CreateContainer within sandbox \"b8921c49fc68fb487f1e0c2cbeca44b5aa5c324386ccf5bbcbafb6fee8e14e95\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c0cb1a9677117b6bb2e11153602bc08c034e090fdfd0ceff9b42e25709e07cc2\"" May 14 18:14:32.209381 containerd[1523]: time="2025-05-14T18:14:32.209345119Z" level=info msg="StartContainer for \"c0cb1a9677117b6bb2e11153602bc08c034e090fdfd0ceff9b42e25709e07cc2\"" May 14 18:14:32.210404 containerd[1523]: time="2025-05-14T18:14:32.210376747Z" level=info msg="connecting to shim c0cb1a9677117b6bb2e11153602bc08c034e090fdfd0ceff9b42e25709e07cc2" address="unix:///run/containerd/s/0ea695cb9826e93f5b5cde2da2adb0b434ac219de4ce7224ec23649eb9c235ef" protocol=ttrpc version=3 May 14 18:14:32.229123 systemd[1]: Started cri-containerd-c0cb1a9677117b6bb2e11153602bc08c034e090fdfd0ceff9b42e25709e07cc2.scope - libcontainer container c0cb1a9677117b6bb2e11153602bc08c034e090fdfd0ceff9b42e25709e07cc2. 
May 14 18:14:32.256501 containerd[1523]: time="2025-05-14T18:14:32.256419452Z" level=info msg="StartContainer for \"c0cb1a9677117b6bb2e11153602bc08c034e090fdfd0ceff9b42e25709e07cc2\" returns successfully" May 14 18:14:32.342889 kubelet[2626]: I0514 18:14:32.342823 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6f6897fdc5-q77tn" podStartSLOduration=1.275299725 podStartE2EDuration="3.342741227s" podCreationTimestamp="2025-05-14 18:14:29 +0000 UTC" firstStartedPulling="2025-05-14 18:14:30.126376286 +0000 UTC m=+5.902390346" lastFinishedPulling="2025-05-14 18:14:32.193817828 +0000 UTC m=+7.969831848" observedRunningTime="2025-05-14 18:14:32.342554579 +0000 UTC m=+8.118568639" watchObservedRunningTime="2025-05-14 18:14:32.342741227 +0000 UTC m=+8.118755287" May 14 18:14:37.283688 systemd[1]: Created slice kubepods-besteffort-pod8d2540df_28ab_452a_a202_de0a851cad5a.slice - libcontainer container kubepods-besteffort-pod8d2540df_28ab_452a_a202_de0a851cad5a.slice. 
May 14 18:14:37.288345 kubelet[2626]: I0514 18:14:37.287938 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prn4h\" (UniqueName: \"kubernetes.io/projected/8d2540df-28ab-452a-a202-de0a851cad5a-kube-api-access-prn4h\") pod \"calico-typha-5c45c7f86c-m6g2v\" (UID: \"8d2540df-28ab-452a-a202-de0a851cad5a\") " pod="calico-system/calico-typha-5c45c7f86c-m6g2v" May 14 18:14:37.289359 kubelet[2626]: I0514 18:14:37.289290 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d2540df-28ab-452a-a202-de0a851cad5a-tigera-ca-bundle\") pod \"calico-typha-5c45c7f86c-m6g2v\" (UID: \"8d2540df-28ab-452a-a202-de0a851cad5a\") " pod="calico-system/calico-typha-5c45c7f86c-m6g2v" May 14 18:14:37.289359 kubelet[2626]: I0514 18:14:37.289330 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8d2540df-28ab-452a-a202-de0a851cad5a-typha-certs\") pod \"calico-typha-5c45c7f86c-m6g2v\" (UID: \"8d2540df-28ab-452a-a202-de0a851cad5a\") " pod="calico-system/calico-typha-5c45c7f86c-m6g2v" May 14 18:14:37.378167 systemd[1]: Created slice kubepods-besteffort-podad204f77_3166_479d_93b7_db06e21e14fc.slice - libcontainer container kubepods-besteffort-podad204f77_3166_479d_93b7_db06e21e14fc.slice. 
May 14 18:14:37.389561 kubelet[2626]: I0514 18:14:37.389518 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ad204f77-3166-479d-93b7-db06e21e14fc-var-run-calico\") pod \"calico-node-srp7v\" (UID: \"ad204f77-3166-479d-93b7-db06e21e14fc\") " pod="calico-system/calico-node-srp7v" May 14 18:14:37.389561 kubelet[2626]: I0514 18:14:37.389561 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ad204f77-3166-479d-93b7-db06e21e14fc-flexvol-driver-host\") pod \"calico-node-srp7v\" (UID: \"ad204f77-3166-479d-93b7-db06e21e14fc\") " pod="calico-system/calico-node-srp7v" May 14 18:14:37.389754 kubelet[2626]: I0514 18:14:37.389594 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ad204f77-3166-479d-93b7-db06e21e14fc-cni-bin-dir\") pod \"calico-node-srp7v\" (UID: \"ad204f77-3166-479d-93b7-db06e21e14fc\") " pod="calico-system/calico-node-srp7v" May 14 18:14:37.389754 kubelet[2626]: I0514 18:14:37.389615 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ad204f77-3166-479d-93b7-db06e21e14fc-cni-net-dir\") pod \"calico-node-srp7v\" (UID: \"ad204f77-3166-479d-93b7-db06e21e14fc\") " pod="calico-system/calico-node-srp7v" May 14 18:14:37.389754 kubelet[2626]: I0514 18:14:37.389637 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ad204f77-3166-479d-93b7-db06e21e14fc-cni-log-dir\") pod \"calico-node-srp7v\" (UID: \"ad204f77-3166-479d-93b7-db06e21e14fc\") " pod="calico-system/calico-node-srp7v" May 14 18:14:37.389754 kubelet[2626]: I0514 18:14:37.389657 2626 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ad204f77-3166-479d-93b7-db06e21e14fc-policysync\") pod \"calico-node-srp7v\" (UID: \"ad204f77-3166-479d-93b7-db06e21e14fc\") " pod="calico-system/calico-node-srp7v" May 14 18:14:37.389754 kubelet[2626]: I0514 18:14:37.389688 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad204f77-3166-479d-93b7-db06e21e14fc-tigera-ca-bundle\") pod \"calico-node-srp7v\" (UID: \"ad204f77-3166-479d-93b7-db06e21e14fc\") " pod="calico-system/calico-node-srp7v" May 14 18:14:37.389868 kubelet[2626]: I0514 18:14:37.389708 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ad204f77-3166-479d-93b7-db06e21e14fc-node-certs\") pod \"calico-node-srp7v\" (UID: \"ad204f77-3166-479d-93b7-db06e21e14fc\") " pod="calico-system/calico-node-srp7v" May 14 18:14:37.389868 kubelet[2626]: I0514 18:14:37.389729 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb527\" (UniqueName: \"kubernetes.io/projected/ad204f77-3166-479d-93b7-db06e21e14fc-kube-api-access-tb527\") pod \"calico-node-srp7v\" (UID: \"ad204f77-3166-479d-93b7-db06e21e14fc\") " pod="calico-system/calico-node-srp7v" May 14 18:14:37.389868 kubelet[2626]: I0514 18:14:37.389761 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ad204f77-3166-479d-93b7-db06e21e14fc-xtables-lock\") pod \"calico-node-srp7v\" (UID: \"ad204f77-3166-479d-93b7-db06e21e14fc\") " pod="calico-system/calico-node-srp7v" May 14 18:14:37.389868 kubelet[2626]: I0514 18:14:37.389784 2626 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ad204f77-3166-479d-93b7-db06e21e14fc-var-lib-calico\") pod \"calico-node-srp7v\" (UID: \"ad204f77-3166-479d-93b7-db06e21e14fc\") " pod="calico-system/calico-node-srp7v" May 14 18:14:37.389868 kubelet[2626]: I0514 18:14:37.389812 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ad204f77-3166-479d-93b7-db06e21e14fc-lib-modules\") pod \"calico-node-srp7v\" (UID: \"ad204f77-3166-479d-93b7-db06e21e14fc\") " pod="calico-system/calico-node-srp7v" May 14 18:14:37.483266 kubelet[2626]: E0514 18:14:37.483188 2626 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sg55z" podUID="2b1d9552-b1a9-492e-abe0-6d929715d5ec" May 14 18:14:37.491967 kubelet[2626]: E0514 18:14:37.491871 2626 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:37.491967 kubelet[2626]: W0514 18:14:37.491902 2626 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:37.491967 kubelet[2626]: E0514 18:14:37.491932 2626 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:14:37.492322 kubelet[2626]: E0514 18:14:37.492301 2626 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:37.492322 kubelet[2626]: W0514 18:14:37.492317 2626 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:37.492400 kubelet[2626]: E0514 18:14:37.492331 2626 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:14:37.492838 kubelet[2626]: E0514 18:14:37.492750 2626 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:37.492838 kubelet[2626]: W0514 18:14:37.492768 2626 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:37.492838 kubelet[2626]: E0514 18:14:37.492781 2626 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:14:37.501065 kubelet[2626]: E0514 18:14:37.501035 2626 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:37.501065 kubelet[2626]: W0514 18:14:37.501058 2626 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:37.501200 kubelet[2626]: E0514 18:14:37.501078 2626 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:14:37.505060 kubelet[2626]: E0514 18:14:37.505026 2626 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:37.505060 kubelet[2626]: W0514 18:14:37.505059 2626 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:37.505173 kubelet[2626]: E0514 18:14:37.505156 2626 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:14:37.506946 kubelet[2626]: E0514 18:14:37.506912 2626 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:37.506946 kubelet[2626]: W0514 18:14:37.506937 2626 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:37.507062 kubelet[2626]: E0514 18:14:37.506995 2626 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:14:37.507164 kubelet[2626]: E0514 18:14:37.507146 2626 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:37.507164 kubelet[2626]: W0514 18:14:37.507160 2626 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:37.507232 kubelet[2626]: E0514 18:14:37.507173 2626 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:14:37.586092 kubelet[2626]: E0514 18:14:37.585988 2626 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:37.586092 kubelet[2626]: W0514 18:14:37.586015 2626 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:37.586092 kubelet[2626]: E0514 18:14:37.586039 2626 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:14:37.586927 kubelet[2626]: E0514 18:14:37.586907 2626 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:37.586927 kubelet[2626]: W0514 18:14:37.586925 2626 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:37.587028 kubelet[2626]: E0514 18:14:37.586940 2626 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:14:37.587192 kubelet[2626]: E0514 18:14:37.587155 2626 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:37.587192 kubelet[2626]: W0514 18:14:37.587169 2626 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:37.587192 kubelet[2626]: E0514 18:14:37.587179 2626 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:14:37.587378 kubelet[2626]: E0514 18:14:37.587346 2626 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:37.587378 kubelet[2626]: W0514 18:14:37.587360 2626 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:37.587378 kubelet[2626]: E0514 18:14:37.587369 2626 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:14:37.589100 kubelet[2626]: E0514 18:14:37.589066 2626 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:37.589223 kubelet[2626]: W0514 18:14:37.589127 2626 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:37.589223 kubelet[2626]: E0514 18:14:37.589143 2626 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:14:37.589575 kubelet[2626]: E0514 18:14:37.589556 2626 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:37.589575 kubelet[2626]: W0514 18:14:37.589576 2626 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:37.589688 kubelet[2626]: E0514 18:14:37.589588 2626 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:14:37.589895 kubelet[2626]: E0514 18:14:37.589860 2626 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:37.589895 kubelet[2626]: W0514 18:14:37.589895 2626 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:37.590053 kubelet[2626]: E0514 18:14:37.589906 2626 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:14:37.590151 kubelet[2626]: E0514 18:14:37.590133 2626 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:37.590151 kubelet[2626]: W0514 18:14:37.590149 2626 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:37.590353 kubelet[2626]: E0514 18:14:37.590159 2626 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:14:37.590496 kubelet[2626]: E0514 18:14:37.590443 2626 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:37.590496 kubelet[2626]: W0514 18:14:37.590494 2626 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:37.590692 kubelet[2626]: E0514 18:14:37.590507 2626 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:14:37.590692 kubelet[2626]: E0514 18:14:37.590653 2626 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:37.590692 kubelet[2626]: W0514 18:14:37.590660 2626 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:37.590692 kubelet[2626]: E0514 18:14:37.590669 2626 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:14:37.590820 kubelet[2626]: E0514 18:14:37.590801 2626 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:37.590820 kubelet[2626]: W0514 18:14:37.590814 2626 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:37.590869 kubelet[2626]: E0514 18:14:37.590822 2626 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:14:37.591039 kubelet[2626]: E0514 18:14:37.590977 2626 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:37.591039 kubelet[2626]: W0514 18:14:37.590989 2626 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:37.591039 kubelet[2626]: E0514 18:14:37.590997 2626 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:14:37.591984 containerd[1523]: time="2025-05-14T18:14:37.591922711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c45c7f86c-m6g2v,Uid:8d2540df-28ab-452a-a202-de0a851cad5a,Namespace:calico-system,Attempt:0,}" May 14 18:14:37.592876 kubelet[2626]: E0514 18:14:37.592814 2626 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:37.592876 kubelet[2626]: W0514 18:14:37.592843 2626 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:37.592876 kubelet[2626]: E0514 18:14:37.592868 2626 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:14:37.593334 kubelet[2626]: E0514 18:14:37.593065 2626 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:37.593334 kubelet[2626]: W0514 18:14:37.593078 2626 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:37.593334 kubelet[2626]: E0514 18:14:37.593088 2626 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:14:37.593334 kubelet[2626]: E0514 18:14:37.593245 2626 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:37.593334 kubelet[2626]: W0514 18:14:37.593254 2626 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:37.593334 kubelet[2626]: E0514 18:14:37.593263 2626 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:14:37.593456 kubelet[2626]: E0514 18:14:37.593407 2626 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:37.593456 kubelet[2626]: W0514 18:14:37.593415 2626 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:37.593456 kubelet[2626]: E0514 18:14:37.593424 2626 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:14:37.593687 kubelet[2626]: E0514 18:14:37.593579 2626 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:37.593687 kubelet[2626]: W0514 18:14:37.593594 2626 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:37.593687 kubelet[2626]: E0514 18:14:37.593603 2626 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:14:37.593778 kubelet[2626]: E0514 18:14:37.593753 2626 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:37.593778 kubelet[2626]: W0514 18:14:37.593763 2626 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:37.593778 kubelet[2626]: E0514 18:14:37.593772 2626 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:14:37.594246 kubelet[2626]: E0514 18:14:37.594081 2626 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:37.594246 kubelet[2626]: W0514 18:14:37.594098 2626 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:37.594246 kubelet[2626]: E0514 18:14:37.594111 2626 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:14:37.594368 kubelet[2626]: E0514 18:14:37.594264 2626 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:37.594368 kubelet[2626]: W0514 18:14:37.594274 2626 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:37.594368 kubelet[2626]: E0514 18:14:37.594283 2626 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:14:37.594605 kubelet[2626]: E0514 18:14:37.594516 2626 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:37.594605 kubelet[2626]: W0514 18:14:37.594530 2626 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:37.594605 kubelet[2626]: E0514 18:14:37.594539 2626 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:14:37.594605 kubelet[2626]: I0514 18:14:37.594563 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2b1d9552-b1a9-492e-abe0-6d929715d5ec-registration-dir\") pod \"csi-node-driver-sg55z\" (UID: \"2b1d9552-b1a9-492e-abe0-6d929715d5ec\") " pod="calico-system/csi-node-driver-sg55z" May 14 18:14:37.594721 kubelet[2626]: E0514 18:14:37.594708 2626 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:37.594721 kubelet[2626]: W0514 18:14:37.594717 2626 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:37.594766 kubelet[2626]: E0514 18:14:37.594731 2626 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:14:37.594766 kubelet[2626]: I0514 18:14:37.594747 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt5v2\" (UniqueName: \"kubernetes.io/projected/2b1d9552-b1a9-492e-abe0-6d929715d5ec-kube-api-access-kt5v2\") pod \"csi-node-driver-sg55z\" (UID: \"2b1d9552-b1a9-492e-abe0-6d929715d5ec\") " pod="calico-system/csi-node-driver-sg55z" May 14 18:14:37.595275 kubelet[2626]: E0514 18:14:37.594884 2626 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:37.595275 kubelet[2626]: W0514 18:14:37.594896 2626 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:37.595275 kubelet[2626]: E0514 18:14:37.594910 2626 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:14:37.595275 kubelet[2626]: I0514 18:14:37.594923 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2b1d9552-b1a9-492e-abe0-6d929715d5ec-socket-dir\") pod \"csi-node-driver-sg55z\" (UID: \"2b1d9552-b1a9-492e-abe0-6d929715d5ec\") " pod="calico-system/csi-node-driver-sg55z" May 14 18:14:37.595275 kubelet[2626]: E0514 18:14:37.595146 2626 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:37.595275 kubelet[2626]: W0514 18:14:37.595159 2626 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:37.595275 kubelet[2626]: E0514 18:14:37.595180 2626 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:14:37.595275 kubelet[2626]: I0514 18:14:37.595195 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2b1d9552-b1a9-492e-abe0-6d929715d5ec-varrun\") pod \"csi-node-driver-sg55z\" (UID: \"2b1d9552-b1a9-492e-abe0-6d929715d5ec\") " pod="calico-system/csi-node-driver-sg55z" May 14 18:14:37.595477 kubelet[2626]: E0514 18:14:37.595371 2626 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:37.595477 kubelet[2626]: W0514 18:14:37.595380 2626 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:37.595477 kubelet[2626]: E0514 18:14:37.595393 2626 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:14:37.595477 kubelet[2626]: I0514 18:14:37.595410 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2b1d9552-b1a9-492e-abe0-6d929715d5ec-kubelet-dir\") pod \"csi-node-driver-sg55z\" (UID: \"2b1d9552-b1a9-492e-abe0-6d929715d5ec\") " pod="calico-system/csi-node-driver-sg55z" May 14 18:14:37.595917 kubelet[2626]: E0514 18:14:37.595892 2626 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:37.595917 kubelet[2626]: W0514 18:14:37.595911 2626 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:37.596314 kubelet[2626]: E0514 18:14:37.595935 2626 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:14:37.596314 kubelet[2626]: E0514 18:14:37.596107 2626 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:37.596314 kubelet[2626]: W0514 18:14:37.596116 2626 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:37.596314 kubelet[2626]: E0514 18:14:37.596142 2626 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:14:37.596314 kubelet[2626]: E0514 18:14:37.596271 2626 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:37.596314 kubelet[2626]: W0514 18:14:37.596278 2626 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:37.596465 kubelet[2626]: E0514 18:14:37.596347 2626 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:14:37.596465 kubelet[2626]: E0514 18:14:37.596443 2626 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:37.596465 kubelet[2626]: W0514 18:14:37.596452 2626 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:37.596528 kubelet[2626]: E0514 18:14:37.596507 2626 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:14:37.596791 kubelet[2626]: E0514 18:14:37.596679 2626 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:37.596791 kubelet[2626]: W0514 18:14:37.596693 2626 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:37.596791 kubelet[2626]: E0514 18:14:37.596775 2626 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:14:37.597195 kubelet[2626]: E0514 18:14:37.597179 2626 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:37.597195 kubelet[2626]: W0514 18:14:37.597195 2626 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:37.597371 kubelet[2626]: E0514 18:14:37.597351 2626 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:14:37.597533 kubelet[2626]: E0514 18:14:37.597519 2626 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:37.597556 kubelet[2626]: W0514 18:14:37.597533 2626 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:37.597556 kubelet[2626]: E0514 18:14:37.597544 2626 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:14:37.597970 kubelet[2626]: E0514 18:14:37.597912 2626 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:37.597970 kubelet[2626]: W0514 18:14:37.597929 2626 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:37.597970 kubelet[2626]: E0514 18:14:37.597940 2626 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:14:37.598199 kubelet[2626]: E0514 18:14:37.598182 2626 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:37.598199 kubelet[2626]: W0514 18:14:37.598198 2626 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:37.598363 kubelet[2626]: E0514 18:14:37.598208 2626 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:14:37.599121 kubelet[2626]: E0514 18:14:37.599046 2626 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:37.599121 kubelet[2626]: W0514 18:14:37.599065 2626 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:37.599121 kubelet[2626]: E0514 18:14:37.599079 2626 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:14:37.696093 kubelet[2626]: E0514 18:14:37.696066 2626 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:37.696225 kubelet[2626]: W0514 18:14:37.696086 2626 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:37.696225 kubelet[2626]: E0514 18:14:37.696157 2626 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:14:37.696683 kubelet[2626]: E0514 18:14:37.696656 2626 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:37.696752 containerd[1523]: time="2025-05-14T18:14:37.696709418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-srp7v,Uid:ad204f77-3166-479d-93b7-db06e21e14fc,Namespace:calico-system,Attempt:0,}" May 14 18:14:37.696940 kubelet[2626]: W0514 18:14:37.696787 2626 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:37.696940 kubelet[2626]: E0514 18:14:37.696822 2626 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 14 18:14:37.726085 containerd[1523]: time="2025-05-14T18:14:37.726031668Z" level=info msg="connecting to shim 0d2e7b628be8302de7716f55b3283e5b069ce7b207528630b7dc37f8c3cbdfe1" address="unix:///run/containerd/s/c512ee31ccbfc70187a3c208d0a453c4aaf63f7de80abbb4b11aa8aef6485d51" namespace=k8s.io protocol=ttrpc version=3 May 14 18:14:37.733839 containerd[1523]: time="2025-05-14T18:14:37.733797526Z" level=info msg="connecting to shim ca3731175868405a0085cfb59b27e0ec687d49cd02c6b1ece8929600ed62a876" address="unix:///run/containerd/s/646c2b7f45907b8d0d511f6d8b4dff15a5e40f51362680a56bedfd940a261b6b" namespace=k8s.io protocol=ttrpc version=3 May 14 18:14:37.752122 systemd[1]: Started cri-containerd-0d2e7b628be8302de7716f55b3283e5b069ce7b207528630b7dc37f8c3cbdfe1.scope - libcontainer container 0d2e7b628be8302de7716f55b3283e5b069ce7b207528630b7dc37f8c3cbdfe1. May 14 18:14:37.774104 systemd[1]: Started cri-containerd-ca3731175868405a0085cfb59b27e0ec687d49cd02c6b1ece8929600ed62a876.scope - libcontainer container ca3731175868405a0085cfb59b27e0ec687d49cd02c6b1ece8929600ed62a876. 
May 14 18:14:37.838619 containerd[1523]: time="2025-05-14T18:14:37.838231338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c45c7f86c-m6g2v,Uid:8d2540df-28ab-452a-a202-de0a851cad5a,Namespace:calico-system,Attempt:0,} returns sandbox id \"0d2e7b628be8302de7716f55b3283e5b069ce7b207528630b7dc37f8c3cbdfe1\"" May 14 18:14:37.844188 containerd[1523]: time="2025-05-14T18:14:37.844139113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-srp7v,Uid:ad204f77-3166-479d-93b7-db06e21e14fc,Namespace:calico-system,Attempt:0,} returns sandbox id \"ca3731175868405a0085cfb59b27e0ec687d49cd02c6b1ece8929600ed62a876\"" May 14 18:14:37.850567 containerd[1523]: time="2025-05-14T18:14:37.850535291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 14 18:14:38.879596 containerd[1523]: time="2025-05-14T18:14:38.879544841Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:38.880149 containerd[1523]: time="2025-05-14T18:14:38.880106622Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" May 14 18:14:38.880720 containerd[1523]: time="2025-05-14T18:14:38.880688654Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:38.882629 containerd[1523]: time="2025-05-14T18:14:38.882600824Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:38.883168 containerd[1523]: time="2025-05-14T18:14:38.883137354Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id 
\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 1.032565005s" May 14 18:14:38.883214 containerd[1523]: time="2025-05-14T18:14:38.883169769Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 14 18:14:38.884337 containerd[1523]: time="2025-05-14T18:14:38.884300256Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 14 18:14:38.886610 containerd[1523]: time="2025-05-14T18:14:38.886567632Z" level=info msg="CreateContainer within sandbox \"ca3731175868405a0085cfb59b27e0ec687d49cd02c6b1ece8929600ed62a876\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 14 18:14:38.895106 containerd[1523]: time="2025-05-14T18:14:38.895059028Z" level=info msg="Container 4d99d1f3b47f6d055469d12026f0a67337cc4d7d9bfc9aaa60f8fcf67e620f84: CDI devices from CRI Config.CDIDevices: []" May 14 18:14:38.905567 containerd[1523]: time="2025-05-14T18:14:38.905518420Z" level=info msg="CreateContainer within sandbox \"ca3731175868405a0085cfb59b27e0ec687d49cd02c6b1ece8929600ed62a876\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4d99d1f3b47f6d055469d12026f0a67337cc4d7d9bfc9aaa60f8fcf67e620f84\"" May 14 18:14:38.906223 containerd[1523]: time="2025-05-14T18:14:38.906193175Z" level=info msg="StartContainer for \"4d99d1f3b47f6d055469d12026f0a67337cc4d7d9bfc9aaa60f8fcf67e620f84\"" May 14 18:14:38.907703 containerd[1523]: time="2025-05-14T18:14:38.907656736Z" level=info msg="connecting to shim 4d99d1f3b47f6d055469d12026f0a67337cc4d7d9bfc9aaa60f8fcf67e620f84" address="unix:///run/containerd/s/646c2b7f45907b8d0d511f6d8b4dff15a5e40f51362680a56bedfd940a261b6b" 
protocol=ttrpc version=3 May 14 18:14:38.931146 systemd[1]: Started cri-containerd-4d99d1f3b47f6d055469d12026f0a67337cc4d7d9bfc9aaa60f8fcf67e620f84.scope - libcontainer container 4d99d1f3b47f6d055469d12026f0a67337cc4d7d9bfc9aaa60f8fcf67e620f84. May 14 18:14:38.966018 containerd[1523]: time="2025-05-14T18:14:38.965982227Z" level=info msg="StartContainer for \"4d99d1f3b47f6d055469d12026f0a67337cc4d7d9bfc9aaa60f8fcf67e620f84\" returns successfully" May 14 18:14:39.007690 systemd[1]: cri-containerd-4d99d1f3b47f6d055469d12026f0a67337cc4d7d9bfc9aaa60f8fcf67e620f84.scope: Deactivated successfully. May 14 18:14:39.008252 systemd[1]: cri-containerd-4d99d1f3b47f6d055469d12026f0a67337cc4d7d9bfc9aaa60f8fcf67e620f84.scope: Consumed 57ms CPU time, 8M memory peak, 6.2M written to disk. May 14 18:14:39.036815 containerd[1523]: time="2025-05-14T18:14:39.036033679Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4d99d1f3b47f6d055469d12026f0a67337cc4d7d9bfc9aaa60f8fcf67e620f84\" id:\"4d99d1f3b47f6d055469d12026f0a67337cc4d7d9bfc9aaa60f8fcf67e620f84\" pid:3204 exited_at:{seconds:1747246479 nanos:24206554}" May 14 18:14:39.037210 containerd[1523]: time="2025-05-14T18:14:39.037007304Z" level=info msg="received exit event container_id:\"4d99d1f3b47f6d055469d12026f0a67337cc4d7d9bfc9aaa60f8fcf67e620f84\" id:\"4d99d1f3b47f6d055469d12026f0a67337cc4d7d9bfc9aaa60f8fcf67e620f84\" pid:3204 exited_at:{seconds:1747246479 nanos:24206554}" May 14 18:14:39.072657 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d99d1f3b47f6d055469d12026f0a67337cc4d7d9bfc9aaa60f8fcf67e620f84-rootfs.mount: Deactivated successfully. 
May 14 18:14:39.298309 kubelet[2626]: E0514 18:14:39.297826 2626 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sg55z" podUID="2b1d9552-b1a9-492e-abe0-6d929715d5ec" May 14 18:14:40.135121 containerd[1523]: time="2025-05-14T18:14:40.135066651Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:40.136141 containerd[1523]: time="2025-05-14T18:14:40.136093872Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571" May 14 18:14:40.136974 containerd[1523]: time="2025-05-14T18:14:40.136899241Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:40.139427 containerd[1523]: time="2025-05-14T18:14:40.139375655Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:40.139976 containerd[1523]: time="2025-05-14T18:14:40.139825079Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 1.25549661s" May 14 18:14:40.139976 containerd[1523]: time="2025-05-14T18:14:40.139853131Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference 
\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" May 14 18:14:40.141193 containerd[1523]: time="2025-05-14T18:14:40.141149502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 14 18:14:40.153668 containerd[1523]: time="2025-05-14T18:14:40.153518206Z" level=info msg="CreateContainer within sandbox \"0d2e7b628be8302de7716f55b3283e5b069ce7b207528630b7dc37f8c3cbdfe1\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 14 18:14:40.160806 containerd[1523]: time="2025-05-14T18:14:40.160751248Z" level=info msg="Container 7062d3b70d37af2db1454d4e00f1c940f7214bf9e036c47662414c824bd809e2: CDI devices from CRI Config.CDIDevices: []" May 14 18:14:40.167487 containerd[1523]: time="2025-05-14T18:14:40.167384204Z" level=info msg="CreateContainer within sandbox \"0d2e7b628be8302de7716f55b3283e5b069ce7b207528630b7dc37f8c3cbdfe1\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7062d3b70d37af2db1454d4e00f1c940f7214bf9e036c47662414c824bd809e2\"" May 14 18:14:40.169067 containerd[1523]: time="2025-05-14T18:14:40.169037160Z" level=info msg="StartContainer for \"7062d3b70d37af2db1454d4e00f1c940f7214bf9e036c47662414c824bd809e2\"" May 14 18:14:40.171155 containerd[1523]: time="2025-05-14T18:14:40.171120413Z" level=info msg="connecting to shim 7062d3b70d37af2db1454d4e00f1c940f7214bf9e036c47662414c824bd809e2" address="unix:///run/containerd/s/c512ee31ccbfc70187a3c208d0a453c4aaf63f7de80abbb4b11aa8aef6485d51" protocol=ttrpc version=3 May 14 18:14:40.194200 systemd[1]: Started cri-containerd-7062d3b70d37af2db1454d4e00f1c940f7214bf9e036c47662414c824bd809e2.scope - libcontainer container 7062d3b70d37af2db1454d4e00f1c940f7214bf9e036c47662414c824bd809e2. 
May 14 18:14:40.230174 containerd[1523]: time="2025-05-14T18:14:40.230064148Z" level=info msg="StartContainer for \"7062d3b70d37af2db1454d4e00f1c940f7214bf9e036c47662414c824bd809e2\" returns successfully" May 14 18:14:40.432155 update_engine[1504]: I20250514 18:14:40.432009 1504 update_attempter.cc:509] Updating boot flags... May 14 18:14:41.297649 kubelet[2626]: E0514 18:14:41.297602 2626 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sg55z" podUID="2b1d9552-b1a9-492e-abe0-6d929715d5ec" May 14 18:14:41.363946 kubelet[2626]: I0514 18:14:41.363876 2626 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 18:14:43.297490 kubelet[2626]: E0514 18:14:43.297398 2626 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sg55z" podUID="2b1d9552-b1a9-492e-abe0-6d929715d5ec" May 14 18:14:44.713516 containerd[1523]: time="2025-05-14T18:14:44.713460519Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:44.714024 containerd[1523]: time="2025-05-14T18:14:44.713961837Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" May 14 18:14:44.715224 containerd[1523]: time="2025-05-14T18:14:44.715176581Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:44.718707 containerd[1523]: time="2025-05-14T18:14:44.718660643Z" level=info msg="ImageCreate 
event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:44.720407 containerd[1523]: time="2025-05-14T18:14:44.719922202Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 4.578720721s" May 14 18:14:44.720407 containerd[1523]: time="2025-05-14T18:14:44.719999027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" May 14 18:14:44.723629 containerd[1523]: time="2025-05-14T18:14:44.723586602Z" level=info msg="CreateContainer within sandbox \"ca3731175868405a0085cfb59b27e0ec687d49cd02c6b1ece8929600ed62a876\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 14 18:14:44.737365 containerd[1523]: time="2025-05-14T18:14:44.737133887Z" level=info msg="Container 36399c4564023e5d1af359218addbb992fbfe7258de0b262d5da1070706162ea: CDI devices from CRI Config.CDIDevices: []" May 14 18:14:44.747503 containerd[1523]: time="2025-05-14T18:14:44.747432345Z" level=info msg="CreateContainer within sandbox \"ca3731175868405a0085cfb59b27e0ec687d49cd02c6b1ece8929600ed62a876\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"36399c4564023e5d1af359218addbb992fbfe7258de0b262d5da1070706162ea\"" May 14 18:14:44.748866 containerd[1523]: time="2025-05-14T18:14:44.748397530Z" level=info msg="StartContainer for \"36399c4564023e5d1af359218addbb992fbfe7258de0b262d5da1070706162ea\"" May 14 18:14:44.749973 containerd[1523]: time="2025-05-14T18:14:44.749933016Z" level=info msg="connecting to shim 
36399c4564023e5d1af359218addbb992fbfe7258de0b262d5da1070706162ea" address="unix:///run/containerd/s/646c2b7f45907b8d0d511f6d8b4dff15a5e40f51362680a56bedfd940a261b6b" protocol=ttrpc version=3 May 14 18:14:44.777205 systemd[1]: Started cri-containerd-36399c4564023e5d1af359218addbb992fbfe7258de0b262d5da1070706162ea.scope - libcontainer container 36399c4564023e5d1af359218addbb992fbfe7258de0b262d5da1070706162ea. May 14 18:14:44.914858 containerd[1523]: time="2025-05-14T18:14:44.914754673Z" level=info msg="StartContainer for \"36399c4564023e5d1af359218addbb992fbfe7258de0b262d5da1070706162ea\" returns successfully" May 14 18:14:45.298202 kubelet[2626]: E0514 18:14:45.298133 2626 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sg55z" podUID="2b1d9552-b1a9-492e-abe0-6d929715d5ec" May 14 18:14:45.395429 kubelet[2626]: I0514 18:14:45.395346 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5c45c7f86c-m6g2v" podStartSLOduration=6.099895025 podStartE2EDuration="8.395327054s" podCreationTimestamp="2025-05-14 18:14:37 +0000 UTC" firstStartedPulling="2025-05-14 18:14:37.845436678 +0000 UTC m=+13.621450698" lastFinishedPulling="2025-05-14 18:14:40.140868707 +0000 UTC m=+15.916882727" observedRunningTime="2025-05-14 18:14:40.373009598 +0000 UTC m=+16.149023658" watchObservedRunningTime="2025-05-14 18:14:45.395327054 +0000 UTC m=+21.171341114" May 14 18:14:45.418843 containerd[1523]: time="2025-05-14T18:14:45.418782851Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 18:14:45.421102 systemd[1]: 
cri-containerd-36399c4564023e5d1af359218addbb992fbfe7258de0b262d5da1070706162ea.scope: Deactivated successfully. May 14 18:14:45.421369 systemd[1]: cri-containerd-36399c4564023e5d1af359218addbb992fbfe7258de0b262d5da1070706162ea.scope: Consumed 446ms CPU time, 160.3M memory peak, 4K read from disk, 150.3M written to disk. May 14 18:14:45.421903 containerd[1523]: time="2025-05-14T18:14:45.421857202Z" level=info msg="received exit event container_id:\"36399c4564023e5d1af359218addbb992fbfe7258de0b262d5da1070706162ea\" id:\"36399c4564023e5d1af359218addbb992fbfe7258de0b262d5da1070706162ea\" pid:3324 exited_at:{seconds:1747246485 nanos:421633176}" May 14 18:14:45.422012 containerd[1523]: time="2025-05-14T18:14:45.421986681Z" level=info msg="TaskExit event in podsandbox handler container_id:\"36399c4564023e5d1af359218addbb992fbfe7258de0b262d5da1070706162ea\" id:\"36399c4564023e5d1af359218addbb992fbfe7258de0b262d5da1070706162ea\" pid:3324 exited_at:{seconds:1747246485 nanos:421633176}" May 14 18:14:45.434939 kubelet[2626]: I0514 18:14:45.434891 2626 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 14 18:14:45.452215 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-36399c4564023e5d1af359218addbb992fbfe7258de0b262d5da1070706162ea-rootfs.mount: Deactivated successfully. May 14 18:14:45.470516 systemd[1]: Created slice kubepods-besteffort-podc03aac11_237f_4f5f_95ac_fc014337d597.slice - libcontainer container kubepods-besteffort-podc03aac11_237f_4f5f_95ac_fc014337d597.slice. May 14 18:14:45.481050 systemd[1]: Created slice kubepods-burstable-pod771eb332_fe07_45b5_9848_e4cab057df0b.slice - libcontainer container kubepods-burstable-pod771eb332_fe07_45b5_9848_e4cab057df0b.slice. May 14 18:14:45.490051 systemd[1]: Created slice kubepods-besteffort-pod16aa4105_e2a7_46b7_982b_abcf1282d71f.slice - libcontainer container kubepods-besteffort-pod16aa4105_e2a7_46b7_982b_abcf1282d71f.slice. 
May 14 18:14:45.497026 systemd[1]: Created slice kubepods-besteffort-poda2abc07a_c489_4f6d_83c3_a98e091c2fd2.slice - libcontainer container kubepods-besteffort-poda2abc07a_c489_4f6d_83c3_a98e091c2fd2.slice. May 14 18:14:45.506523 systemd[1]: Created slice kubepods-burstable-podac9d5dd4_bdd4_4158_9256_0d330d2d0532.slice - libcontainer container kubepods-burstable-podac9d5dd4_bdd4_4158_9256_0d330d2d0532.slice. May 14 18:14:45.512386 systemd[1]: Created slice kubepods-besteffort-podd8cb263b_108a_4ea1_98b4_0c6a7ead7022.slice - libcontainer container kubepods-besteffort-podd8cb263b_108a_4ea1_98b4_0c6a7ead7022.slice. May 14 18:14:45.562259 kubelet[2626]: I0514 18:14:45.561751 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2abc07a-c489-4f6d-83c3-a98e091c2fd2-tigera-ca-bundle\") pod \"calico-kube-controllers-85fb4f869d-ch7ss\" (UID: \"a2abc07a-c489-4f6d-83c3-a98e091c2fd2\") " pod="calico-system/calico-kube-controllers-85fb4f869d-ch7ss" May 14 18:14:45.562259 kubelet[2626]: I0514 18:14:45.561798 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj6cb\" (UniqueName: \"kubernetes.io/projected/a2abc07a-c489-4f6d-83c3-a98e091c2fd2-kube-api-access-pj6cb\") pod \"calico-kube-controllers-85fb4f869d-ch7ss\" (UID: \"a2abc07a-c489-4f6d-83c3-a98e091c2fd2\") " pod="calico-system/calico-kube-controllers-85fb4f869d-ch7ss" May 14 18:14:45.562259 kubelet[2626]: I0514 18:14:45.561819 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/771eb332-fe07-45b5-9848-e4cab057df0b-config-volume\") pod \"coredns-6f6b679f8f-bfxr8\" (UID: \"771eb332-fe07-45b5-9848-e4cab057df0b\") " pod="kube-system/coredns-6f6b679f8f-bfxr8" May 14 18:14:45.562259 kubelet[2626]: I0514 18:14:45.561864 2626 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c03aac11-237f-4f5f-95ac-fc014337d597-calico-apiserver-certs\") pod \"calico-apiserver-576d749fbb-j4vhz\" (UID: \"c03aac11-237f-4f5f-95ac-fc014337d597\") " pod="calico-apiserver/calico-apiserver-576d749fbb-j4vhz" May 14 18:14:45.562259 kubelet[2626]: I0514 18:14:45.561900 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7dbz\" (UniqueName: \"kubernetes.io/projected/c03aac11-237f-4f5f-95ac-fc014337d597-kube-api-access-l7dbz\") pod \"calico-apiserver-576d749fbb-j4vhz\" (UID: \"c03aac11-237f-4f5f-95ac-fc014337d597\") " pod="calico-apiserver/calico-apiserver-576d749fbb-j4vhz" May 14 18:14:45.562500 kubelet[2626]: I0514 18:14:45.561917 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5r74\" (UniqueName: \"kubernetes.io/projected/d8cb263b-108a-4ea1-98b4-0c6a7ead7022-kube-api-access-g5r74\") pod \"calico-apiserver-5976b9bcc4-mzcb9\" (UID: \"d8cb263b-108a-4ea1-98b4-0c6a7ead7022\") " pod="calico-apiserver/calico-apiserver-5976b9bcc4-mzcb9" May 14 18:14:45.562500 kubelet[2626]: I0514 18:14:45.561938 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccq8x\" (UniqueName: \"kubernetes.io/projected/16aa4105-e2a7-46b7-982b-abcf1282d71f-kube-api-access-ccq8x\") pod \"calico-apiserver-576d749fbb-vrk6m\" (UID: \"16aa4105-e2a7-46b7-982b-abcf1282d71f\") " pod="calico-apiserver/calico-apiserver-576d749fbb-vrk6m" May 14 18:14:45.562500 kubelet[2626]: I0514 18:14:45.561964 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d8cb263b-108a-4ea1-98b4-0c6a7ead7022-calico-apiserver-certs\") pod \"calico-apiserver-5976b9bcc4-mzcb9\" (UID: 
\"d8cb263b-108a-4ea1-98b4-0c6a7ead7022\") " pod="calico-apiserver/calico-apiserver-5976b9bcc4-mzcb9" May 14 18:14:45.562500 kubelet[2626]: I0514 18:14:45.561984 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7pjd\" (UniqueName: \"kubernetes.io/projected/771eb332-fe07-45b5-9848-e4cab057df0b-kube-api-access-r7pjd\") pod \"coredns-6f6b679f8f-bfxr8\" (UID: \"771eb332-fe07-45b5-9848-e4cab057df0b\") " pod="kube-system/coredns-6f6b679f8f-bfxr8" May 14 18:14:45.562500 kubelet[2626]: I0514 18:14:45.561999 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbfxr\" (UniqueName: \"kubernetes.io/projected/ac9d5dd4-bdd4-4158-9256-0d330d2d0532-kube-api-access-pbfxr\") pod \"coredns-6f6b679f8f-jfcjv\" (UID: \"ac9d5dd4-bdd4-4158-9256-0d330d2d0532\") " pod="kube-system/coredns-6f6b679f8f-jfcjv" May 14 18:14:45.562600 kubelet[2626]: I0514 18:14:45.562018 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac9d5dd4-bdd4-4158-9256-0d330d2d0532-config-volume\") pod \"coredns-6f6b679f8f-jfcjv\" (UID: \"ac9d5dd4-bdd4-4158-9256-0d330d2d0532\") " pod="kube-system/coredns-6f6b679f8f-jfcjv" May 14 18:14:45.562600 kubelet[2626]: I0514 18:14:45.562039 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/16aa4105-e2a7-46b7-982b-abcf1282d71f-calico-apiserver-certs\") pod \"calico-apiserver-576d749fbb-vrk6m\" (UID: \"16aa4105-e2a7-46b7-982b-abcf1282d71f\") " pod="calico-apiserver/calico-apiserver-576d749fbb-vrk6m" May 14 18:14:45.778453 containerd[1523]: time="2025-05-14T18:14:45.778324916Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-576d749fbb-j4vhz,Uid:c03aac11-237f-4f5f-95ac-fc014337d597,Namespace:calico-apiserver,Attempt:0,}" May 14 18:14:45.791056 containerd[1523]: time="2025-05-14T18:14:45.791017680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bfxr8,Uid:771eb332-fe07-45b5-9848-e4cab057df0b,Namespace:kube-system,Attempt:0,}" May 14 18:14:45.793467 containerd[1523]: time="2025-05-14T18:14:45.793439158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-576d749fbb-vrk6m,Uid:16aa4105-e2a7-46b7-982b-abcf1282d71f,Namespace:calico-apiserver,Attempt:0,}" May 14 18:14:45.805680 containerd[1523]: time="2025-05-14T18:14:45.805639776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85fb4f869d-ch7ss,Uid:a2abc07a-c489-4f6d-83c3-a98e091c2fd2,Namespace:calico-system,Attempt:0,}" May 14 18:14:45.829362 containerd[1523]: time="2025-05-14T18:14:45.829134664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5976b9bcc4-mzcb9,Uid:d8cb263b-108a-4ea1-98b4-0c6a7ead7022,Namespace:calico-apiserver,Attempt:0,}" May 14 18:14:45.830970 containerd[1523]: time="2025-05-14T18:14:45.830851133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-jfcjv,Uid:ac9d5dd4-bdd4-4158-9256-0d330d2d0532,Namespace:kube-system,Attempt:0,}" May 14 18:14:46.349333 containerd[1523]: time="2025-05-14T18:14:46.349283976Z" level=error msg="Failed to destroy network for sandbox \"487c6e244e830496f45a870e1574868d0b8162959ef40981944536ac2d8d089c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:46.353602 containerd[1523]: time="2025-05-14T18:14:46.353535438Z" level=error msg="Failed to destroy network for sandbox \"9efac9a21fb854cfcc0e24e47bfc237bda98294fa846f6acbf27ddffe77e5bf7\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:46.355737 containerd[1523]: time="2025-05-14T18:14:46.355616737Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85fb4f869d-ch7ss,Uid:a2abc07a-c489-4f6d-83c3-a98e091c2fd2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"487c6e244e830496f45a870e1574868d0b8162959ef40981944536ac2d8d089c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:46.357825 containerd[1523]: time="2025-05-14T18:14:46.356529071Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-576d749fbb-vrk6m,Uid:16aa4105-e2a7-46b7-982b-abcf1282d71f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9efac9a21fb854cfcc0e24e47bfc237bda98294fa846f6acbf27ddffe77e5bf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:46.357946 kubelet[2626]: E0514 18:14:46.357278 2626 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9efac9a21fb854cfcc0e24e47bfc237bda98294fa846f6acbf27ddffe77e5bf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:46.357946 kubelet[2626]: E0514 18:14:46.357366 2626 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"9efac9a21fb854cfcc0e24e47bfc237bda98294fa846f6acbf27ddffe77e5bf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-576d749fbb-vrk6m" May 14 18:14:46.357946 kubelet[2626]: E0514 18:14:46.357386 2626 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9efac9a21fb854cfcc0e24e47bfc237bda98294fa846f6acbf27ddffe77e5bf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-576d749fbb-vrk6m" May 14 18:14:46.358335 kubelet[2626]: E0514 18:14:46.357439 2626 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-576d749fbb-vrk6m_calico-apiserver(16aa4105-e2a7-46b7-982b-abcf1282d71f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-576d749fbb-vrk6m_calico-apiserver(16aa4105-e2a7-46b7-982b-abcf1282d71f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9efac9a21fb854cfcc0e24e47bfc237bda98294fa846f6acbf27ddffe77e5bf7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-576d749fbb-vrk6m" podUID="16aa4105-e2a7-46b7-982b-abcf1282d71f" May 14 18:14:46.358335 kubelet[2626]: E0514 18:14:46.357700 2626 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"487c6e244e830496f45a870e1574868d0b8162959ef40981944536ac2d8d089c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:46.358335 kubelet[2626]: E0514 18:14:46.357737 2626 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"487c6e244e830496f45a870e1574868d0b8162959ef40981944536ac2d8d089c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85fb4f869d-ch7ss" May 14 18:14:46.358433 kubelet[2626]: E0514 18:14:46.357753 2626 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"487c6e244e830496f45a870e1574868d0b8162959ef40981944536ac2d8d089c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85fb4f869d-ch7ss" May 14 18:14:46.358433 kubelet[2626]: E0514 18:14:46.357786 2626 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-85fb4f869d-ch7ss_calico-system(a2abc07a-c489-4f6d-83c3-a98e091c2fd2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-85fb4f869d-ch7ss_calico-system(a2abc07a-c489-4f6d-83c3-a98e091c2fd2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"487c6e244e830496f45a870e1574868d0b8162959ef40981944536ac2d8d089c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85fb4f869d-ch7ss" podUID="a2abc07a-c489-4f6d-83c3-a98e091c2fd2" May 14 18:14:46.358858 
containerd[1523]: time="2025-05-14T18:14:46.358822788Z" level=error msg="Failed to destroy network for sandbox \"946370cee88d4f2114692b2c1b86aaf5318622fb53c60b9beae29cc610aa57b7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:46.359705 containerd[1523]: time="2025-05-14T18:14:46.359653499Z" level=error msg="Failed to destroy network for sandbox \"2d010f81441873e54b244c538250788d99110de485c1577f4df9b98798610e01\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:46.359904 containerd[1523]: time="2025-05-14T18:14:46.359685548Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-576d749fbb-j4vhz,Uid:c03aac11-237f-4f5f-95ac-fc014337d597,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"946370cee88d4f2114692b2c1b86aaf5318622fb53c60b9beae29cc610aa57b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:46.360173 kubelet[2626]: E0514 18:14:46.360135 2626 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"946370cee88d4f2114692b2c1b86aaf5318622fb53c60b9beae29cc610aa57b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:46.360254 kubelet[2626]: E0514 18:14:46.360186 2626 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"946370cee88d4f2114692b2c1b86aaf5318622fb53c60b9beae29cc610aa57b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-576d749fbb-j4vhz" May 14 18:14:46.360254 kubelet[2626]: E0514 18:14:46.360205 2626 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"946370cee88d4f2114692b2c1b86aaf5318622fb53c60b9beae29cc610aa57b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-576d749fbb-j4vhz" May 14 18:14:46.360307 kubelet[2626]: E0514 18:14:46.360248 2626 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-576d749fbb-j4vhz_calico-apiserver(c03aac11-237f-4f5f-95ac-fc014337d597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-576d749fbb-j4vhz_calico-apiserver(c03aac11-237f-4f5f-95ac-fc014337d597)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"946370cee88d4f2114692b2c1b86aaf5318622fb53c60b9beae29cc610aa57b7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-576d749fbb-j4vhz" podUID="c03aac11-237f-4f5f-95ac-fc014337d597" May 14 18:14:46.364071 containerd[1523]: time="2025-05-14T18:14:46.364021274Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-jfcjv,Uid:ac9d5dd4-bdd4-4158-9256-0d330d2d0532,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"2d010f81441873e54b244c538250788d99110de485c1577f4df9b98798610e01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:46.364301 kubelet[2626]: E0514 18:14:46.364272 2626 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d010f81441873e54b244c538250788d99110de485c1577f4df9b98798610e01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:46.364402 kubelet[2626]: E0514 18:14:46.364384 2626 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d010f81441873e54b244c538250788d99110de485c1577f4df9b98798610e01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-jfcjv" May 14 18:14:46.364618 kubelet[2626]: E0514 18:14:46.364502 2626 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d010f81441873e54b244c538250788d99110de485c1577f4df9b98798610e01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-jfcjv" May 14 18:14:46.364618 kubelet[2626]: E0514 18:14:46.364554 2626 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-jfcjv_kube-system(ac9d5dd4-bdd4-4158-9256-0d330d2d0532)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-6f6b679f8f-jfcjv_kube-system(ac9d5dd4-bdd4-4158-9256-0d330d2d0532)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2d010f81441873e54b244c538250788d99110de485c1577f4df9b98798610e01\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-jfcjv" podUID="ac9d5dd4-bdd4-4158-9256-0d330d2d0532" May 14 18:14:46.364744 containerd[1523]: time="2025-05-14T18:14:46.364694261Z" level=error msg="Failed to destroy network for sandbox \"1155de56d18e8751cf170dbc2994d43564665fefd8d40282dfe09048a7281388\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:46.366482 containerd[1523]: time="2025-05-14T18:14:46.366399335Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bfxr8,Uid:771eb332-fe07-45b5-9848-e4cab057df0b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1155de56d18e8751cf170dbc2994d43564665fefd8d40282dfe09048a7281388\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:46.366686 kubelet[2626]: E0514 18:14:46.366653 2626 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1155de56d18e8751cf170dbc2994d43564665fefd8d40282dfe09048a7281388\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:46.366737 kubelet[2626]: E0514 18:14:46.366707 2626 kuberuntime_sandbox.go:72] 
"Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1155de56d18e8751cf170dbc2994d43564665fefd8d40282dfe09048a7281388\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-bfxr8" May 14 18:14:46.366737 kubelet[2626]: E0514 18:14:46.366725 2626 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1155de56d18e8751cf170dbc2994d43564665fefd8d40282dfe09048a7281388\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-bfxr8" May 14 18:14:46.366857 kubelet[2626]: E0514 18:14:46.366760 2626 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-bfxr8_kube-system(771eb332-fe07-45b5-9848-e4cab057df0b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-bfxr8_kube-system(771eb332-fe07-45b5-9848-e4cab057df0b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1155de56d18e8751cf170dbc2994d43564665fefd8d40282dfe09048a7281388\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-bfxr8" podUID="771eb332-fe07-45b5-9848-e4cab057df0b" May 14 18:14:46.367650 containerd[1523]: time="2025-05-14T18:14:46.367555376Z" level=error msg="Failed to destroy network for sandbox \"198c52627dfd520a260b38bf678cfdf271292fac440d4acc4019b8409ea61ffd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:46.371000 containerd[1523]: time="2025-05-14T18:14:46.370421813Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5976b9bcc4-mzcb9,Uid:d8cb263b-108a-4ea1-98b4-0c6a7ead7022,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"198c52627dfd520a260b38bf678cfdf271292fac440d4acc4019b8409ea61ffd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:46.371150 kubelet[2626]: E0514 18:14:46.370650 2626 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"198c52627dfd520a260b38bf678cfdf271292fac440d4acc4019b8409ea61ffd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:46.371150 kubelet[2626]: E0514 18:14:46.370693 2626 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"198c52627dfd520a260b38bf678cfdf271292fac440d4acc4019b8409ea61ffd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5976b9bcc4-mzcb9" May 14 18:14:46.371150 kubelet[2626]: E0514 18:14:46.370720 2626 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"198c52627dfd520a260b38bf678cfdf271292fac440d4acc4019b8409ea61ffd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5976b9bcc4-mzcb9" May 14 18:14:46.371379 kubelet[2626]: E0514 18:14:46.370751 2626 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5976b9bcc4-mzcb9_calico-apiserver(d8cb263b-108a-4ea1-98b4-0c6a7ead7022)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5976b9bcc4-mzcb9_calico-apiserver(d8cb263b-108a-4ea1-98b4-0c6a7ead7022)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"198c52627dfd520a260b38bf678cfdf271292fac440d4acc4019b8409ea61ffd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5976b9bcc4-mzcb9" podUID="d8cb263b-108a-4ea1-98b4-0c6a7ead7022" May 14 18:14:46.382218 containerd[1523]: time="2025-05-14T18:14:46.382167799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 14 18:14:46.736204 systemd[1]: run-netns-cni\x2d3af59fa2\x2d33e4\x2df88a\x2d59a5\x2d8e2e20c8779a.mount: Deactivated successfully. May 14 18:14:46.736298 systemd[1]: run-netns-cni\x2d0eb068be\x2d3de7\x2d7583\x2d3f68\x2d8aa0c02c5cec.mount: Deactivated successfully. May 14 18:14:46.736341 systemd[1]: run-netns-cni\x2dadd28023\x2d26d4\x2dfd40\x2defcf\x2d029cd6f068d3.mount: Deactivated successfully. May 14 18:14:46.736384 systemd[1]: run-netns-cni\x2d569de36a\x2dc932\x2dcecf\x2d3a03\x2d0e8a62b0fc48.mount: Deactivated successfully. May 14 18:14:47.304243 systemd[1]: Created slice kubepods-besteffort-pod2b1d9552_b1a9_492e_abe0_6d929715d5ec.slice - libcontainer container kubepods-besteffort-pod2b1d9552_b1a9_492e_abe0_6d929715d5ec.slice. 
May 14 18:14:47.306331 containerd[1523]: time="2025-05-14T18:14:47.306295109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sg55z,Uid:2b1d9552-b1a9-492e-abe0-6d929715d5ec,Namespace:calico-system,Attempt:0,}" May 14 18:14:47.352885 containerd[1523]: time="2025-05-14T18:14:47.352821717Z" level=error msg="Failed to destroy network for sandbox \"7fe4af3cbdc8ae93538b9fedd56d5bfa9405c0eb93ca6c659297c3dbe4423d40\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:47.354064 containerd[1523]: time="2025-05-14T18:14:47.354003425Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sg55z,Uid:2b1d9552-b1a9-492e-abe0-6d929715d5ec,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fe4af3cbdc8ae93538b9fedd56d5bfa9405c0eb93ca6c659297c3dbe4423d40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:47.354286 kubelet[2626]: E0514 18:14:47.354232 2626 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fe4af3cbdc8ae93538b9fedd56d5bfa9405c0eb93ca6c659297c3dbe4423d40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:47.354353 kubelet[2626]: E0514 18:14:47.354308 2626 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fe4af3cbdc8ae93538b9fedd56d5bfa9405c0eb93ca6c659297c3dbe4423d40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sg55z" May 14 18:14:47.354353 kubelet[2626]: E0514 18:14:47.354327 2626 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fe4af3cbdc8ae93538b9fedd56d5bfa9405c0eb93ca6c659297c3dbe4423d40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sg55z" May 14 18:14:47.354402 kubelet[2626]: E0514 18:14:47.354376 2626 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-sg55z_calico-system(2b1d9552-b1a9-492e-abe0-6d929715d5ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-sg55z_calico-system(2b1d9552-b1a9-492e-abe0-6d929715d5ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7fe4af3cbdc8ae93538b9fedd56d5bfa9405c0eb93ca6c659297c3dbe4423d40\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sg55z" podUID="2b1d9552-b1a9-492e-abe0-6d929715d5ec" May 14 18:14:47.354732 systemd[1]: run-netns-cni\x2d69b42288\x2db17a\x2da210\x2df099\x2d17a91f6cc09a.mount: Deactivated successfully. May 14 18:14:49.591710 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount697899288.mount: Deactivated successfully. 
May 14 18:14:49.911066 containerd[1523]: time="2025-05-14T18:14:49.892454430Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" May 14 18:14:49.911066 containerd[1523]: time="2025-05-14T18:14:49.895306403Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 3.513089072s" May 14 18:14:49.911544 containerd[1523]: time="2025-05-14T18:14:49.911083098Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" May 14 18:14:49.911544 containerd[1523]: time="2025-05-14T18:14:49.908989338Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:49.912148 containerd[1523]: time="2025-05-14T18:14:49.912023033Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:49.913805 containerd[1523]: time="2025-05-14T18:14:49.913250834Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:49.921108 containerd[1523]: time="2025-05-14T18:14:49.921056903Z" level=info msg="CreateContainer within sandbox \"ca3731175868405a0085cfb59b27e0ec687d49cd02c6b1ece8929600ed62a876\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 14 18:14:49.941998 containerd[1523]: time="2025-05-14T18:14:49.941523592Z" level=info msg="Container 
232cb99bc8634b4d61e0642710c3ec0a96e229d13b290f122a058fd9da460957: CDI devices from CRI Config.CDIDevices: []" May 14 18:14:49.973579 containerd[1523]: time="2025-05-14T18:14:49.973514401Z" level=info msg="CreateContainer within sandbox \"ca3731175868405a0085cfb59b27e0ec687d49cd02c6b1ece8929600ed62a876\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"232cb99bc8634b4d61e0642710c3ec0a96e229d13b290f122a058fd9da460957\"" May 14 18:14:49.974134 containerd[1523]: time="2025-05-14T18:14:49.974104176Z" level=info msg="StartContainer for \"232cb99bc8634b4d61e0642710c3ec0a96e229d13b290f122a058fd9da460957\"" May 14 18:14:49.975874 containerd[1523]: time="2025-05-14T18:14:49.975834132Z" level=info msg="connecting to shim 232cb99bc8634b4d61e0642710c3ec0a96e229d13b290f122a058fd9da460957" address="unix:///run/containerd/s/646c2b7f45907b8d0d511f6d8b4dff15a5e40f51362680a56bedfd940a261b6b" protocol=ttrpc version=3 May 14 18:14:49.999175 systemd[1]: Started cri-containerd-232cb99bc8634b4d61e0642710c3ec0a96e229d13b290f122a058fd9da460957.scope - libcontainer container 232cb99bc8634b4d61e0642710c3ec0a96e229d13b290f122a058fd9da460957. May 14 18:14:50.050920 containerd[1523]: time="2025-05-14T18:14:50.050830920Z" level=info msg="StartContainer for \"232cb99bc8634b4d61e0642710c3ec0a96e229d13b290f122a058fd9da460957\" returns successfully" May 14 18:14:50.206570 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 14 18:14:50.206672 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
May 14 18:14:50.418148 kubelet[2626]: I0514 18:14:50.418083 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-srp7v" podStartSLOduration=1.351290956 podStartE2EDuration="13.418066997s" podCreationTimestamp="2025-05-14 18:14:37 +0000 UTC" firstStartedPulling="2025-05-14 18:14:37.845427714 +0000 UTC m=+13.621441774" lastFinishedPulling="2025-05-14 18:14:49.912203795 +0000 UTC m=+25.688217815" observedRunningTime="2025-05-14 18:14:50.412406741 +0000 UTC m=+26.188420801" watchObservedRunningTime="2025-05-14 18:14:50.418066997 +0000 UTC m=+26.194081057" May 14 18:14:50.518533 containerd[1523]: time="2025-05-14T18:14:50.518427633Z" level=info msg="TaskExit event in podsandbox handler container_id:\"232cb99bc8634b4d61e0642710c3ec0a96e229d13b290f122a058fd9da460957\" id:\"4cb07770948db08dac7fb0135e1e1d34cd402a6855bd7a2d25228e52834d9f1c\" pid:3704 exit_status:1 exited_at:{seconds:1747246490 nanos:517891798}" May 14 18:14:51.518357 containerd[1523]: time="2025-05-14T18:14:51.518221842Z" level=info msg="TaskExit event in podsandbox handler container_id:\"232cb99bc8634b4d61e0642710c3ec0a96e229d13b290f122a058fd9da460957\" id:\"b126579cdafc7d7231ccfe2411cb9dd61d347acd9c1bfd01d3f77f8f0c4efebd\" pid:3728 exit_status:1 exited_at:{seconds:1747246491 nanos:517961270}" May 14 18:14:52.457056 containerd[1523]: time="2025-05-14T18:14:52.456937647Z" level=info msg="TaskExit event in podsandbox handler container_id:\"232cb99bc8634b4d61e0642710c3ec0a96e229d13b290f122a058fd9da460957\" id:\"4683fa0fe017c4ec0fdda66563e88e7e81fbf07dc4a5eec09e48fed6d0c7d57f\" pid:3851 exit_status:1 exited_at:{seconds:1747246492 nanos:456648592}" May 14 18:14:56.160250 systemd[1]: Started sshd@7-10.0.0.119:22-10.0.0.1:40408.service - OpenSSH per-connection server daemon (10.0.0.1:40408). 
May 14 18:14:56.208965 sshd[3961]: Accepted publickey for core from 10.0.0.1 port 40408 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo May 14 18:14:56.210363 sshd-session[3961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:14:56.215385 systemd-logind[1500]: New session 8 of user core. May 14 18:14:56.226098 systemd[1]: Started session-8.scope - Session 8 of User core. May 14 18:14:56.352535 sshd[3963]: Connection closed by 10.0.0.1 port 40408 May 14 18:14:56.353048 sshd-session[3961]: pam_unix(sshd:session): session closed for user core May 14 18:14:56.356350 systemd[1]: sshd@7-10.0.0.119:22-10.0.0.1:40408.service: Deactivated successfully. May 14 18:14:56.358149 systemd[1]: session-8.scope: Deactivated successfully. May 14 18:14:56.358868 systemd-logind[1500]: Session 8 logged out. Waiting for processes to exit. May 14 18:14:56.360264 systemd-logind[1500]: Removed session 8. May 14 18:14:57.298557 containerd[1523]: time="2025-05-14T18:14:57.298499826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bfxr8,Uid:771eb332-fe07-45b5-9848-e4cab057df0b,Namespace:kube-system,Attempt:0,}" May 14 18:14:57.299160 containerd[1523]: time="2025-05-14T18:14:57.298509668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85fb4f869d-ch7ss,Uid:a2abc07a-c489-4f6d-83c3-a98e091c2fd2,Namespace:calico-system,Attempt:0,}" May 14 18:14:57.583369 systemd-networkd[1432]: cali8ba88036d99: Link UP May 14 18:14:57.584005 systemd-networkd[1432]: cali8ba88036d99: Gained carrier May 14 18:14:57.600638 containerd[1523]: 2025-05-14 18:14:57.323 [INFO][4005] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 14 18:14:57.600638 containerd[1523]: 2025-05-14 18:14:57.421 [INFO][4005] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--bfxr8-eth0 coredns-6f6b679f8f- kube-system 
771eb332-fe07-45b5-9848-e4cab057df0b 681 0 2025-05-14 18:14:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-bfxr8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8ba88036d99 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="4642f89fd4758cc243f28fde99b1ed9f33eade1c45f2334cbc8b0ab48a857119" Namespace="kube-system" Pod="coredns-6f6b679f8f-bfxr8" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--bfxr8-" May 14 18:14:57.600638 containerd[1523]: 2025-05-14 18:14:57.421 [INFO][4005] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4642f89fd4758cc243f28fde99b1ed9f33eade1c45f2334cbc8b0ab48a857119" Namespace="kube-system" Pod="coredns-6f6b679f8f-bfxr8" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--bfxr8-eth0" May 14 18:14:57.600638 containerd[1523]: 2025-05-14 18:14:57.530 [INFO][4035] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4642f89fd4758cc243f28fde99b1ed9f33eade1c45f2334cbc8b0ab48a857119" HandleID="k8s-pod-network.4642f89fd4758cc243f28fde99b1ed9f33eade1c45f2334cbc8b0ab48a857119" Workload="localhost-k8s-coredns--6f6b679f8f--bfxr8-eth0" May 14 18:14:57.601165 containerd[1523]: 2025-05-14 18:14:57.547 [INFO][4035] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4642f89fd4758cc243f28fde99b1ed9f33eade1c45f2334cbc8b0ab48a857119" HandleID="k8s-pod-network.4642f89fd4758cc243f28fde99b1ed9f33eade1c45f2334cbc8b0ab48a857119" Workload="localhost-k8s-coredns--6f6b679f8f--bfxr8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d9da0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-bfxr8", "timestamp":"2025-05-14 18:14:57.530246431 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 18:14:57.601165 containerd[1523]: 2025-05-14 18:14:57.547 [INFO][4035] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 18:14:57.601165 containerd[1523]: 2025-05-14 18:14:57.547 [INFO][4035] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 18:14:57.601165 containerd[1523]: 2025-05-14 18:14:57.547 [INFO][4035] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 18:14:57.601165 containerd[1523]: 2025-05-14 18:14:57.549 [INFO][4035] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4642f89fd4758cc243f28fde99b1ed9f33eade1c45f2334cbc8b0ab48a857119" host="localhost" May 14 18:14:57.601165 containerd[1523]: 2025-05-14 18:14:57.554 [INFO][4035] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 18:14:57.601165 containerd[1523]: 2025-05-14 18:14:57.559 [INFO][4035] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 18:14:57.601165 containerd[1523]: 2025-05-14 18:14:57.560 [INFO][4035] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 18:14:57.601165 containerd[1523]: 2025-05-14 18:14:57.562 [INFO][4035] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 18:14:57.601165 containerd[1523]: 2025-05-14 18:14:57.563 [INFO][4035] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4642f89fd4758cc243f28fde99b1ed9f33eade1c45f2334cbc8b0ab48a857119" host="localhost" May 14 18:14:57.601402 containerd[1523]: 2025-05-14 18:14:57.566 [INFO][4035] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4642f89fd4758cc243f28fde99b1ed9f33eade1c45f2334cbc8b0ab48a857119 May 14 18:14:57.601402 
containerd[1523]: 2025-05-14 18:14:57.569 [INFO][4035] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4642f89fd4758cc243f28fde99b1ed9f33eade1c45f2334cbc8b0ab48a857119" host="localhost" May 14 18:14:57.601402 containerd[1523]: 2025-05-14 18:14:57.574 [INFO][4035] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.4642f89fd4758cc243f28fde99b1ed9f33eade1c45f2334cbc8b0ab48a857119" host="localhost" May 14 18:14:57.601402 containerd[1523]: 2025-05-14 18:14:57.574 [INFO][4035] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.4642f89fd4758cc243f28fde99b1ed9f33eade1c45f2334cbc8b0ab48a857119" host="localhost" May 14 18:14:57.601402 containerd[1523]: 2025-05-14 18:14:57.574 [INFO][4035] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 18:14:57.601402 containerd[1523]: 2025-05-14 18:14:57.574 [INFO][4035] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="4642f89fd4758cc243f28fde99b1ed9f33eade1c45f2334cbc8b0ab48a857119" HandleID="k8s-pod-network.4642f89fd4758cc243f28fde99b1ed9f33eade1c45f2334cbc8b0ab48a857119" Workload="localhost-k8s-coredns--6f6b679f8f--bfxr8-eth0" May 14 18:14:57.601509 containerd[1523]: 2025-05-14 18:14:57.576 [INFO][4005] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4642f89fd4758cc243f28fde99b1ed9f33eade1c45f2334cbc8b0ab48a857119" Namespace="kube-system" Pod="coredns-6f6b679f8f-bfxr8" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--bfxr8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--bfxr8-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"771eb332-fe07-45b5-9848-e4cab057df0b", ResourceVersion:"681", Generation:0, 
CreationTimestamp:time.Date(2025, time.May, 14, 18, 14, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-bfxr8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8ba88036d99", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:14:57.601562 containerd[1523]: 2025-05-14 18:14:57.576 [INFO][4005] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="4642f89fd4758cc243f28fde99b1ed9f33eade1c45f2334cbc8b0ab48a857119" Namespace="kube-system" Pod="coredns-6f6b679f8f-bfxr8" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--bfxr8-eth0" May 14 18:14:57.601562 containerd[1523]: 2025-05-14 18:14:57.576 [INFO][4005] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8ba88036d99 ContainerID="4642f89fd4758cc243f28fde99b1ed9f33eade1c45f2334cbc8b0ab48a857119" Namespace="kube-system" 
Pod="coredns-6f6b679f8f-bfxr8" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--bfxr8-eth0" May 14 18:14:57.601562 containerd[1523]: 2025-05-14 18:14:57.584 [INFO][4005] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4642f89fd4758cc243f28fde99b1ed9f33eade1c45f2334cbc8b0ab48a857119" Namespace="kube-system" Pod="coredns-6f6b679f8f-bfxr8" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--bfxr8-eth0" May 14 18:14:57.601619 containerd[1523]: 2025-05-14 18:14:57.584 [INFO][4005] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4642f89fd4758cc243f28fde99b1ed9f33eade1c45f2334cbc8b0ab48a857119" Namespace="kube-system" Pod="coredns-6f6b679f8f-bfxr8" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--bfxr8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--bfxr8-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"771eb332-fe07-45b5-9848-e4cab057df0b", ResourceVersion:"681", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 14, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4642f89fd4758cc243f28fde99b1ed9f33eade1c45f2334cbc8b0ab48a857119", Pod:"coredns-6f6b679f8f-bfxr8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8ba88036d99", MAC:"9a:bb:13:c0:0d:17", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:14:57.601619 containerd[1523]: 2025-05-14 18:14:57.595 [INFO][4005] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4642f89fd4758cc243f28fde99b1ed9f33eade1c45f2334cbc8b0ab48a857119" Namespace="kube-system" Pod="coredns-6f6b679f8f-bfxr8" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--bfxr8-eth0" May 14 18:14:57.655586 containerd[1523]: time="2025-05-14T18:14:57.655502316Z" level=info msg="connecting to shim 4642f89fd4758cc243f28fde99b1ed9f33eade1c45f2334cbc8b0ab48a857119" address="unix:///run/containerd/s/bd9c776a4d3da6229bc41ab95cc3c17f24bd6c15454889843203bee9327ce7eb" namespace=k8s.io protocol=ttrpc version=3 May 14 18:14:57.683490 systemd-networkd[1432]: cali067b29c4684: Link UP May 14 18:14:57.684177 systemd-networkd[1432]: cali067b29c4684: Gained carrier May 14 18:14:57.689327 systemd[1]: Started cri-containerd-4642f89fd4758cc243f28fde99b1ed9f33eade1c45f2334cbc8b0ab48a857119.scope - libcontainer container 4642f89fd4758cc243f28fde99b1ed9f33eade1c45f2334cbc8b0ab48a857119. 
May 14 18:14:57.702010 containerd[1523]: 2025-05-14 18:14:57.327 [INFO][4012] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 14 18:14:57.702010 containerd[1523]: 2025-05-14 18:14:57.423 [INFO][4012] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--85fb4f869d--ch7ss-eth0 calico-kube-controllers-85fb4f869d- calico-system a2abc07a-c489-4f6d-83c3-a98e091c2fd2 683 0 2025-05-14 18:14:37 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:85fb4f869d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-85fb4f869d-ch7ss eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali067b29c4684 [] []}} ContainerID="38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801" Namespace="calico-system" Pod="calico-kube-controllers-85fb4f869d-ch7ss" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85fb4f869d--ch7ss-" May 14 18:14:57.702010 containerd[1523]: 2025-05-14 18:14:57.423 [INFO][4012] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801" Namespace="calico-system" Pod="calico-kube-controllers-85fb4f869d-ch7ss" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85fb4f869d--ch7ss-eth0" May 14 18:14:57.702010 containerd[1523]: 2025-05-14 18:14:57.530 [INFO][4037] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801" HandleID="k8s-pod-network.38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801" Workload="localhost-k8s-calico--kube--controllers--85fb4f869d--ch7ss-eth0" May 14 18:14:57.702010 containerd[1523]: 2025-05-14 
18:14:57.549 [INFO][4037] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801" HandleID="k8s-pod-network.38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801" Workload="localhost-k8s-calico--kube--controllers--85fb4f869d--ch7ss-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000304b20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-85fb4f869d-ch7ss", "timestamp":"2025-05-14 18:14:57.530255432 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 18:14:57.702010 containerd[1523]: 2025-05-14 18:14:57.549 [INFO][4037] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 18:14:57.702010 containerd[1523]: 2025-05-14 18:14:57.574 [INFO][4037] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 18:14:57.702010 containerd[1523]: 2025-05-14 18:14:57.574 [INFO][4037] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 18:14:57.702010 containerd[1523]: 2025-05-14 18:14:57.651 [INFO][4037] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801" host="localhost" May 14 18:14:57.702010 containerd[1523]: 2025-05-14 18:14:57.656 [INFO][4037] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 18:14:57.702010 containerd[1523]: 2025-05-14 18:14:57.660 [INFO][4037] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 18:14:57.702010 containerd[1523]: 2025-05-14 18:14:57.662 [INFO][4037] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 18:14:57.702010 containerd[1523]: 2025-05-14 18:14:57.664 [INFO][4037] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 18:14:57.702010 containerd[1523]: 2025-05-14 18:14:57.664 [INFO][4037] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801" host="localhost" May 14 18:14:57.702010 containerd[1523]: 2025-05-14 18:14:57.666 [INFO][4037] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801 May 14 18:14:57.702010 containerd[1523]: 2025-05-14 18:14:57.671 [INFO][4037] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801" host="localhost" May 14 18:14:57.702010 containerd[1523]: 2025-05-14 18:14:57.678 [INFO][4037] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801" host="localhost" May 14 18:14:57.702010 containerd[1523]: 2025-05-14 18:14:57.678 [INFO][4037] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801" host="localhost" May 14 18:14:57.702010 containerd[1523]: 2025-05-14 18:14:57.678 [INFO][4037] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 18:14:57.702010 containerd[1523]: 2025-05-14 18:14:57.678 [INFO][4037] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801" HandleID="k8s-pod-network.38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801" Workload="localhost-k8s-calico--kube--controllers--85fb4f869d--ch7ss-eth0" May 14 18:14:57.702503 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 18:14:57.702920 containerd[1523]: 2025-05-14 18:14:57.681 [INFO][4012] cni-plugin/k8s.go 386: Populated endpoint ContainerID="38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801" Namespace="calico-system" Pod="calico-kube-controllers-85fb4f869d-ch7ss" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85fb4f869d--ch7ss-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--85fb4f869d--ch7ss-eth0", GenerateName:"calico-kube-controllers-85fb4f869d-", Namespace:"calico-system", SelfLink:"", UID:"a2abc07a-c489-4f6d-83c3-a98e091c2fd2", ResourceVersion:"683", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 14, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85fb4f869d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-85fb4f869d-ch7ss", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali067b29c4684", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:14:57.702920 containerd[1523]: 2025-05-14 18:14:57.682 [INFO][4012] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801" Namespace="calico-system" Pod="calico-kube-controllers-85fb4f869d-ch7ss" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85fb4f869d--ch7ss-eth0" May 14 18:14:57.702920 containerd[1523]: 2025-05-14 18:14:57.682 [INFO][4012] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali067b29c4684 ContainerID="38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801" Namespace="calico-system" Pod="calico-kube-controllers-85fb4f869d-ch7ss" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85fb4f869d--ch7ss-eth0" May 14 18:14:57.702920 containerd[1523]: 2025-05-14 18:14:57.684 [INFO][4012] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801" Namespace="calico-system" 
Pod="calico-kube-controllers-85fb4f869d-ch7ss" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85fb4f869d--ch7ss-eth0" May 14 18:14:57.702920 containerd[1523]: 2025-05-14 18:14:57.684 [INFO][4012] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801" Namespace="calico-system" Pod="calico-kube-controllers-85fb4f869d-ch7ss" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85fb4f869d--ch7ss-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--85fb4f869d--ch7ss-eth0", GenerateName:"calico-kube-controllers-85fb4f869d-", Namespace:"calico-system", SelfLink:"", UID:"a2abc07a-c489-4f6d-83c3-a98e091c2fd2", ResourceVersion:"683", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 14, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85fb4f869d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801", Pod:"calico-kube-controllers-85fb4f869d-ch7ss", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali067b29c4684", 
MAC:"92:d0:3f:41:4b:71", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:14:57.702920 containerd[1523]: 2025-05-14 18:14:57.698 [INFO][4012] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801" Namespace="calico-system" Pod="calico-kube-controllers-85fb4f869d-ch7ss" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85fb4f869d--ch7ss-eth0" May 14 18:14:57.728510 containerd[1523]: time="2025-05-14T18:14:57.728471973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bfxr8,Uid:771eb332-fe07-45b5-9848-e4cab057df0b,Namespace:kube-system,Attempt:0,} returns sandbox id \"4642f89fd4758cc243f28fde99b1ed9f33eade1c45f2334cbc8b0ab48a857119\"" May 14 18:14:57.733361 containerd[1523]: time="2025-05-14T18:14:57.733201259Z" level=info msg="CreateContainer within sandbox \"4642f89fd4758cc243f28fde99b1ed9f33eade1c45f2334cbc8b0ab48a857119\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 18:14:57.735306 containerd[1523]: time="2025-05-14T18:14:57.735270102Z" level=info msg="connecting to shim 38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801" address="unix:///run/containerd/s/9da1d2db7d962ddd0e1051f6031c392101c54278c3f137746b7627f2b9a8939d" namespace=k8s.io protocol=ttrpc version=3 May 14 18:14:57.743672 containerd[1523]: time="2025-05-14T18:14:57.743639446Z" level=info msg="Container fbfc59c14784a3f0d126186381caaddcb940bb134ea1f4376c0e216d43dfd149: CDI devices from CRI Config.CDIDevices: []" May 14 18:14:57.750554 containerd[1523]: time="2025-05-14T18:14:57.750513906Z" level=info msg="CreateContainer within sandbox \"4642f89fd4758cc243f28fde99b1ed9f33eade1c45f2334cbc8b0ab48a857119\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fbfc59c14784a3f0d126186381caaddcb940bb134ea1f4376c0e216d43dfd149\"" May 14 18:14:57.751394 containerd[1523]: 
time="2025-05-14T18:14:57.751367463Z" level=info msg="StartContainer for \"fbfc59c14784a3f0d126186381caaddcb940bb134ea1f4376c0e216d43dfd149\"" May 14 18:14:57.754279 containerd[1523]: time="2025-05-14T18:14:57.754232535Z" level=info msg="connecting to shim fbfc59c14784a3f0d126186381caaddcb940bb134ea1f4376c0e216d43dfd149" address="unix:///run/containerd/s/bd9c776a4d3da6229bc41ab95cc3c17f24bd6c15454889843203bee9327ce7eb" protocol=ttrpc version=3 May 14 18:14:57.759130 systemd[1]: Started cri-containerd-38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801.scope - libcontainer container 38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801. May 14 18:14:57.782108 systemd[1]: Started cri-containerd-fbfc59c14784a3f0d126186381caaddcb940bb134ea1f4376c0e216d43dfd149.scope - libcontainer container fbfc59c14784a3f0d126186381caaddcb940bb134ea1f4376c0e216d43dfd149. May 14 18:14:57.785980 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 18:14:57.814133 containerd[1523]: time="2025-05-14T18:14:57.814079237Z" level=info msg="StartContainer for \"fbfc59c14784a3f0d126186381caaddcb940bb134ea1f4376c0e216d43dfd149\" returns successfully" May 14 18:14:57.815169 containerd[1523]: time="2025-05-14T18:14:57.815138142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85fb4f869d-ch7ss,Uid:a2abc07a-c489-4f6d-83c3-a98e091c2fd2,Namespace:calico-system,Attempt:0,} returns sandbox id \"38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801\"" May 14 18:14:57.818199 containerd[1523]: time="2025-05-14T18:14:57.818171757Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 14 18:14:58.427535 kubelet[2626]: I0514 18:14:58.427098 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-bfxr8" podStartSLOduration=29.427050043 podStartE2EDuration="29.427050043s" 
podCreationTimestamp="2025-05-14 18:14:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:14:58.426574622 +0000 UTC m=+34.202588722" watchObservedRunningTime="2025-05-14 18:14:58.427050043 +0000 UTC m=+34.203064143" May 14 18:14:58.743084 systemd-networkd[1432]: cali067b29c4684: Gained IPv6LL May 14 18:14:58.807064 systemd-networkd[1432]: cali8ba88036d99: Gained IPv6LL May 14 18:14:59.156759 containerd[1523]: time="2025-05-14T18:14:59.156684822Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116" May 14 18:14:59.157818 containerd[1523]: time="2025-05-14T18:14:59.157743517Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:59.159215 containerd[1523]: time="2025-05-14T18:14:59.159186646Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:59.160356 containerd[1523]: time="2025-05-14T18:14:59.160100808Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:59.160633 containerd[1523]: time="2025-05-14T18:14:59.160585172Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 1.342382531s" May 14 18:14:59.160714 containerd[1523]: 
time="2025-05-14T18:14:59.160699662Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" May 14 18:14:59.171867 containerd[1523]: time="2025-05-14T18:14:59.171799538Z" level=info msg="CreateContainer within sandbox \"38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 14 18:14:59.178890 containerd[1523]: time="2025-05-14T18:14:59.177887485Z" level=info msg="Container 037739d7950b5b65568f92e7654fd50c795bfd4cd350b32b0377f45b5e9a3b2c: CDI devices from CRI Config.CDIDevices: []" May 14 18:14:59.184394 containerd[1523]: time="2025-05-14T18:14:59.184343664Z" level=info msg="CreateContainer within sandbox \"38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"037739d7950b5b65568f92e7654fd50c795bfd4cd350b32b0377f45b5e9a3b2c\"" May 14 18:14:59.185302 containerd[1523]: time="2025-05-14T18:14:59.185259827Z" level=info msg="StartContainer for \"037739d7950b5b65568f92e7654fd50c795bfd4cd350b32b0377f45b5e9a3b2c\"" May 14 18:14:59.186574 containerd[1523]: time="2025-05-14T18:14:59.186548342Z" level=info msg="connecting to shim 037739d7950b5b65568f92e7654fd50c795bfd4cd350b32b0377f45b5e9a3b2c" address="unix:///run/containerd/s/9da1d2db7d962ddd0e1051f6031c392101c54278c3f137746b7627f2b9a8939d" protocol=ttrpc version=3 May 14 18:14:59.205108 systemd[1]: Started cri-containerd-037739d7950b5b65568f92e7654fd50c795bfd4cd350b32b0377f45b5e9a3b2c.scope - libcontainer container 037739d7950b5b65568f92e7654fd50c795bfd4cd350b32b0377f45b5e9a3b2c. 
May 14 18:14:59.255297 containerd[1523]: time="2025-05-14T18:14:59.255263230Z" level=info msg="StartContainer for \"037739d7950b5b65568f92e7654fd50c795bfd4cd350b32b0377f45b5e9a3b2c\" returns successfully" May 14 18:14:59.445591 kubelet[2626]: I0514 18:14:59.445129 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-85fb4f869d-ch7ss" podStartSLOduration=21.101525576 podStartE2EDuration="22.445107911s" podCreationTimestamp="2025-05-14 18:14:37 +0000 UTC" firstStartedPulling="2025-05-14 18:14:57.817869635 +0000 UTC m=+33.593883695" lastFinishedPulling="2025-05-14 18:14:59.16145197 +0000 UTC m=+34.937466030" observedRunningTime="2025-05-14 18:14:59.44320074 +0000 UTC m=+35.219214800" watchObservedRunningTime="2025-05-14 18:14:59.445107911 +0000 UTC m=+35.221122011" May 14 18:14:59.465645 containerd[1523]: time="2025-05-14T18:14:59.465605311Z" level=info msg="TaskExit event in podsandbox handler container_id:\"037739d7950b5b65568f92e7654fd50c795bfd4cd350b32b0377f45b5e9a3b2c\" id:\"bb50f6aaf7a30ece7b87611c46f351d469512e2deb002379ef34f8d95763e622\" pid:4307 exited_at:{seconds:1747246499 nanos:465300684}" May 14 18:15:00.298310 containerd[1523]: time="2025-05-14T18:15:00.298182104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-576d749fbb-vrk6m,Uid:16aa4105-e2a7-46b7-982b-abcf1282d71f,Namespace:calico-apiserver,Attempt:0,}" May 14 18:15:00.298310 containerd[1523]: time="2025-05-14T18:15:00.298245669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-576d749fbb-j4vhz,Uid:c03aac11-237f-4f5f-95ac-fc014337d597,Namespace:calico-apiserver,Attempt:0,}" May 14 18:15:00.420492 systemd-networkd[1432]: cali0ca96547cb8: Link UP May 14 18:15:00.421548 systemd-networkd[1432]: cali0ca96547cb8: Gained carrier May 14 18:15:00.433760 containerd[1523]: 2025-05-14 18:15:00.321 [INFO][4344] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 14 
18:15:00.433760 containerd[1523]: 2025-05-14 18:15:00.336 [INFO][4344] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--576d749fbb--j4vhz-eth0 calico-apiserver-576d749fbb- calico-apiserver c03aac11-237f-4f5f-95ac-fc014337d597 680 0 2025-05-14 18:14:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:576d749fbb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-576d749fbb-j4vhz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0ca96547cb8 [] []}} ContainerID="277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10" Namespace="calico-apiserver" Pod="calico-apiserver-576d749fbb-j4vhz" WorkloadEndpoint="localhost-k8s-calico--apiserver--576d749fbb--j4vhz-" May 14 18:15:00.433760 containerd[1523]: 2025-05-14 18:15:00.336 [INFO][4344] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10" Namespace="calico-apiserver" Pod="calico-apiserver-576d749fbb-j4vhz" WorkloadEndpoint="localhost-k8s-calico--apiserver--576d749fbb--j4vhz-eth0" May 14 18:15:00.433760 containerd[1523]: 2025-05-14 18:15:00.374 [INFO][4371] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10" HandleID="k8s-pod-network.277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10" Workload="localhost-k8s-calico--apiserver--576d749fbb--j4vhz-eth0" May 14 18:15:00.433760 containerd[1523]: 2025-05-14 18:15:00.389 [INFO][4371] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10" 
HandleID="k8s-pod-network.277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10" Workload="localhost-k8s-calico--apiserver--576d749fbb--j4vhz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003637f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-576d749fbb-j4vhz", "timestamp":"2025-05-14 18:15:00.37457181 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 18:15:00.433760 containerd[1523]: 2025-05-14 18:15:00.389 [INFO][4371] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 18:15:00.433760 containerd[1523]: 2025-05-14 18:15:00.389 [INFO][4371] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 18:15:00.433760 containerd[1523]: 2025-05-14 18:15:00.389 [INFO][4371] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 18:15:00.433760 containerd[1523]: 2025-05-14 18:15:00.391 [INFO][4371] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10" host="localhost" May 14 18:15:00.433760 containerd[1523]: 2025-05-14 18:15:00.396 [INFO][4371] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 18:15:00.433760 containerd[1523]: 2025-05-14 18:15:00.401 [INFO][4371] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 18:15:00.433760 containerd[1523]: 2025-05-14 18:15:00.402 [INFO][4371] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 18:15:00.433760 containerd[1523]: 2025-05-14 18:15:00.404 [INFO][4371] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 18:15:00.433760 containerd[1523]: 
2025-05-14 18:15:00.405 [INFO][4371] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10" host="localhost" May 14 18:15:00.433760 containerd[1523]: 2025-05-14 18:15:00.406 [INFO][4371] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10 May 14 18:15:00.433760 containerd[1523]: 2025-05-14 18:15:00.409 [INFO][4371] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10" host="localhost" May 14 18:15:00.433760 containerd[1523]: 2025-05-14 18:15:00.415 [INFO][4371] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10" host="localhost" May 14 18:15:00.433760 containerd[1523]: 2025-05-14 18:15:00.415 [INFO][4371] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10" host="localhost" May 14 18:15:00.433760 containerd[1523]: 2025-05-14 18:15:00.415 [INFO][4371] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 14 18:15:00.433760 containerd[1523]: 2025-05-14 18:15:00.415 [INFO][4371] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10" HandleID="k8s-pod-network.277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10" Workload="localhost-k8s-calico--apiserver--576d749fbb--j4vhz-eth0" May 14 18:15:00.434344 containerd[1523]: 2025-05-14 18:15:00.418 [INFO][4344] cni-plugin/k8s.go 386: Populated endpoint ContainerID="277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10" Namespace="calico-apiserver" Pod="calico-apiserver-576d749fbb-j4vhz" WorkloadEndpoint="localhost-k8s-calico--apiserver--576d749fbb--j4vhz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--576d749fbb--j4vhz-eth0", GenerateName:"calico-apiserver-576d749fbb-", Namespace:"calico-apiserver", SelfLink:"", UID:"c03aac11-237f-4f5f-95ac-fc014337d597", ResourceVersion:"680", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 14, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"576d749fbb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-576d749fbb-j4vhz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0ca96547cb8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:15:00.434344 containerd[1523]: 2025-05-14 18:15:00.418 [INFO][4344] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10" Namespace="calico-apiserver" Pod="calico-apiserver-576d749fbb-j4vhz" WorkloadEndpoint="localhost-k8s-calico--apiserver--576d749fbb--j4vhz-eth0" May 14 18:15:00.434344 containerd[1523]: 2025-05-14 18:15:00.418 [INFO][4344] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0ca96547cb8 ContainerID="277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10" Namespace="calico-apiserver" Pod="calico-apiserver-576d749fbb-j4vhz" WorkloadEndpoint="localhost-k8s-calico--apiserver--576d749fbb--j4vhz-eth0" May 14 18:15:00.434344 containerd[1523]: 2025-05-14 18:15:00.421 [INFO][4344] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10" Namespace="calico-apiserver" Pod="calico-apiserver-576d749fbb-j4vhz" WorkloadEndpoint="localhost-k8s-calico--apiserver--576d749fbb--j4vhz-eth0" May 14 18:15:00.434344 containerd[1523]: 2025-05-14 18:15:00.421 [INFO][4344] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10" Namespace="calico-apiserver" Pod="calico-apiserver-576d749fbb-j4vhz" WorkloadEndpoint="localhost-k8s-calico--apiserver--576d749fbb--j4vhz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--576d749fbb--j4vhz-eth0", GenerateName:"calico-apiserver-576d749fbb-", Namespace:"calico-apiserver", SelfLink:"", 
UID:"c03aac11-237f-4f5f-95ac-fc014337d597", ResourceVersion:"680", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 14, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"576d749fbb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10", Pod:"calico-apiserver-576d749fbb-j4vhz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0ca96547cb8", MAC:"ae:83:5c:0d:0f:cc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:15:00.434344 containerd[1523]: 2025-05-14 18:15:00.430 [INFO][4344] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10" Namespace="calico-apiserver" Pod="calico-apiserver-576d749fbb-j4vhz" WorkloadEndpoint="localhost-k8s-calico--apiserver--576d749fbb--j4vhz-eth0" May 14 18:15:00.462817 containerd[1523]: time="2025-05-14T18:15:00.462737824Z" level=info msg="connecting to shim 277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10" address="unix:///run/containerd/s/e98a81b0c69a0608b2495aea7df753f02f4ca9ed026cd1cbed1775aeed21c7e8" namespace=k8s.io protocol=ttrpc version=3 May 14 18:15:00.492114 systemd[1]: Started 
cri-containerd-277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10.scope - libcontainer container 277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10. May 14 18:15:00.511550 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 18:15:00.527975 systemd-networkd[1432]: cali9edb9c2ff42: Link UP May 14 18:15:00.528228 systemd-networkd[1432]: cali9edb9c2ff42: Gained carrier May 14 18:15:00.538959 containerd[1523]: time="2025-05-14T18:15:00.538903991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-576d749fbb-j4vhz,Uid:c03aac11-237f-4f5f-95ac-fc014337d597,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10\"" May 14 18:15:00.542706 containerd[1523]: time="2025-05-14T18:15:00.542626396Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 14 18:15:00.545490 containerd[1523]: 2025-05-14 18:15:00.335 [INFO][4355] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 14 18:15:00.545490 containerd[1523]: 2025-05-14 18:15:00.350 [INFO][4355] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--576d749fbb--vrk6m-eth0 calico-apiserver-576d749fbb- calico-apiserver 16aa4105-e2a7-46b7-982b-abcf1282d71f 676 0 2025-05-14 18:14:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:576d749fbb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-576d749fbb-vrk6m eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9edb9c2ff42 [] []}} ContainerID="080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad" Namespace="calico-apiserver" 
Pod="calico-apiserver-576d749fbb-vrk6m" WorkloadEndpoint="localhost-k8s-calico--apiserver--576d749fbb--vrk6m-" May 14 18:15:00.545490 containerd[1523]: 2025-05-14 18:15:00.350 [INFO][4355] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad" Namespace="calico-apiserver" Pod="calico-apiserver-576d749fbb-vrk6m" WorkloadEndpoint="localhost-k8s-calico--apiserver--576d749fbb--vrk6m-eth0" May 14 18:15:00.545490 containerd[1523]: 2025-05-14 18:15:00.380 [INFO][4377] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad" HandleID="k8s-pod-network.080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad" Workload="localhost-k8s-calico--apiserver--576d749fbb--vrk6m-eth0" May 14 18:15:00.545490 containerd[1523]: 2025-05-14 18:15:00.394 [INFO][4377] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad" HandleID="k8s-pod-network.080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad" Workload="localhost-k8s-calico--apiserver--576d749fbb--vrk6m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000193260), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-576d749fbb-vrk6m", "timestamp":"2025-05-14 18:15:00.380659181 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 18:15:00.545490 containerd[1523]: 2025-05-14 18:15:00.394 [INFO][4377] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 18:15:00.545490 containerd[1523]: 2025-05-14 18:15:00.415 [INFO][4377] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 18:15:00.545490 containerd[1523]: 2025-05-14 18:15:00.415 [INFO][4377] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 18:15:00.545490 containerd[1523]: 2025-05-14 18:15:00.493 [INFO][4377] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad" host="localhost" May 14 18:15:00.545490 containerd[1523]: 2025-05-14 18:15:00.499 [INFO][4377] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 18:15:00.545490 containerd[1523]: 2025-05-14 18:15:00.503 [INFO][4377] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 18:15:00.545490 containerd[1523]: 2025-05-14 18:15:00.504 [INFO][4377] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 18:15:00.545490 containerd[1523]: 2025-05-14 18:15:00.507 [INFO][4377] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 18:15:00.545490 containerd[1523]: 2025-05-14 18:15:00.507 [INFO][4377] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad" host="localhost" May 14 18:15:00.545490 containerd[1523]: 2025-05-14 18:15:00.509 [INFO][4377] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad May 14 18:15:00.545490 containerd[1523]: 2025-05-14 18:15:00.515 [INFO][4377] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad" host="localhost" May 14 18:15:00.545490 containerd[1523]: 2025-05-14 18:15:00.520 [INFO][4377] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad" host="localhost" May 14 18:15:00.545490 containerd[1523]: 2025-05-14 18:15:00.521 [INFO][4377] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad" host="localhost" May 14 18:15:00.545490 containerd[1523]: 2025-05-14 18:15:00.521 [INFO][4377] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 18:15:00.545490 containerd[1523]: 2025-05-14 18:15:00.521 [INFO][4377] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad" HandleID="k8s-pod-network.080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad" Workload="localhost-k8s-calico--apiserver--576d749fbb--vrk6m-eth0" May 14 18:15:00.546134 containerd[1523]: 2025-05-14 18:15:00.525 [INFO][4355] cni-plugin/k8s.go 386: Populated endpoint ContainerID="080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad" Namespace="calico-apiserver" Pod="calico-apiserver-576d749fbb-vrk6m" WorkloadEndpoint="localhost-k8s-calico--apiserver--576d749fbb--vrk6m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--576d749fbb--vrk6m-eth0", GenerateName:"calico-apiserver-576d749fbb-", Namespace:"calico-apiserver", SelfLink:"", UID:"16aa4105-e2a7-46b7-982b-abcf1282d71f", ResourceVersion:"676", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 14, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"576d749fbb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-576d749fbb-vrk6m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9edb9c2ff42", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:15:00.546134 containerd[1523]: 2025-05-14 18:15:00.525 [INFO][4355] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad" Namespace="calico-apiserver" Pod="calico-apiserver-576d749fbb-vrk6m" WorkloadEndpoint="localhost-k8s-calico--apiserver--576d749fbb--vrk6m-eth0" May 14 18:15:00.546134 containerd[1523]: 2025-05-14 18:15:00.525 [INFO][4355] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9edb9c2ff42 ContainerID="080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad" Namespace="calico-apiserver" Pod="calico-apiserver-576d749fbb-vrk6m" WorkloadEndpoint="localhost-k8s-calico--apiserver--576d749fbb--vrk6m-eth0" May 14 18:15:00.546134 containerd[1523]: 2025-05-14 18:15:00.528 [INFO][4355] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad" Namespace="calico-apiserver" Pod="calico-apiserver-576d749fbb-vrk6m" WorkloadEndpoint="localhost-k8s-calico--apiserver--576d749fbb--vrk6m-eth0" May 14 18:15:00.546134 containerd[1523]: 2025-05-14 18:15:00.528 [INFO][4355] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad" Namespace="calico-apiserver" Pod="calico-apiserver-576d749fbb-vrk6m" WorkloadEndpoint="localhost-k8s-calico--apiserver--576d749fbb--vrk6m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--576d749fbb--vrk6m-eth0", GenerateName:"calico-apiserver-576d749fbb-", Namespace:"calico-apiserver", SelfLink:"", UID:"16aa4105-e2a7-46b7-982b-abcf1282d71f", ResourceVersion:"676", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 14, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"576d749fbb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad", Pod:"calico-apiserver-576d749fbb-vrk6m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9edb9c2ff42", MAC:"be:fb:89:e7:1a:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:15:00.546134 containerd[1523]: 2025-05-14 18:15:00.540 [INFO][4355] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad" 
Namespace="calico-apiserver" Pod="calico-apiserver-576d749fbb-vrk6m" WorkloadEndpoint="localhost-k8s-calico--apiserver--576d749fbb--vrk6m-eth0" May 14 18:15:00.562697 containerd[1523]: time="2025-05-14T18:15:00.562221306Z" level=info msg="connecting to shim 080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad" address="unix:///run/containerd/s/61e6f80118602d49f1c89ee3766a0c5590ebe90d6bfce0f69d99d04dad3990a5" namespace=k8s.io protocol=ttrpc version=3 May 14 18:15:00.589106 systemd[1]: Started cri-containerd-080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad.scope - libcontainer container 080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad. May 14 18:15:00.599852 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 18:15:00.617961 containerd[1523]: time="2025-05-14T18:15:00.617900485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-576d749fbb-vrk6m,Uid:16aa4105-e2a7-46b7-982b-abcf1282d71f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad\"" May 14 18:15:01.298873 containerd[1523]: time="2025-05-14T18:15:01.298822636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5976b9bcc4-mzcb9,Uid:d8cb263b-108a-4ea1-98b4-0c6a7ead7022,Namespace:calico-apiserver,Attempt:0,}" May 14 18:15:01.299351 containerd[1523]: time="2025-05-14T18:15:01.299242112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-jfcjv,Uid:ac9d5dd4-bdd4-4158-9256-0d330d2d0532,Namespace:kube-system,Attempt:0,}" May 14 18:15:01.366206 systemd[1]: Started sshd@8-10.0.0.119:22-10.0.0.1:40424.service - OpenSSH per-connection server daemon (10.0.0.1:40424). 
May 14 18:15:01.413397 systemd-networkd[1432]: caliee3c08b6a4f: Link UP May 14 18:15:01.413977 systemd-networkd[1432]: caliee3c08b6a4f: Gained carrier May 14 18:15:01.429280 sshd[4570]: Accepted publickey for core from 10.0.0.1 port 40424 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo May 14 18:15:01.430572 containerd[1523]: 2025-05-14 18:15:01.320 [INFO][4525] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 14 18:15:01.430572 containerd[1523]: 2025-05-14 18:15:01.339 [INFO][4525] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5976b9bcc4--mzcb9-eth0 calico-apiserver-5976b9bcc4- calico-apiserver d8cb263b-108a-4ea1-98b4-0c6a7ead7022 684 0 2025-05-14 18:14:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5976b9bcc4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5976b9bcc4-mzcb9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliee3c08b6a4f [] []}} ContainerID="dcff43121194541eed4a70cbdbacb7f1fcfbc6db0768edbc1937091e9b34fb5a" Namespace="calico-apiserver" Pod="calico-apiserver-5976b9bcc4-mzcb9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5976b9bcc4--mzcb9-" May 14 18:15:01.430572 containerd[1523]: 2025-05-14 18:15:01.339 [INFO][4525] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="dcff43121194541eed4a70cbdbacb7f1fcfbc6db0768edbc1937091e9b34fb5a" Namespace="calico-apiserver" Pod="calico-apiserver-5976b9bcc4-mzcb9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5976b9bcc4--mzcb9-eth0" May 14 18:15:01.430572 containerd[1523]: 2025-05-14 18:15:01.369 [INFO][4557] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="dcff43121194541eed4a70cbdbacb7f1fcfbc6db0768edbc1937091e9b34fb5a" HandleID="k8s-pod-network.dcff43121194541eed4a70cbdbacb7f1fcfbc6db0768edbc1937091e9b34fb5a" Workload="localhost-k8s-calico--apiserver--5976b9bcc4--mzcb9-eth0" May 14 18:15:01.430572 containerd[1523]: 2025-05-14 18:15:01.381 [INFO][4557] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dcff43121194541eed4a70cbdbacb7f1fcfbc6db0768edbc1937091e9b34fb5a" HandleID="k8s-pod-network.dcff43121194541eed4a70cbdbacb7f1fcfbc6db0768edbc1937091e9b34fb5a" Workload="localhost-k8s-calico--apiserver--5976b9bcc4--mzcb9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003a8fd0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5976b9bcc4-mzcb9", "timestamp":"2025-05-14 18:15:01.369129162 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 18:15:01.430572 containerd[1523]: 2025-05-14 18:15:01.381 [INFO][4557] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 18:15:01.430572 containerd[1523]: 2025-05-14 18:15:01.381 [INFO][4557] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 18:15:01.430572 containerd[1523]: 2025-05-14 18:15:01.381 [INFO][4557] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 18:15:01.430572 containerd[1523]: 2025-05-14 18:15:01.384 [INFO][4557] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.dcff43121194541eed4a70cbdbacb7f1fcfbc6db0768edbc1937091e9b34fb5a" host="localhost" May 14 18:15:01.430572 containerd[1523]: 2025-05-14 18:15:01.387 [INFO][4557] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 18:15:01.430572 containerd[1523]: 2025-05-14 18:15:01.391 [INFO][4557] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 18:15:01.430572 containerd[1523]: 2025-05-14 18:15:01.393 [INFO][4557] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 18:15:01.430572 containerd[1523]: 2025-05-14 18:15:01.395 [INFO][4557] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 18:15:01.430572 containerd[1523]: 2025-05-14 18:15:01.395 [INFO][4557] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dcff43121194541eed4a70cbdbacb7f1fcfbc6db0768edbc1937091e9b34fb5a" host="localhost" May 14 18:15:01.430572 containerd[1523]: 2025-05-14 18:15:01.397 [INFO][4557] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.dcff43121194541eed4a70cbdbacb7f1fcfbc6db0768edbc1937091e9b34fb5a May 14 18:15:01.430572 containerd[1523]: 2025-05-14 18:15:01.401 [INFO][4557] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dcff43121194541eed4a70cbdbacb7f1fcfbc6db0768edbc1937091e9b34fb5a" host="localhost" May 14 18:15:01.430572 containerd[1523]: 2025-05-14 18:15:01.406 [INFO][4557] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.dcff43121194541eed4a70cbdbacb7f1fcfbc6db0768edbc1937091e9b34fb5a" host="localhost" May 14 18:15:01.430572 containerd[1523]: 2025-05-14 18:15:01.406 [INFO][4557] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.dcff43121194541eed4a70cbdbacb7f1fcfbc6db0768edbc1937091e9b34fb5a" host="localhost" May 14 18:15:01.430572 containerd[1523]: 2025-05-14 18:15:01.406 [INFO][4557] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 18:15:01.430572 containerd[1523]: 2025-05-14 18:15:01.406 [INFO][4557] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="dcff43121194541eed4a70cbdbacb7f1fcfbc6db0768edbc1937091e9b34fb5a" HandleID="k8s-pod-network.dcff43121194541eed4a70cbdbacb7f1fcfbc6db0768edbc1937091e9b34fb5a" Workload="localhost-k8s-calico--apiserver--5976b9bcc4--mzcb9-eth0" May 14 18:15:01.431483 containerd[1523]: 2025-05-14 18:15:01.410 [INFO][4525] cni-plugin/k8s.go 386: Populated endpoint ContainerID="dcff43121194541eed4a70cbdbacb7f1fcfbc6db0768edbc1937091e9b34fb5a" Namespace="calico-apiserver" Pod="calico-apiserver-5976b9bcc4-mzcb9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5976b9bcc4--mzcb9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5976b9bcc4--mzcb9-eth0", GenerateName:"calico-apiserver-5976b9bcc4-", Namespace:"calico-apiserver", SelfLink:"", UID:"d8cb263b-108a-4ea1-98b4-0c6a7ead7022", ResourceVersion:"684", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 14, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5976b9bcc4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5976b9bcc4-mzcb9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliee3c08b6a4f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:15:01.431483 containerd[1523]: 2025-05-14 18:15:01.410 [INFO][4525] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="dcff43121194541eed4a70cbdbacb7f1fcfbc6db0768edbc1937091e9b34fb5a" Namespace="calico-apiserver" Pod="calico-apiserver-5976b9bcc4-mzcb9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5976b9bcc4--mzcb9-eth0" May 14 18:15:01.431483 containerd[1523]: 2025-05-14 18:15:01.410 [INFO][4525] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliee3c08b6a4f ContainerID="dcff43121194541eed4a70cbdbacb7f1fcfbc6db0768edbc1937091e9b34fb5a" Namespace="calico-apiserver" Pod="calico-apiserver-5976b9bcc4-mzcb9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5976b9bcc4--mzcb9-eth0" May 14 18:15:01.431483 containerd[1523]: 2025-05-14 18:15:01.414 [INFO][4525] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dcff43121194541eed4a70cbdbacb7f1fcfbc6db0768edbc1937091e9b34fb5a" Namespace="calico-apiserver" Pod="calico-apiserver-5976b9bcc4-mzcb9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5976b9bcc4--mzcb9-eth0" May 14 18:15:01.431483 containerd[1523]: 2025-05-14 18:15:01.415 [INFO][4525] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="dcff43121194541eed4a70cbdbacb7f1fcfbc6db0768edbc1937091e9b34fb5a" Namespace="calico-apiserver" Pod="calico-apiserver-5976b9bcc4-mzcb9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5976b9bcc4--mzcb9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5976b9bcc4--mzcb9-eth0", GenerateName:"calico-apiserver-5976b9bcc4-", Namespace:"calico-apiserver", SelfLink:"", UID:"d8cb263b-108a-4ea1-98b4-0c6a7ead7022", ResourceVersion:"684", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 14, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5976b9bcc4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dcff43121194541eed4a70cbdbacb7f1fcfbc6db0768edbc1937091e9b34fb5a", Pod:"calico-apiserver-5976b9bcc4-mzcb9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliee3c08b6a4f", MAC:"ce:2e:72:a0:9f:1d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:15:01.431483 containerd[1523]: 2025-05-14 18:15:01.427 [INFO][4525] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="dcff43121194541eed4a70cbdbacb7f1fcfbc6db0768edbc1937091e9b34fb5a" 
Namespace="calico-apiserver" Pod="calico-apiserver-5976b9bcc4-mzcb9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5976b9bcc4--mzcb9-eth0" May 14 18:15:01.432397 sshd-session[4570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:15:01.438016 systemd-logind[1500]: New session 9 of user core. May 14 18:15:01.446115 systemd[1]: Started session-9.scope - Session 9 of User core. May 14 18:15:01.451971 containerd[1523]: time="2025-05-14T18:15:01.451918267Z" level=info msg="connecting to shim dcff43121194541eed4a70cbdbacb7f1fcfbc6db0768edbc1937091e9b34fb5a" address="unix:///run/containerd/s/4b772aa1ee5d1752df6a689ddc771de52b0c58c7ebdf48ffe1556b402918a642" namespace=k8s.io protocol=ttrpc version=3 May 14 18:15:01.481152 systemd[1]: Started cri-containerd-dcff43121194541eed4a70cbdbacb7f1fcfbc6db0768edbc1937091e9b34fb5a.scope - libcontainer container dcff43121194541eed4a70cbdbacb7f1fcfbc6db0768edbc1937091e9b34fb5a. May 14 18:15:01.502816 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 18:15:01.523710 systemd-networkd[1432]: cali1ddc99af60f: Link UP May 14 18:15:01.523894 systemd-networkd[1432]: cali1ddc99af60f: Gained carrier May 14 18:15:01.540242 containerd[1523]: 2025-05-14 18:15:01.326 [INFO][4538] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 14 18:15:01.540242 containerd[1523]: 2025-05-14 18:15:01.341 [INFO][4538] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--jfcjv-eth0 coredns-6f6b679f8f- kube-system ac9d5dd4-bdd4-4158-9256-0d330d2d0532 682 0 2025-05-14 18:14:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-jfcjv eth0 coredns [] [] [kns.kube-system 
ksa.kube-system.coredns] cali1ddc99af60f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="e69a5a27142fbaf8f828d4e944452cae562bb453a5c955b79c666a322a04ed84" Namespace="kube-system" Pod="coredns-6f6b679f8f-jfcjv" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--jfcjv-" May 14 18:15:01.540242 containerd[1523]: 2025-05-14 18:15:01.341 [INFO][4538] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e69a5a27142fbaf8f828d4e944452cae562bb453a5c955b79c666a322a04ed84" Namespace="kube-system" Pod="coredns-6f6b679f8f-jfcjv" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--jfcjv-eth0" May 14 18:15:01.540242 containerd[1523]: 2025-05-14 18:15:01.374 [INFO][4559] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e69a5a27142fbaf8f828d4e944452cae562bb453a5c955b79c666a322a04ed84" HandleID="k8s-pod-network.e69a5a27142fbaf8f828d4e944452cae562bb453a5c955b79c666a322a04ed84" Workload="localhost-k8s-coredns--6f6b679f8f--jfcjv-eth0" May 14 18:15:01.540242 containerd[1523]: 2025-05-14 18:15:01.388 [INFO][4559] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e69a5a27142fbaf8f828d4e944452cae562bb453a5c955b79c666a322a04ed84" HandleID="k8s-pod-network.e69a5a27142fbaf8f828d4e944452cae562bb453a5c955b79c666a322a04ed84" Workload="localhost-k8s-coredns--6f6b679f8f--jfcjv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000360cf0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-jfcjv", "timestamp":"2025-05-14 18:15:01.374366886 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 18:15:01.540242 containerd[1523]: 2025-05-14 18:15:01.388 [INFO][4559] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 14 18:15:01.540242 containerd[1523]: 2025-05-14 18:15:01.406 [INFO][4559] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 18:15:01.540242 containerd[1523]: 2025-05-14 18:15:01.406 [INFO][4559] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 18:15:01.540242 containerd[1523]: 2025-05-14 18:15:01.485 [INFO][4559] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e69a5a27142fbaf8f828d4e944452cae562bb453a5c955b79c666a322a04ed84" host="localhost" May 14 18:15:01.540242 containerd[1523]: 2025-05-14 18:15:01.490 [INFO][4559] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 18:15:01.540242 containerd[1523]: 2025-05-14 18:15:01.495 [INFO][4559] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 18:15:01.540242 containerd[1523]: 2025-05-14 18:15:01.498 [INFO][4559] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 18:15:01.540242 containerd[1523]: 2025-05-14 18:15:01.503 [INFO][4559] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 18:15:01.540242 containerd[1523]: 2025-05-14 18:15:01.503 [INFO][4559] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e69a5a27142fbaf8f828d4e944452cae562bb453a5c955b79c666a322a04ed84" host="localhost" May 14 18:15:01.540242 containerd[1523]: 2025-05-14 18:15:01.505 [INFO][4559] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e69a5a27142fbaf8f828d4e944452cae562bb453a5c955b79c666a322a04ed84 May 14 18:15:01.540242 containerd[1523]: 2025-05-14 18:15:01.509 [INFO][4559] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e69a5a27142fbaf8f828d4e944452cae562bb453a5c955b79c666a322a04ed84" host="localhost" May 14 18:15:01.540242 containerd[1523]: 2025-05-14 18:15:01.516 [INFO][4559] 
ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.e69a5a27142fbaf8f828d4e944452cae562bb453a5c955b79c666a322a04ed84" host="localhost" May 14 18:15:01.540242 containerd[1523]: 2025-05-14 18:15:01.517 [INFO][4559] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.e69a5a27142fbaf8f828d4e944452cae562bb453a5c955b79c666a322a04ed84" host="localhost" May 14 18:15:01.540242 containerd[1523]: 2025-05-14 18:15:01.517 [INFO][4559] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 18:15:01.540242 containerd[1523]: 2025-05-14 18:15:01.517 [INFO][4559] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="e69a5a27142fbaf8f828d4e944452cae562bb453a5c955b79c666a322a04ed84" HandleID="k8s-pod-network.e69a5a27142fbaf8f828d4e944452cae562bb453a5c955b79c666a322a04ed84" Workload="localhost-k8s-coredns--6f6b679f8f--jfcjv-eth0" May 14 18:15:01.541007 containerd[1523]: 2025-05-14 18:15:01.520 [INFO][4538] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e69a5a27142fbaf8f828d4e944452cae562bb453a5c955b79c666a322a04ed84" Namespace="kube-system" Pod="coredns-6f6b679f8f-jfcjv" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--jfcjv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--jfcjv-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"ac9d5dd4-bdd4-4158-9256-0d330d2d0532", ResourceVersion:"682", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 14, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-jfcjv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1ddc99af60f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:15:01.541007 containerd[1523]: 2025-05-14 18:15:01.520 [INFO][4538] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="e69a5a27142fbaf8f828d4e944452cae562bb453a5c955b79c666a322a04ed84" Namespace="kube-system" Pod="coredns-6f6b679f8f-jfcjv" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--jfcjv-eth0" May 14 18:15:01.541007 containerd[1523]: 2025-05-14 18:15:01.520 [INFO][4538] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1ddc99af60f ContainerID="e69a5a27142fbaf8f828d4e944452cae562bb453a5c955b79c666a322a04ed84" Namespace="kube-system" Pod="coredns-6f6b679f8f-jfcjv" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--jfcjv-eth0" May 14 18:15:01.541007 containerd[1523]: 2025-05-14 18:15:01.522 [INFO][4538] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e69a5a27142fbaf8f828d4e944452cae562bb453a5c955b79c666a322a04ed84" Namespace="kube-system" 
Pod="coredns-6f6b679f8f-jfcjv" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--jfcjv-eth0" May 14 18:15:01.541007 containerd[1523]: 2025-05-14 18:15:01.525 [INFO][4538] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e69a5a27142fbaf8f828d4e944452cae562bb453a5c955b79c666a322a04ed84" Namespace="kube-system" Pod="coredns-6f6b679f8f-jfcjv" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--jfcjv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--jfcjv-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"ac9d5dd4-bdd4-4158-9256-0d330d2d0532", ResourceVersion:"682", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 14, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e69a5a27142fbaf8f828d4e944452cae562bb453a5c955b79c666a322a04ed84", Pod:"coredns-6f6b679f8f-jfcjv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1ddc99af60f", MAC:"22:67:ad:57:26:a9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:15:01.541007 containerd[1523]: 2025-05-14 18:15:01.537 [INFO][4538] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e69a5a27142fbaf8f828d4e944452cae562bb453a5c955b79c666a322a04ed84" Namespace="kube-system" Pod="coredns-6f6b679f8f-jfcjv" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--jfcjv-eth0" May 14 18:15:01.545633 containerd[1523]: time="2025-05-14T18:15:01.545599776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5976b9bcc4-mzcb9,Uid:d8cb263b-108a-4ea1-98b4-0c6a7ead7022,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"dcff43121194541eed4a70cbdbacb7f1fcfbc6db0768edbc1937091e9b34fb5a\"" May 14 18:15:01.567632 containerd[1523]: time="2025-05-14T18:15:01.567456511Z" level=info msg="connecting to shim e69a5a27142fbaf8f828d4e944452cae562bb453a5c955b79c666a322a04ed84" address="unix:///run/containerd/s/afea924713648fab35f2d9fdb8d3dbb2b8d0d3f10a6e0a566b76d37789c1d09c" namespace=k8s.io protocol=ttrpc version=3 May 14 18:15:01.600898 sshd[4599]: Connection closed by 10.0.0.1 port 40424 May 14 18:15:01.601308 sshd-session[4570]: pam_unix(sshd:session): session closed for user core May 14 18:15:01.601336 systemd[1]: Started cri-containerd-e69a5a27142fbaf8f828d4e944452cae562bb453a5c955b79c666a322a04ed84.scope - libcontainer container e69a5a27142fbaf8f828d4e944452cae562bb453a5c955b79c666a322a04ed84. May 14 18:15:01.610078 systemd[1]: sshd@8-10.0.0.119:22-10.0.0.1:40424.service: Deactivated successfully. May 14 18:15:01.614651 systemd[1]: session-9.scope: Deactivated successfully. May 14 18:15:01.616161 systemd-logind[1500]: Session 9 logged out. Waiting for processes to exit. 
May 14 18:15:01.619034 systemd-logind[1500]: Removed session 9. May 14 18:15:01.627304 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 18:15:01.651625 containerd[1523]: time="2025-05-14T18:15:01.651583329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-jfcjv,Uid:ac9d5dd4-bdd4-4158-9256-0d330d2d0532,Namespace:kube-system,Attempt:0,} returns sandbox id \"e69a5a27142fbaf8f828d4e944452cae562bb453a5c955b79c666a322a04ed84\"" May 14 18:15:01.655238 containerd[1523]: time="2025-05-14T18:15:01.655204956Z" level=info msg="CreateContainer within sandbox \"e69a5a27142fbaf8f828d4e944452cae562bb453a5c955b79c666a322a04ed84\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 18:15:01.676755 containerd[1523]: time="2025-05-14T18:15:01.676707581Z" level=info msg="Container 0068eb1a86f4cfb7fb6adc03dcd7388627dfb76f8acebaef8b6308976a7c0a18: CDI devices from CRI Config.CDIDevices: []" May 14 18:15:01.685031 containerd[1523]: time="2025-05-14T18:15:01.684992164Z" level=info msg="CreateContainer within sandbox \"e69a5a27142fbaf8f828d4e944452cae562bb453a5c955b79c666a322a04ed84\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0068eb1a86f4cfb7fb6adc03dcd7388627dfb76f8acebaef8b6308976a7c0a18\"" May 14 18:15:01.686215 containerd[1523]: time="2025-05-14T18:15:01.686166263Z" level=info msg="StartContainer for \"0068eb1a86f4cfb7fb6adc03dcd7388627dfb76f8acebaef8b6308976a7c0a18\"" May 14 18:15:01.686977 containerd[1523]: time="2025-05-14T18:15:01.686932168Z" level=info msg="connecting to shim 0068eb1a86f4cfb7fb6adc03dcd7388627dfb76f8acebaef8b6308976a7c0a18" address="unix:///run/containerd/s/afea924713648fab35f2d9fdb8d3dbb2b8d0d3f10a6e0a566b76d37789c1d09c" protocol=ttrpc version=3 May 14 18:15:01.687105 systemd-networkd[1432]: cali0ca96547cb8: Gained IPv6LL May 14 18:15:01.712364 systemd[1]: Started 
cri-containerd-0068eb1a86f4cfb7fb6adc03dcd7388627dfb76f8acebaef8b6308976a7c0a18.scope - libcontainer container 0068eb1a86f4cfb7fb6adc03dcd7388627dfb76f8acebaef8b6308976a7c0a18. May 14 18:15:01.741944 containerd[1523]: time="2025-05-14T18:15:01.741909994Z" level=info msg="StartContainer for \"0068eb1a86f4cfb7fb6adc03dcd7388627dfb76f8acebaef8b6308976a7c0a18\" returns successfully" May 14 18:15:02.007110 systemd-networkd[1432]: cali9edb9c2ff42: Gained IPv6LL May 14 18:15:02.299006 containerd[1523]: time="2025-05-14T18:15:02.298881878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sg55z,Uid:2b1d9552-b1a9-492e-abe0-6d929715d5ec,Namespace:calico-system,Attempt:0,}" May 14 18:15:02.452364 systemd-networkd[1432]: cali29d97b9b525: Link UP May 14 18:15:02.452624 systemd-networkd[1432]: cali29d97b9b525: Gained carrier May 14 18:15:02.460507 kubelet[2626]: I0514 18:15:02.460436 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-jfcjv" podStartSLOduration=33.460415047 podStartE2EDuration="33.460415047s" podCreationTimestamp="2025-05-14 18:14:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:15:02.459433966 +0000 UTC m=+38.235448026" watchObservedRunningTime="2025-05-14 18:15:02.460415047 +0000 UTC m=+38.236429107" May 14 18:15:02.475551 containerd[1523]: 2025-05-14 18:15:02.334 [INFO][4768] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 14 18:15:02.475551 containerd[1523]: 2025-05-14 18:15:02.353 [INFO][4768] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--sg55z-eth0 csi-node-driver- calico-system 2b1d9552-b1a9-492e-abe0-6d929715d5ec 601 0 2025-05-14 18:14:37 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5bcd8f69 k8s-app:csi-node-driver 
name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-sg55z eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali29d97b9b525 [] []}} ContainerID="741e28b937ff7d411db82bd9f0c752119bbac1223025c6bd9a2eb858ee94ebe9" Namespace="calico-system" Pod="csi-node-driver-sg55z" WorkloadEndpoint="localhost-k8s-csi--node--driver--sg55z-" May 14 18:15:02.475551 containerd[1523]: 2025-05-14 18:15:02.353 [INFO][4768] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="741e28b937ff7d411db82bd9f0c752119bbac1223025c6bd9a2eb858ee94ebe9" Namespace="calico-system" Pod="csi-node-driver-sg55z" WorkloadEndpoint="localhost-k8s-csi--node--driver--sg55z-eth0" May 14 18:15:02.475551 containerd[1523]: 2025-05-14 18:15:02.394 [INFO][4784] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="741e28b937ff7d411db82bd9f0c752119bbac1223025c6bd9a2eb858ee94ebe9" HandleID="k8s-pod-network.741e28b937ff7d411db82bd9f0c752119bbac1223025c6bd9a2eb858ee94ebe9" Workload="localhost-k8s-csi--node--driver--sg55z-eth0" May 14 18:15:02.475551 containerd[1523]: 2025-05-14 18:15:02.406 [INFO][4784] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="741e28b937ff7d411db82bd9f0c752119bbac1223025c6bd9a2eb858ee94ebe9" HandleID="k8s-pod-network.741e28b937ff7d411db82bd9f0c752119bbac1223025c6bd9a2eb858ee94ebe9" Workload="localhost-k8s-csi--node--driver--sg55z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004daf0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-sg55z", "timestamp":"2025-05-14 18:15:02.39452541 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 18:15:02.475551 containerd[1523]: 2025-05-14 18:15:02.406 [INFO][4784] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 18:15:02.475551 containerd[1523]: 2025-05-14 18:15:02.406 [INFO][4784] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 18:15:02.475551 containerd[1523]: 2025-05-14 18:15:02.406 [INFO][4784] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 18:15:02.475551 containerd[1523]: 2025-05-14 18:15:02.408 [INFO][4784] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.741e28b937ff7d411db82bd9f0c752119bbac1223025c6bd9a2eb858ee94ebe9" host="localhost" May 14 18:15:02.475551 containerd[1523]: 2025-05-14 18:15:02.412 [INFO][4784] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 18:15:02.475551 containerd[1523]: 2025-05-14 18:15:02.419 [INFO][4784] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 18:15:02.475551 containerd[1523]: 2025-05-14 18:15:02.422 [INFO][4784] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 18:15:02.475551 containerd[1523]: 2025-05-14 18:15:02.426 [INFO][4784] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 18:15:02.475551 containerd[1523]: 2025-05-14 18:15:02.426 [INFO][4784] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.741e28b937ff7d411db82bd9f0c752119bbac1223025c6bd9a2eb858ee94ebe9" host="localhost" May 14 18:15:02.475551 containerd[1523]: 2025-05-14 18:15:02.428 [INFO][4784] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.741e28b937ff7d411db82bd9f0c752119bbac1223025c6bd9a2eb858ee94ebe9 May 14 18:15:02.475551 containerd[1523]: 2025-05-14 18:15:02.433 [INFO][4784] ipam/ipam.go 1203: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.741e28b937ff7d411db82bd9f0c752119bbac1223025c6bd9a2eb858ee94ebe9" host="localhost" May 14 18:15:02.475551 containerd[1523]: 2025-05-14 18:15:02.442 [INFO][4784] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.741e28b937ff7d411db82bd9f0c752119bbac1223025c6bd9a2eb858ee94ebe9" host="localhost" May 14 18:15:02.475551 containerd[1523]: 2025-05-14 18:15:02.442 [INFO][4784] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.741e28b937ff7d411db82bd9f0c752119bbac1223025c6bd9a2eb858ee94ebe9" host="localhost" May 14 18:15:02.475551 containerd[1523]: 2025-05-14 18:15:02.442 [INFO][4784] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 18:15:02.475551 containerd[1523]: 2025-05-14 18:15:02.442 [INFO][4784] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="741e28b937ff7d411db82bd9f0c752119bbac1223025c6bd9a2eb858ee94ebe9" HandleID="k8s-pod-network.741e28b937ff7d411db82bd9f0c752119bbac1223025c6bd9a2eb858ee94ebe9" Workload="localhost-k8s-csi--node--driver--sg55z-eth0" May 14 18:15:02.476745 containerd[1523]: 2025-05-14 18:15:02.449 [INFO][4768] cni-plugin/k8s.go 386: Populated endpoint ContainerID="741e28b937ff7d411db82bd9f0c752119bbac1223025c6bd9a2eb858ee94ebe9" Namespace="calico-system" Pod="csi-node-driver-sg55z" WorkloadEndpoint="localhost-k8s-csi--node--driver--sg55z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--sg55z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2b1d9552-b1a9-492e-abe0-6d929715d5ec", ResourceVersion:"601", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 14, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-sg55z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali29d97b9b525", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 14 18:15:02.476745 containerd[1523]: 2025-05-14 18:15:02.449 [INFO][4768] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.135/32] ContainerID="741e28b937ff7d411db82bd9f0c752119bbac1223025c6bd9a2eb858ee94ebe9" Namespace="calico-system" Pod="csi-node-driver-sg55z" WorkloadEndpoint="localhost-k8s-csi--node--driver--sg55z-eth0"
May 14 18:15:02.476745 containerd[1523]: 2025-05-14 18:15:02.449 [INFO][4768] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali29d97b9b525 ContainerID="741e28b937ff7d411db82bd9f0c752119bbac1223025c6bd9a2eb858ee94ebe9" Namespace="calico-system" Pod="csi-node-driver-sg55z" WorkloadEndpoint="localhost-k8s-csi--node--driver--sg55z-eth0"
May 14 18:15:02.476745 containerd[1523]: 2025-05-14 18:15:02.452 [INFO][4768] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="741e28b937ff7d411db82bd9f0c752119bbac1223025c6bd9a2eb858ee94ebe9" Namespace="calico-system" Pod="csi-node-driver-sg55z" WorkloadEndpoint="localhost-k8s-csi--node--driver--sg55z-eth0"
May 14 18:15:02.476745 containerd[1523]: 2025-05-14 18:15:02.454 [INFO][4768] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="741e28b937ff7d411db82bd9f0c752119bbac1223025c6bd9a2eb858ee94ebe9" Namespace="calico-system" Pod="csi-node-driver-sg55z" WorkloadEndpoint="localhost-k8s-csi--node--driver--sg55z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--sg55z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2b1d9552-b1a9-492e-abe0-6d929715d5ec", ResourceVersion:"601", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 14, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"741e28b937ff7d411db82bd9f0c752119bbac1223025c6bd9a2eb858ee94ebe9", Pod:"csi-node-driver-sg55z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali29d97b9b525", MAC:"8e:75:60:3b:20:c9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 14 18:15:02.476745 containerd[1523]: 2025-05-14 18:15:02.470 [INFO][4768] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="741e28b937ff7d411db82bd9f0c752119bbac1223025c6bd9a2eb858ee94ebe9" Namespace="calico-system" Pod="csi-node-driver-sg55z" WorkloadEndpoint="localhost-k8s-csi--node--driver--sg55z-eth0"
May 14 18:15:02.519054 containerd[1523]: time="2025-05-14T18:15:02.518873631Z" level=info msg="connecting to shim 741e28b937ff7d411db82bd9f0c752119bbac1223025c6bd9a2eb858ee94ebe9" address="unix:///run/containerd/s/dbf2d09403e6fb324ce887f51499e8de9d4ad5d0a7779d89f432b762dc52bb3d" namespace=k8s.io protocol=ttrpc version=3
May 14 18:15:02.545152 systemd[1]: Started cri-containerd-741e28b937ff7d411db82bd9f0c752119bbac1223025c6bd9a2eb858ee94ebe9.scope - libcontainer container 741e28b937ff7d411db82bd9f0c752119bbac1223025c6bd9a2eb858ee94ebe9.
May 14 18:15:02.559344 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 14 18:15:02.575215 containerd[1523]: time="2025-05-14T18:15:02.574988661Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:15:02.575797 containerd[1523]: time="2025-05-14T18:15:02.575766045Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603"
May 14 18:15:02.577210 containerd[1523]: time="2025-05-14T18:15:02.577105876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sg55z,Uid:2b1d9552-b1a9-492e-abe0-6d929715d5ec,Namespace:calico-system,Attempt:0,} returns sandbox id \"741e28b937ff7d411db82bd9f0c752119bbac1223025c6bd9a2eb858ee94ebe9\""
May 14 18:15:02.577752 containerd[1523]: time="2025-05-14T18:15:02.577714566Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:15:02.579993 containerd[1523]: time="2025-05-14T18:15:02.579806978Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:15:02.583964 containerd[1523]: time="2025-05-14T18:15:02.582162533Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 2.03945425s"
May 14 18:15:02.583964 containerd[1523]: time="2025-05-14T18:15:02.582314865Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\""
May 14 18:15:02.585197 containerd[1523]: time="2025-05-14T18:15:02.585168501Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\""
May 14 18:15:02.586848 containerd[1523]: time="2025-05-14T18:15:02.586502891Z" level=info msg="CreateContainer within sandbox \"277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
May 14 18:15:02.594989 containerd[1523]: time="2025-05-14T18:15:02.594926906Z" level=info msg="Container 12fe0e5f69b6debed422c4529ce64af5f52ef524a0397fc82f061c78958919ca: CDI devices from CRI Config.CDIDevices: []"
May 14 18:15:02.603136 containerd[1523]: time="2025-05-14T18:15:02.603019094Z" level=info msg="CreateContainer within sandbox \"277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"12fe0e5f69b6debed422c4529ce64af5f52ef524a0397fc82f061c78958919ca\""
May 14 18:15:02.603615 containerd[1523]: time="2025-05-14T18:15:02.603583100Z" level=info msg="StartContainer for \"12fe0e5f69b6debed422c4529ce64af5f52ef524a0397fc82f061c78958919ca\""
May 14 18:15:02.606616 containerd[1523]: time="2025-05-14T18:15:02.606543705Z" level=info msg="connecting to shim 12fe0e5f69b6debed422c4529ce64af5f52ef524a0397fc82f061c78958919ca" address="unix:///run/containerd/s/e98a81b0c69a0608b2495aea7df753f02f4ca9ed026cd1cbed1775aeed21c7e8" protocol=ttrpc version=3
May 14 18:15:02.624120 systemd[1]: Started cri-containerd-12fe0e5f69b6debed422c4529ce64af5f52ef524a0397fc82f061c78958919ca.scope - libcontainer container 12fe0e5f69b6debed422c4529ce64af5f52ef524a0397fc82f061c78958919ca.
May 14 18:15:02.667982 containerd[1523]: time="2025-05-14T18:15:02.667929410Z" level=info msg="StartContainer for \"12fe0e5f69b6debed422c4529ce64af5f52ef524a0397fc82f061c78958919ca\" returns successfully"
May 14 18:15:02.987758 containerd[1523]: time="2025-05-14T18:15:02.987056422Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:15:02.989220 containerd[1523]: time="2025-05-14T18:15:02.989183478Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77"
May 14 18:15:02.990800 containerd[1523]: time="2025-05-14T18:15:02.990748367Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 405.542063ms"
May 14 18:15:02.990899 containerd[1523]: time="2025-05-14T18:15:02.990885218Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\""
May 14 18:15:02.992155 containerd[1523]: time="2025-05-14T18:15:02.992130921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\""
May 14 18:15:02.994166 containerd[1523]: time="2025-05-14T18:15:02.994134166Z" level=info msg="CreateContainer within sandbox \"080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
May 14 18:15:03.042086 containerd[1523]: time="2025-05-14T18:15:03.041911136Z" level=info msg="Container 8286123feb2b67b273f3fbd48b08aa615d4f79abeea27f19c2d00d2b218a65e6: CDI devices from CRI Config.CDIDevices: []"
May 14 18:15:03.052399 containerd[1523]: time="2025-05-14T18:15:03.052235524Z" level=info msg="CreateContainer within sandbox \"080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8286123feb2b67b273f3fbd48b08aa615d4f79abeea27f19c2d00d2b218a65e6\""
May 14 18:15:03.053614 containerd[1523]: time="2025-05-14T18:15:03.053566391Z" level=info msg="StartContainer for \"8286123feb2b67b273f3fbd48b08aa615d4f79abeea27f19c2d00d2b218a65e6\""
May 14 18:15:03.054958 containerd[1523]: time="2025-05-14T18:15:03.054927020Z" level=info msg="connecting to shim 8286123feb2b67b273f3fbd48b08aa615d4f79abeea27f19c2d00d2b218a65e6" address="unix:///run/containerd/s/61e6f80118602d49f1c89ee3766a0c5590ebe90d6bfce0f69d99d04dad3990a5" protocol=ttrpc version=3
May 14 18:15:03.083183 systemd[1]: Started cri-containerd-8286123feb2b67b273f3fbd48b08aa615d4f79abeea27f19c2d00d2b218a65e6.scope - libcontainer container 8286123feb2b67b273f3fbd48b08aa615d4f79abeea27f19c2d00d2b218a65e6.
May 14 18:15:03.095109 systemd-networkd[1432]: caliee3c08b6a4f: Gained IPv6LL
May 14 18:15:03.127472 containerd[1523]: time="2025-05-14T18:15:03.127392635Z" level=info msg="StartContainer for \"8286123feb2b67b273f3fbd48b08aa615d4f79abeea27f19c2d00d2b218a65e6\" returns successfully"
May 14 18:15:03.287074 systemd-networkd[1432]: cali1ddc99af60f: Gained IPv6LL
May 14 18:15:03.299391 containerd[1523]: time="2025-05-14T18:15:03.299340594Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:15:03.300133 containerd[1523]: time="2025-05-14T18:15:03.300006247Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77"
May 14 18:15:03.301813 containerd[1523]: time="2025-05-14T18:15:03.301774629Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 309.30748ms"
May 14 18:15:03.301845 containerd[1523]: time="2025-05-14T18:15:03.301817392Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\""
May 14 18:15:03.303188 containerd[1523]: time="2025-05-14T18:15:03.303163740Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\""
May 14 18:15:03.304161 containerd[1523]: time="2025-05-14T18:15:03.304130098Z" level=info msg="CreateContainer within sandbox \"dcff43121194541eed4a70cbdbacb7f1fcfbc6db0768edbc1937091e9b34fb5a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
May 14 18:15:03.316552 containerd[1523]: time="2025-05-14T18:15:03.316458487Z" level=info msg="Container 3afc17f38738d78b4c7a714698dab3e6b3a7c18f01d5a2e501604efa0ae88787: CDI devices from CRI Config.CDIDevices: []"
May 14 18:15:03.322881 containerd[1523]: time="2025-05-14T18:15:03.322835399Z" level=info msg="CreateContainer within sandbox \"dcff43121194541eed4a70cbdbacb7f1fcfbc6db0768edbc1937091e9b34fb5a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3afc17f38738d78b4c7a714698dab3e6b3a7c18f01d5a2e501604efa0ae88787\""
May 14 18:15:03.324330 containerd[1523]: time="2025-05-14T18:15:03.324279115Z" level=info msg="StartContainer for \"3afc17f38738d78b4c7a714698dab3e6b3a7c18f01d5a2e501604efa0ae88787\""
May 14 18:15:03.326396 containerd[1523]: time="2025-05-14T18:15:03.326366202Z" level=info msg="connecting to shim 3afc17f38738d78b4c7a714698dab3e6b3a7c18f01d5a2e501604efa0ae88787" address="unix:///run/containerd/s/4b772aa1ee5d1752df6a689ddc771de52b0c58c7ebdf48ffe1556b402918a642" protocol=ttrpc version=3
May 14 18:15:03.347134 systemd[1]: Started cri-containerd-3afc17f38738d78b4c7a714698dab3e6b3a7c18f01d5a2e501604efa0ae88787.scope - libcontainer container 3afc17f38738d78b4c7a714698dab3e6b3a7c18f01d5a2e501604efa0ae88787.
May 14 18:15:03.393479 containerd[1523]: time="2025-05-14T18:15:03.393437985Z" level=info msg="StartContainer for \"3afc17f38738d78b4c7a714698dab3e6b3a7c18f01d5a2e501604efa0ae88787\" returns successfully"
May 14 18:15:03.487431 kubelet[2626]: I0514 18:15:03.487363 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-576d749fbb-j4vhz" podStartSLOduration=25.444905103 podStartE2EDuration="27.487347001s" podCreationTimestamp="2025-05-14 18:14:36 +0000 UTC" firstStartedPulling="2025-05-14 18:15:00.54233257 +0000 UTC m=+36.318346590" lastFinishedPulling="2025-05-14 18:15:02.584774428 +0000 UTC m=+38.360788488" observedRunningTime="2025-05-14 18:15:03.487003213 +0000 UTC m=+39.263017313" watchObservedRunningTime="2025-05-14 18:15:03.487347001 +0000 UTC m=+39.263361101"
May 14 18:15:03.539364 kubelet[2626]: I0514 18:15:03.539222 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5976b9bcc4-mzcb9" podStartSLOduration=24.783801378 podStartE2EDuration="26.539202682s" podCreationTimestamp="2025-05-14 18:14:37 +0000 UTC" firstStartedPulling="2025-05-14 18:15:01.547159388 +0000 UTC m=+37.323173448" lastFinishedPulling="2025-05-14 18:15:03.302560692 +0000 UTC m=+39.078574752" observedRunningTime="2025-05-14 18:15:03.524009623 +0000 UTC m=+39.300023723" watchObservedRunningTime="2025-05-14 18:15:03.539202682 +0000 UTC m=+39.315216742"
May 14 18:15:04.375080 systemd-networkd[1432]: cali29d97b9b525: Gained IPv6LL
May 14 18:15:04.460325 kubelet[2626]: I0514 18:15:04.460294 2626 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 14 18:15:04.460856 kubelet[2626]: I0514 18:15:04.460455 2626 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 14 18:15:04.843311 containerd[1523]: time="2025-05-14T18:15:04.843199194Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:15:04.844283 containerd[1523]: time="2025-05-14T18:15:04.844244476Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935"
May 14 18:15:04.850692 containerd[1523]: time="2025-05-14T18:15:04.850644015Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:15:04.853317 containerd[1523]: time="2025-05-14T18:15:04.853274101Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:15:04.853941 containerd[1523]: time="2025-05-14T18:15:04.853903110Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 1.550708087s"
May 14 18:15:04.853941 containerd[1523]: time="2025-05-14T18:15:04.853937432Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\""
May 14 18:15:04.856030 containerd[1523]: time="2025-05-14T18:15:04.855998273Z" level=info msg="CreateContainer within sandbox \"741e28b937ff7d411db82bd9f0c752119bbac1223025c6bd9a2eb858ee94ebe9\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
May 14 18:15:04.863138 containerd[1523]: time="2025-05-14T18:15:04.863098668Z" level=info msg="Container 55298745c2b8f8cf28a106f5f604eac5a68fa5e2951657705a5bdcf896229bdf: CDI devices from CRI Config.CDIDevices: []"
May 14 18:15:04.870506 containerd[1523]: time="2025-05-14T18:15:04.870333552Z" level=info msg="CreateContainer within sandbox \"741e28b937ff7d411db82bd9f0c752119bbac1223025c6bd9a2eb858ee94ebe9\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"55298745c2b8f8cf28a106f5f604eac5a68fa5e2951657705a5bdcf896229bdf\""
May 14 18:15:04.871573 containerd[1523]: time="2025-05-14T18:15:04.871473241Z" level=info msg="StartContainer for \"55298745c2b8f8cf28a106f5f604eac5a68fa5e2951657705a5bdcf896229bdf\""
May 14 18:15:04.874128 containerd[1523]: time="2025-05-14T18:15:04.873534842Z" level=info msg="connecting to shim 55298745c2b8f8cf28a106f5f604eac5a68fa5e2951657705a5bdcf896229bdf" address="unix:///run/containerd/s/dbf2d09403e6fb324ce887f51499e8de9d4ad5d0a7779d89f432b762dc52bb3d" protocol=ttrpc version=3
May 14 18:15:04.902126 systemd[1]: Started cri-containerd-55298745c2b8f8cf28a106f5f604eac5a68fa5e2951657705a5bdcf896229bdf.scope - libcontainer container 55298745c2b8f8cf28a106f5f604eac5a68fa5e2951657705a5bdcf896229bdf.
May 14 18:15:04.991886 containerd[1523]: time="2025-05-14T18:15:04.991841636Z" level=info msg="StartContainer for \"55298745c2b8f8cf28a106f5f604eac5a68fa5e2951657705a5bdcf896229bdf\" returns successfully"
May 14 18:15:04.993254 containerd[1523]: time="2025-05-14T18:15:04.993224744Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\""
May 14 18:15:05.464999 kubelet[2626]: I0514 18:15:05.464972 2626 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 14 18:15:05.526010 kubelet[2626]: I0514 18:15:05.525803 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-576d749fbb-vrk6m" podStartSLOduration=27.159631941 podStartE2EDuration="29.525784074s" podCreationTimestamp="2025-05-14 18:14:36 +0000 UTC" firstStartedPulling="2025-05-14 18:15:00.62552403 +0000 UTC m=+36.401538050" lastFinishedPulling="2025-05-14 18:15:02.991676123 +0000 UTC m=+38.767690183" observedRunningTime="2025-05-14 18:15:03.538768487 +0000 UTC m=+39.314782627" watchObservedRunningTime="2025-05-14 18:15:05.525784074 +0000 UTC m=+41.301798134"
May 14 18:15:05.551537 kubelet[2626]: I0514 18:15:05.551501 2626 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 14 18:15:05.558120 containerd[1523]: time="2025-05-14T18:15:05.558085447Z" level=info msg="StopContainer for \"8286123feb2b67b273f3fbd48b08aa615d4f79abeea27f19c2d00d2b218a65e6\" with timeout 30 (s)"
May 14 18:15:05.558494 containerd[1523]: time="2025-05-14T18:15:05.558445514Z" level=info msg="Stop container \"8286123feb2b67b273f3fbd48b08aa615d4f79abeea27f19c2d00d2b218a65e6\" with signal terminated"
May 14 18:15:05.627799 systemd[1]: cri-containerd-8286123feb2b67b273f3fbd48b08aa615d4f79abeea27f19c2d00d2b218a65e6.scope: Deactivated successfully.
May 14 18:15:05.628267 systemd[1]: cri-containerd-8286123feb2b67b273f3fbd48b08aa615d4f79abeea27f19c2d00d2b218a65e6.scope: Consumed 1.527s CPU time, 39M memory peak.
May 14 18:15:05.630464 containerd[1523]: time="2025-05-14T18:15:05.630421939Z" level=info msg="received exit event container_id:\"8286123feb2b67b273f3fbd48b08aa615d4f79abeea27f19c2d00d2b218a65e6\" id:\"8286123feb2b67b273f3fbd48b08aa615d4f79abeea27f19c2d00d2b218a65e6\" pid:4904 exit_status:1 exited_at:{seconds:1747246505 nanos:630127877}"
May 14 18:15:05.630568 containerd[1523]: time="2025-05-14T18:15:05.630457262Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8286123feb2b67b273f3fbd48b08aa615d4f79abeea27f19c2d00d2b218a65e6\" id:\"8286123feb2b67b273f3fbd48b08aa615d4f79abeea27f19c2d00d2b218a65e6\" pid:4904 exit_status:1 exited_at:{seconds:1747246505 nanos:630127877}"
May 14 18:15:05.650661 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8286123feb2b67b273f3fbd48b08aa615d4f79abeea27f19c2d00d2b218a65e6-rootfs.mount: Deactivated successfully.
May 14 18:15:05.714995 containerd[1523]: time="2025-05-14T18:15:05.714933035Z" level=info msg="StopContainer for \"8286123feb2b67b273f3fbd48b08aa615d4f79abeea27f19c2d00d2b218a65e6\" returns successfully"
May 14 18:15:05.716190 containerd[1523]: time="2025-05-14T18:15:05.716118165Z" level=info msg="StopPodSandbox for \"080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad\""
May 14 18:15:05.716190 containerd[1523]: time="2025-05-14T18:15:05.716180650Z" level=info msg="Container to stop \"8286123feb2b67b273f3fbd48b08aa615d4f79abeea27f19c2d00d2b218a65e6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 14 18:15:05.727325 systemd[1]: cri-containerd-080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad.scope: Deactivated successfully.
May 14 18:15:05.732419 containerd[1523]: time="2025-05-14T18:15:05.732371799Z" level=info msg="TaskExit event in podsandbox handler container_id:\"080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad\" id:\"080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad\" pid:4491 exit_status:137 exited_at:{seconds:1747246505 nanos:732127621}"
May 14 18:15:05.754808 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad-rootfs.mount: Deactivated successfully.
May 14 18:15:05.755995 containerd[1523]: time="2025-05-14T18:15:05.755935348Z" level=info msg="shim disconnected" id=080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad namespace=k8s.io
May 14 18:15:05.756076 containerd[1523]: time="2025-05-14T18:15:05.755990513Z" level=warning msg="cleaning up after shim disconnected" id=080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad namespace=k8s.io
May 14 18:15:05.756076 containerd[1523]: time="2025-05-14T18:15:05.756021435Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 18:15:05.778715 containerd[1523]: time="2025-05-14T18:15:05.778669235Z" level=info msg="received exit event sandbox_id:\"080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad\" exit_status:137 exited_at:{seconds:1747246505 nanos:732127621}"
May 14 18:15:05.781062 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad-shm.mount: Deactivated successfully.
May 14 18:15:05.843004 systemd-networkd[1432]: cali9edb9c2ff42: Link DOWN
May 14 18:15:05.843010 systemd-networkd[1432]: cali9edb9c2ff42: Lost carrier
May 14 18:15:05.997917 containerd[1523]: 2025-05-14 18:15:05.841 [INFO][5157] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad"
May 14 18:15:05.997917 containerd[1523]: 2025-05-14 18:15:05.842 [INFO][5157] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad" iface="eth0" netns="/var/run/netns/cni-f958ba71-54bc-365c-3433-2eeff7ac885d"
May 14 18:15:05.997917 containerd[1523]: 2025-05-14 18:15:05.842 [INFO][5157] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad" iface="eth0" netns="/var/run/netns/cni-f958ba71-54bc-365c-3433-2eeff7ac885d"
May 14 18:15:05.997917 containerd[1523]: 2025-05-14 18:15:05.849 [INFO][5157] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad" after=6.787355ms iface="eth0" netns="/var/run/netns/cni-f958ba71-54bc-365c-3433-2eeff7ac885d"
May 14 18:15:05.997917 containerd[1523]: 2025-05-14 18:15:05.849 [INFO][5157] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad"
May 14 18:15:05.997917 containerd[1523]: 2025-05-14 18:15:05.849 [INFO][5157] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad"
May 14 18:15:05.997917 containerd[1523]: 2025-05-14 18:15:05.867 [INFO][5168] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad" HandleID="k8s-pod-network.080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad" Workload="localhost-k8s-calico--apiserver--576d749fbb--vrk6m-eth0"
May 14 18:15:05.997917 containerd[1523]: 2025-05-14 18:15:05.867 [INFO][5168] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 14 18:15:05.997917 containerd[1523]: 2025-05-14 18:15:05.867 [INFO][5168] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 14 18:15:05.997917 containerd[1523]: 2025-05-14 18:15:05.992 [INFO][5168] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad" HandleID="k8s-pod-network.080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad" Workload="localhost-k8s-calico--apiserver--576d749fbb--vrk6m-eth0"
May 14 18:15:05.997917 containerd[1523]: 2025-05-14 18:15:05.992 [INFO][5168] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad" HandleID="k8s-pod-network.080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad" Workload="localhost-k8s-calico--apiserver--576d749fbb--vrk6m-eth0"
May 14 18:15:05.997917 containerd[1523]: 2025-05-14 18:15:05.993 [INFO][5168] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 14 18:15:05.997917 containerd[1523]: 2025-05-14 18:15:05.995 [INFO][5157] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad"
May 14 18:15:05.998652 containerd[1523]: time="2025-05-14T18:15:05.998406238Z" level=info msg="TearDown network for sandbox \"080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad\" successfully"
May 14 18:15:05.998652 containerd[1523]: time="2025-05-14T18:15:05.998432680Z" level=info msg="StopPodSandbox for \"080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad\" returns successfully"
May 14 18:15:06.001335 systemd[1]: run-netns-cni\x2df958ba71\x2d54bc\x2d365c\x2d3433\x2d2eeff7ac885d.mount: Deactivated successfully.
May 14 18:15:06.108454 kubelet[2626]: I0514 18:15:06.108408 2626 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/16aa4105-e2a7-46b7-982b-abcf1282d71f-calico-apiserver-certs\") pod \"16aa4105-e2a7-46b7-982b-abcf1282d71f\" (UID: \"16aa4105-e2a7-46b7-982b-abcf1282d71f\") "
May 14 18:15:06.108620 kubelet[2626]: I0514 18:15:06.108470 2626 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccq8x\" (UniqueName: \"kubernetes.io/projected/16aa4105-e2a7-46b7-982b-abcf1282d71f-kube-api-access-ccq8x\") pod \"16aa4105-e2a7-46b7-982b-abcf1282d71f\" (UID: \"16aa4105-e2a7-46b7-982b-abcf1282d71f\") "
May 14 18:15:06.128518 systemd[1]: var-lib-kubelet-pods-16aa4105\x2de2a7\x2d46b7\x2d982b\x2dabcf1282d71f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dccq8x.mount: Deactivated successfully.
May 14 18:15:06.134092 systemd[1]: var-lib-kubelet-pods-16aa4105\x2de2a7\x2d46b7\x2d982b\x2dabcf1282d71f-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully.
May 14 18:15:06.136019 kubelet[2626]: I0514 18:15:06.135039 2626 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16aa4105-e2a7-46b7-982b-abcf1282d71f-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "16aa4105-e2a7-46b7-982b-abcf1282d71f" (UID: "16aa4105-e2a7-46b7-982b-abcf1282d71f"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 14 18:15:06.136019 kubelet[2626]: I0514 18:15:06.135395 2626 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16aa4105-e2a7-46b7-982b-abcf1282d71f-kube-api-access-ccq8x" (OuterVolumeSpecName: "kube-api-access-ccq8x") pod "16aa4105-e2a7-46b7-982b-abcf1282d71f" (UID: "16aa4105-e2a7-46b7-982b-abcf1282d71f"). InnerVolumeSpecName "kube-api-access-ccq8x". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 14 18:15:06.209134 kubelet[2626]: I0514 18:15:06.209081 2626 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-ccq8x\" (UniqueName: \"kubernetes.io/projected/16aa4105-e2a7-46b7-982b-abcf1282d71f-kube-api-access-ccq8x\") on node \"localhost\" DevicePath \"\""
May 14 18:15:06.209134 kubelet[2626]: I0514 18:15:06.209118 2626 reconciler_common.go:288] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/16aa4105-e2a7-46b7-982b-abcf1282d71f-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\""
May 14 18:15:06.267562 containerd[1523]: time="2025-05-14T18:15:06.267447035Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:15:06.268970 containerd[1523]: time="2025-05-14T18:15:06.268790615Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299"
May 14 18:15:06.269626 containerd[1523]: time="2025-05-14T18:15:06.269598954Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:15:06.271890 containerd[1523]: time="2025-05-14T18:15:06.271858561Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:15:06.273304 containerd[1523]: time="2025-05-14T18:15:06.273038208Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 1.279763661s"
May 14 18:15:06.273304 containerd[1523]: time="2025-05-14T18:15:06.273088252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\""
May 14 18:15:06.276330 containerd[1523]: time="2025-05-14T18:15:06.276301929Z" level=info msg="CreateContainer within sandbox \"741e28b937ff7d411db82bd9f0c752119bbac1223025c6bd9a2eb858ee94ebe9\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
May 14 18:15:06.283965 containerd[1523]: time="2025-05-14T18:15:06.283709797Z" level=info msg="Container c5cdd00b94483337031812a0da7812b0c4f5d360af205d84e70a29d909cebb81: CDI devices from CRI Config.CDIDevices: []"
May 14 18:15:06.290225 containerd[1523]: time="2025-05-14T18:15:06.290122670Z" level=info msg="CreateContainer within sandbox \"741e28b937ff7d411db82bd9f0c752119bbac1223025c6bd9a2eb858ee94ebe9\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c5cdd00b94483337031812a0da7812b0c4f5d360af205d84e70a29d909cebb81\""
May 14 18:15:06.291233 containerd[1523]: time="2025-05-14T18:15:06.291199470Z" level=info msg="StartContainer for \"c5cdd00b94483337031812a0da7812b0c4f5d360af205d84e70a29d909cebb81\""
May 14 18:15:06.294181 containerd[1523]: time="2025-05-14T18:15:06.294141287Z" level=info msg="connecting to shim c5cdd00b94483337031812a0da7812b0c4f5d360af205d84e70a29d909cebb81" address="unix:///run/containerd/s/dbf2d09403e6fb324ce887f51499e8de9d4ad5d0a7779d89f432b762dc52bb3d" protocol=ttrpc version=3
May 14 18:15:06.318043 systemd[1]: Removed slice kubepods-besteffort-pod16aa4105_e2a7_46b7_982b_abcf1282d71f.slice - libcontainer container kubepods-besteffort-pod16aa4105_e2a7_46b7_982b_abcf1282d71f.slice.
May 14 18:15:06.318234 systemd[1]: kubepods-besteffort-pod16aa4105_e2a7_46b7_982b_abcf1282d71f.slice: Consumed 1.543s CPU time, 39.2M memory peak.
May 14 18:15:06.349109 systemd[1]: Started cri-containerd-c5cdd00b94483337031812a0da7812b0c4f5d360af205d84e70a29d909cebb81.scope - libcontainer container c5cdd00b94483337031812a0da7812b0c4f5d360af205d84e70a29d909cebb81.
May 14 18:15:06.379441 containerd[1523]: time="2025-05-14T18:15:06.379393144Z" level=info msg="StartContainer for \"c5cdd00b94483337031812a0da7812b0c4f5d360af205d84e70a29d909cebb81\" returns successfully"
May 14 18:15:06.471000 kubelet[2626]: I0514 18:15:06.469585 2626 scope.go:117] "RemoveContainer" containerID="8286123feb2b67b273f3fbd48b08aa615d4f79abeea27f19c2d00d2b218a65e6"
May 14 18:15:06.475170 containerd[1523]: time="2025-05-14T18:15:06.475129055Z" level=info msg="RemoveContainer for \"8286123feb2b67b273f3fbd48b08aa615d4f79abeea27f19c2d00d2b218a65e6\""
May 14 18:15:06.483055 containerd[1523]: time="2025-05-14T18:15:06.483015398Z" level=info msg="RemoveContainer for \"8286123feb2b67b273f3fbd48b08aa615d4f79abeea27f19c2d00d2b218a65e6\" returns successfully"
May 14 18:15:06.483451 kubelet[2626]: I0514 18:15:06.483414 2626 scope.go:117] "RemoveContainer" containerID="8286123feb2b67b273f3fbd48b08aa615d4f79abeea27f19c2d00d2b218a65e6"
May 14 18:15:06.483784 containerd[1523]: time="2025-05-14T18:15:06.483737971Z" level=error msg="ContainerStatus for \"8286123feb2b67b273f3fbd48b08aa615d4f79abeea27f19c2d00d2b218a65e6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8286123feb2b67b273f3fbd48b08aa615d4f79abeea27f19c2d00d2b218a65e6\": not found"
May 14 18:15:06.490220 kubelet[2626]: E0514 18:15:06.490106 2626 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8286123feb2b67b273f3fbd48b08aa615d4f79abeea27f19c2d00d2b218a65e6\": not found" containerID="8286123feb2b67b273f3fbd48b08aa615d4f79abeea27f19c2d00d2b218a65e6"
May 14 18:15:06.492583 kubelet[2626]: I0514 18:15:06.492461 2626 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8286123feb2b67b273f3fbd48b08aa615d4f79abeea27f19c2d00d2b218a65e6"} err="failed to get container status \"8286123feb2b67b273f3fbd48b08aa615d4f79abeea27f19c2d00d2b218a65e6\": rpc error: code = NotFound desc = an error occurred when try to find container \"8286123feb2b67b273f3fbd48b08aa615d4f79abeea27f19c2d00d2b218a65e6\": not found"
May 14 18:15:06.498379 kubelet[2626]: I0514 18:15:06.497742 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-sg55z" podStartSLOduration=25.803796643 podStartE2EDuration="29.497727565s" podCreationTimestamp="2025-05-14 18:14:37 +0000 UTC" firstStartedPulling="2025-05-14 18:15:02.579766175 +0000 UTC m=+38.355780195" lastFinishedPulling="2025-05-14 18:15:06.273697057 +0000 UTC m=+42.049711117" observedRunningTime="2025-05-14 18:15:06.487672982 +0000 UTC m=+42.263687042" watchObservedRunningTime="2025-05-14 18:15:06.497727565 +0000 UTC m=+42.273741625"
May 14 18:15:06.616627 systemd[1]: Started sshd@9-10.0.0.119:22-10.0.0.1:43122.service - OpenSSH per-connection server daemon (10.0.0.1:43122).
May 14 18:15:06.670100 sshd[5242]: Accepted publickey for core from 10.0.0.1 port 43122 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo
May 14 18:15:06.671524 sshd-session[5242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:15:06.676011 systemd-logind[1500]: New session 10 of user core.
May 14 18:15:06.687110 systemd[1]: Started session-10.scope - Session 10 of User core.
May 14 18:15:06.889616 sshd[5244]: Connection closed by 10.0.0.1 port 43122 May 14 18:15:06.889886 sshd-session[5242]: pam_unix(sshd:session): session closed for user core May 14 18:15:06.900504 systemd[1]: sshd@9-10.0.0.119:22-10.0.0.1:43122.service: Deactivated successfully. May 14 18:15:06.902358 systemd[1]: session-10.scope: Deactivated successfully. May 14 18:15:06.905163 systemd-logind[1500]: Session 10 logged out. Waiting for processes to exit. May 14 18:15:06.908434 systemd[1]: Started sshd@10-10.0.0.119:22-10.0.0.1:43132.service - OpenSSH per-connection server daemon (10.0.0.1:43132). May 14 18:15:06.909813 systemd-logind[1500]: Removed session 10. May 14 18:15:06.958861 sshd[5258]: Accepted publickey for core from 10.0.0.1 port 43132 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo May 14 18:15:06.960378 sshd-session[5258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:15:06.965057 systemd-logind[1500]: New session 11 of user core. May 14 18:15:06.978139 systemd[1]: Started session-11.scope - Session 11 of User core. May 14 18:15:07.060172 kubelet[2626]: I0514 18:15:07.060087 2626 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 18:15:07.200819 sshd[5260]: Connection closed by 10.0.0.1 port 43132 May 14 18:15:07.200314 sshd-session[5258]: pam_unix(sshd:session): session closed for user core May 14 18:15:07.211456 systemd[1]: sshd@10-10.0.0.119:22-10.0.0.1:43132.service: Deactivated successfully. May 14 18:15:07.216341 systemd[1]: session-11.scope: Deactivated successfully. May 14 18:15:07.218938 systemd-logind[1500]: Session 11 logged out. Waiting for processes to exit. May 14 18:15:07.223355 systemd[1]: Started sshd@11-10.0.0.119:22-10.0.0.1:43142.service - OpenSSH per-connection server daemon (10.0.0.1:43142). 
May 14 18:15:07.226257 kubelet[2626]: I0514 18:15:07.226176 2626 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 18:15:07.227030 systemd-logind[1500]: Removed session 11. May 14 18:15:07.279941 sshd[5271]: Accepted publickey for core from 10.0.0.1 port 43142 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo May 14 18:15:07.281410 sshd-session[5271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:15:07.285570 systemd-logind[1500]: New session 12 of user core. May 14 18:15:07.300145 systemd[1]: Started session-12.scope - Session 12 of User core. May 14 18:15:07.390828 kubelet[2626]: I0514 18:15:07.390786 2626 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 14 18:15:07.407548 kubelet[2626]: I0514 18:15:07.406989 2626 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 14 18:15:07.469212 sshd[5273]: Connection closed by 10.0.0.1 port 43142 May 14 18:15:07.470155 sshd-session[5271]: pam_unix(sshd:session): session closed for user core May 14 18:15:07.476577 systemd[1]: sshd@11-10.0.0.119:22-10.0.0.1:43142.service: Deactivated successfully. May 14 18:15:07.476866 systemd-logind[1500]: Session 12 logged out. Waiting for processes to exit. May 14 18:15:07.479705 systemd[1]: session-12.scope: Deactivated successfully. May 14 18:15:07.481556 systemd-logind[1500]: Removed session 12. 
May 14 18:15:08.251493 systemd-networkd[1432]: vxlan.calico: Link UP May 14 18:15:08.251499 systemd-networkd[1432]: vxlan.calico: Gained carrier May 14 18:15:08.300972 kubelet[2626]: I0514 18:15:08.300924 2626 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16aa4105-e2a7-46b7-982b-abcf1282d71f" path="/var/lib/kubelet/pods/16aa4105-e2a7-46b7-982b-abcf1282d71f/volumes" May 14 18:15:09.752330 systemd-networkd[1432]: vxlan.calico: Gained IPv6LL May 14 18:15:12.492730 systemd[1]: Started sshd@12-10.0.0.119:22-10.0.0.1:58498.service - OpenSSH per-connection server daemon (10.0.0.1:58498). May 14 18:15:12.553398 sshd[5437]: Accepted publickey for core from 10.0.0.1 port 58498 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo May 14 18:15:12.555388 sshd-session[5437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:15:12.560472 systemd-logind[1500]: New session 13 of user core. May 14 18:15:12.573146 systemd[1]: Started session-13.scope - Session 13 of User core. May 14 18:15:12.758060 sshd[5439]: Connection closed by 10.0.0.1 port 58498 May 14 18:15:12.758526 sshd-session[5437]: pam_unix(sshd:session): session closed for user core May 14 18:15:12.762182 systemd[1]: session-13.scope: Deactivated successfully. May 14 18:15:12.763493 systemd[1]: sshd@12-10.0.0.119:22-10.0.0.1:58498.service: Deactivated successfully. May 14 18:15:12.766265 systemd-logind[1500]: Session 13 logged out. Waiting for processes to exit. May 14 18:15:12.767497 systemd-logind[1500]: Removed session 13. 
May 14 18:15:14.537551 containerd[1523]: time="2025-05-14T18:15:14.537510149Z" level=info msg="TaskExit event in podsandbox handler container_id:\"232cb99bc8634b4d61e0642710c3ec0a96e229d13b290f122a058fd9da460957\" id:\"2bc0b4415569de2e489d4d30e53250316e4785f37c54f3efbddf5a824983223d\" pid:5475 exited_at:{seconds:1747246514 nanos:537233012}" May 14 18:15:17.769614 systemd[1]: Started sshd@13-10.0.0.119:22-10.0.0.1:58504.service - OpenSSH per-connection server daemon (10.0.0.1:58504). May 14 18:15:17.812897 sshd[5490]: Accepted publickey for core from 10.0.0.1 port 58504 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo May 14 18:15:17.814310 sshd-session[5490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:15:17.818247 systemd-logind[1500]: New session 14 of user core. May 14 18:15:17.826108 systemd[1]: Started session-14.scope - Session 14 of User core. May 14 18:15:17.961670 sshd[5492]: Connection closed by 10.0.0.1 port 58504 May 14 18:15:17.962010 sshd-session[5490]: pam_unix(sshd:session): session closed for user core May 14 18:15:17.965335 systemd[1]: sshd@13-10.0.0.119:22-10.0.0.1:58504.service: Deactivated successfully. May 14 18:15:17.967429 systemd[1]: session-14.scope: Deactivated successfully. May 14 18:15:17.969214 systemd-logind[1500]: Session 14 logged out. Waiting for processes to exit. May 14 18:15:17.970858 systemd-logind[1500]: Removed session 14. May 14 18:15:22.980631 systemd[1]: Started sshd@14-10.0.0.119:22-10.0.0.1:51958.service - OpenSSH per-connection server daemon (10.0.0.1:51958). May 14 18:15:23.033561 sshd[5518]: Accepted publickey for core from 10.0.0.1 port 51958 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo May 14 18:15:23.034401 sshd-session[5518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:15:23.038900 systemd-logind[1500]: New session 15 of user core. 
May 14 18:15:23.047164 systemd[1]: Started session-15.scope - Session 15 of User core. May 14 18:15:23.201714 sshd[5520]: Connection closed by 10.0.0.1 port 51958 May 14 18:15:23.202656 sshd-session[5518]: pam_unix(sshd:session): session closed for user core May 14 18:15:23.212025 systemd[1]: sshd@14-10.0.0.119:22-10.0.0.1:51958.service: Deactivated successfully. May 14 18:15:23.216837 systemd[1]: session-15.scope: Deactivated successfully. May 14 18:15:23.219176 systemd-logind[1500]: Session 15 logged out. Waiting for processes to exit. May 14 18:15:23.222414 systemd-logind[1500]: Removed session 15. May 14 18:15:23.236836 containerd[1523]: time="2025-05-14T18:15:23.236255417Z" level=info msg="StopContainer for \"12fe0e5f69b6debed422c4529ce64af5f52ef524a0397fc82f061c78958919ca\" with timeout 30 (s)" May 14 18:15:23.238110 containerd[1523]: time="2025-05-14T18:15:23.237997219Z" level=info msg="Stop container \"12fe0e5f69b6debed422c4529ce64af5f52ef524a0397fc82f061c78958919ca\" with signal terminated" May 14 18:15:23.256791 systemd[1]: cri-containerd-12fe0e5f69b6debed422c4529ce64af5f52ef524a0397fc82f061c78958919ca.scope: Deactivated successfully. May 14 18:15:23.257518 systemd[1]: cri-containerd-12fe0e5f69b6debed422c4529ce64af5f52ef524a0397fc82f061c78958919ca.scope: Consumed 1.058s CPU time, 38.5M memory peak, 197K read from disk. 
May 14 18:15:23.260304 containerd[1523]: time="2025-05-14T18:15:23.260181507Z" level=info msg="received exit event container_id:\"12fe0e5f69b6debed422c4529ce64af5f52ef524a0397fc82f061c78958919ca\" id:\"12fe0e5f69b6debed422c4529ce64af5f52ef524a0397fc82f061c78958919ca\" pid:4872 exit_status:1 exited_at:{seconds:1747246523 nanos:259224501}" May 14 18:15:23.260654 containerd[1523]: time="2025-05-14T18:15:23.260633648Z" level=info msg="TaskExit event in podsandbox handler container_id:\"12fe0e5f69b6debed422c4529ce64af5f52ef524a0397fc82f061c78958919ca\" id:\"12fe0e5f69b6debed422c4529ce64af5f52ef524a0397fc82f061c78958919ca\" pid:4872 exit_status:1 exited_at:{seconds:1747246523 nanos:259224501}" May 14 18:15:23.279872 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12fe0e5f69b6debed422c4529ce64af5f52ef524a0397fc82f061c78958919ca-rootfs.mount: Deactivated successfully. May 14 18:15:23.293747 containerd[1523]: time="2025-05-14T18:15:23.293704209Z" level=info msg="StopContainer for \"12fe0e5f69b6debed422c4529ce64af5f52ef524a0397fc82f061c78958919ca\" returns successfully" May 14 18:15:23.294375 containerd[1523]: time="2025-05-14T18:15:23.294345920Z" level=info msg="StopPodSandbox for \"277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10\"" May 14 18:15:23.294429 containerd[1523]: time="2025-05-14T18:15:23.294414163Z" level=info msg="Container to stop \"12fe0e5f69b6debed422c4529ce64af5f52ef524a0397fc82f061c78958919ca\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 18:15:23.301193 systemd[1]: cri-containerd-277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10.scope: Deactivated successfully. 
May 14 18:15:23.303173 containerd[1523]: time="2025-05-14T18:15:23.303107934Z" level=info msg="TaskExit event in podsandbox handler container_id:\"277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10\" id:\"277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10\" pid:4435 exit_status:137 exited_at:{seconds:1747246523 nanos:302753077}" May 14 18:15:23.327044 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10-rootfs.mount: Deactivated successfully. May 14 18:15:23.327417 containerd[1523]: time="2025-05-14T18:15:23.327367119Z" level=info msg="shim disconnected" id=277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10 namespace=k8s.io May 14 18:15:23.327477 containerd[1523]: time="2025-05-14T18:15:23.327400401Z" level=warning msg="cleaning up after shim disconnected" id=277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10 namespace=k8s.io May 14 18:15:23.327477 containerd[1523]: time="2025-05-14T18:15:23.327459803Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 18:15:23.341669 containerd[1523]: time="2025-05-14T18:15:23.341363060Z" level=info msg="received exit event sandbox_id:\"277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10\" exit_status:137 exited_at:{seconds:1747246523 nanos:302753077}" May 14 18:15:23.343146 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10-shm.mount: Deactivated successfully. 
May 14 18:15:23.388385 systemd-networkd[1432]: cali0ca96547cb8: Link DOWN May 14 18:15:23.388395 systemd-networkd[1432]: cali0ca96547cb8: Lost carrier May 14 18:15:23.457573 containerd[1523]: 2025-05-14 18:15:23.385 [INFO][5613] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10" May 14 18:15:23.457573 containerd[1523]: 2025-05-14 18:15:23.386 [INFO][5613] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10" iface="eth0" netns="/var/run/netns/cni-12dcc303-960c-8265-bb2f-b855992009ee" May 14 18:15:23.457573 containerd[1523]: 2025-05-14 18:15:23.386 [INFO][5613] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10" iface="eth0" netns="/var/run/netns/cni-12dcc303-960c-8265-bb2f-b855992009ee" May 14 18:15:23.457573 containerd[1523]: 2025-05-14 18:15:23.396 [INFO][5613] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10" after=9.997912ms iface="eth0" netns="/var/run/netns/cni-12dcc303-960c-8265-bb2f-b855992009ee" May 14 18:15:23.457573 containerd[1523]: 2025-05-14 18:15:23.396 [INFO][5613] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10" May 14 18:15:23.457573 containerd[1523]: 2025-05-14 18:15:23.396 [INFO][5613] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10" May 14 18:15:23.457573 containerd[1523]: 2025-05-14 18:15:23.417 [INFO][5628] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10" HandleID="k8s-pod-network.277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10" Workload="localhost-k8s-calico--apiserver--576d749fbb--j4vhz-eth0" May 14 18:15:23.457573 containerd[1523]: 2025-05-14 18:15:23.418 [INFO][5628] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 18:15:23.457573 containerd[1523]: 2025-05-14 18:15:23.418 [INFO][5628] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 14 18:15:23.457573 containerd[1523]: 2025-05-14 18:15:23.451 [INFO][5628] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10" HandleID="k8s-pod-network.277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10" Workload="localhost-k8s-calico--apiserver--576d749fbb--j4vhz-eth0" May 14 18:15:23.457573 containerd[1523]: 2025-05-14 18:15:23.451 [INFO][5628] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10" HandleID="k8s-pod-network.277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10" Workload="localhost-k8s-calico--apiserver--576d749fbb--j4vhz-eth0" May 14 18:15:23.457573 containerd[1523]: 2025-05-14 18:15:23.453 [INFO][5628] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 18:15:23.457573 containerd[1523]: 2025-05-14 18:15:23.455 [INFO][5613] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10" May 14 18:15:23.458436 containerd[1523]: time="2025-05-14T18:15:23.458351064Z" level=info msg="TearDown network for sandbox \"277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10\" successfully" May 14 18:15:23.458436 containerd[1523]: time="2025-05-14T18:15:23.458384665Z" level=info msg="StopPodSandbox for \"277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10\" returns successfully" May 14 18:15:23.460107 systemd[1]: run-netns-cni\x2d12dcc303\x2d960c\x2d8265\x2dbb2f\x2db855992009ee.mount: Deactivated successfully. 
May 14 18:15:23.517364 kubelet[2626]: I0514 18:15:23.517272 2626 scope.go:117] "RemoveContainer" containerID="12fe0e5f69b6debed422c4529ce64af5f52ef524a0397fc82f061c78958919ca" May 14 18:15:23.520001 containerd[1523]: time="2025-05-14T18:15:23.519972093Z" level=info msg="RemoveContainer for \"12fe0e5f69b6debed422c4529ce64af5f52ef524a0397fc82f061c78958919ca\"" May 14 18:15:23.526920 containerd[1523]: time="2025-05-14T18:15:23.526881299Z" level=info msg="RemoveContainer for \"12fe0e5f69b6debed422c4529ce64af5f52ef524a0397fc82f061c78958919ca\" returns successfully" May 14 18:15:23.527204 kubelet[2626]: I0514 18:15:23.527140 2626 scope.go:117] "RemoveContainer" containerID="12fe0e5f69b6debed422c4529ce64af5f52ef524a0397fc82f061c78958919ca" May 14 18:15:23.527434 containerd[1523]: time="2025-05-14T18:15:23.527389123Z" level=error msg="ContainerStatus for \"12fe0e5f69b6debed422c4529ce64af5f52ef524a0397fc82f061c78958919ca\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"12fe0e5f69b6debed422c4529ce64af5f52ef524a0397fc82f061c78958919ca\": not found" May 14 18:15:23.527566 kubelet[2626]: E0514 18:15:23.527540 2626 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"12fe0e5f69b6debed422c4529ce64af5f52ef524a0397fc82f061c78958919ca\": not found" containerID="12fe0e5f69b6debed422c4529ce64af5f52ef524a0397fc82f061c78958919ca" May 14 18:15:23.527601 kubelet[2626]: I0514 18:15:23.527575 2626 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"12fe0e5f69b6debed422c4529ce64af5f52ef524a0397fc82f061c78958919ca"} err="failed to get container status \"12fe0e5f69b6debed422c4529ce64af5f52ef524a0397fc82f061c78958919ca\": rpc error: code = NotFound desc = an error occurred when try to find container \"12fe0e5f69b6debed422c4529ce64af5f52ef524a0397fc82f061c78958919ca\": not found" May 14 18:15:23.611217 kubelet[2626]: I0514 18:15:23.611184 2626 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c03aac11-237f-4f5f-95ac-fc014337d597-calico-apiserver-certs\") pod \"c03aac11-237f-4f5f-95ac-fc014337d597\" (UID: \"c03aac11-237f-4f5f-95ac-fc014337d597\") " May 14 18:15:23.611303 kubelet[2626]: I0514 18:15:23.611231 2626 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7dbz\" (UniqueName: \"kubernetes.io/projected/c03aac11-237f-4f5f-95ac-fc014337d597-kube-api-access-l7dbz\") pod \"c03aac11-237f-4f5f-95ac-fc014337d597\" (UID: \"c03aac11-237f-4f5f-95ac-fc014337d597\") " May 14 18:15:23.614004 kubelet[2626]: I0514 18:15:23.613697 2626 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03aac11-237f-4f5f-95ac-fc014337d597-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "c03aac11-237f-4f5f-95ac-fc014337d597" (UID: "c03aac11-237f-4f5f-95ac-fc014337d597"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" May 14 18:15:23.614004 kubelet[2626]: I0514 18:15:23.613911 2626 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03aac11-237f-4f5f-95ac-fc014337d597-kube-api-access-l7dbz" (OuterVolumeSpecName: "kube-api-access-l7dbz") pod "c03aac11-237f-4f5f-95ac-fc014337d597" (UID: "c03aac11-237f-4f5f-95ac-fc014337d597"). InnerVolumeSpecName "kube-api-access-l7dbz". PluginName "kubernetes.io/projected", VolumeGidValue "" May 14 18:15:23.615225 systemd[1]: var-lib-kubelet-pods-c03aac11\x2d237f\x2d4f5f\x2d95ac\x2dfc014337d597-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl7dbz.mount: Deactivated successfully.
May 14 18:15:23.615403 systemd[1]: var-lib-kubelet-pods-c03aac11\x2d237f\x2d4f5f\x2d95ac\x2dfc014337d597-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. May 14 18:15:23.712266 kubelet[2626]: I0514 18:15:23.712222 2626 reconciler_common.go:288] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c03aac11-237f-4f5f-95ac-fc014337d597-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\"" May 14 18:15:23.712266 kubelet[2626]: I0514 18:15:23.712255 2626 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-l7dbz\" (UniqueName: \"kubernetes.io/projected/c03aac11-237f-4f5f-95ac-fc014337d597-kube-api-access-l7dbz\") on node \"localhost\" DevicePath \"\"" May 14 18:15:23.822687 systemd[1]: Removed slice kubepods-besteffort-podc03aac11_237f_4f5f_95ac_fc014337d597.slice - libcontainer container kubepods-besteffort-podc03aac11_237f_4f5f_95ac_fc014337d597.slice. May 14 18:15:23.823023 systemd[1]: kubepods-besteffort-podc03aac11_237f_4f5f_95ac_fc014337d597.slice: Consumed 1.076s CPU time, 38.8M memory peak, 197K read from disk. 
May 14 18:15:24.233774 containerd[1523]: time="2025-05-14T18:15:24.233142576Z" level=info msg="StopContainer for \"7062d3b70d37af2db1454d4e00f1c940f7214bf9e036c47662414c824bd809e2\" with timeout 300 (s)" May 14 18:15:24.234133 containerd[1523]: time="2025-05-14T18:15:24.234049858Z" level=info msg="Stop container \"7062d3b70d37af2db1454d4e00f1c940f7214bf9e036c47662414c824bd809e2\" with signal terminated" May 14 18:15:24.300160 containerd[1523]: time="2025-05-14T18:15:24.299153856Z" level=info msg="StopPodSandbox for \"080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad\"" May 14 18:15:24.304038 kubelet[2626]: I0514 18:15:24.303891 2626 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03aac11-237f-4f5f-95ac-fc014337d597" path="/var/lib/kubelet/pods/c03aac11-237f-4f5f-95ac-fc014337d597/volumes" May 14 18:15:24.339878 containerd[1523]: time="2025-05-14T18:15:24.339832650Z" level=info msg="StopContainer for \"037739d7950b5b65568f92e7654fd50c795bfd4cd350b32b0377f45b5e9a3b2c\" with timeout 30 (s)" May 14 18:15:24.341946 containerd[1523]: time="2025-05-14T18:15:24.341904225Z" level=info msg="Stop container \"037739d7950b5b65568f92e7654fd50c795bfd4cd350b32b0377f45b5e9a3b2c\" with signal terminated" May 14 18:15:24.374413 systemd[1]: cri-containerd-037739d7950b5b65568f92e7654fd50c795bfd4cd350b32b0377f45b5e9a3b2c.scope: Deactivated successfully. 
May 14 18:15:24.378931 containerd[1523]: time="2025-05-14T18:15:24.378885568Z" level=info msg="TaskExit event in podsandbox handler container_id:\"037739d7950b5b65568f92e7654fd50c795bfd4cd350b32b0377f45b5e9a3b2c\" id:\"037739d7950b5b65568f92e7654fd50c795bfd4cd350b32b0377f45b5e9a3b2c\" pid:4276 exit_status:2 exited_at:{seconds:1747246524 nanos:378581914}" May 14 18:15:24.379646 containerd[1523]: time="2025-05-14T18:15:24.379609602Z" level=info msg="received exit event container_id:\"037739d7950b5b65568f92e7654fd50c795bfd4cd350b32b0377f45b5e9a3b2c\" id:\"037739d7950b5b65568f92e7654fd50c795bfd4cd350b32b0377f45b5e9a3b2c\" pid:4276 exit_status:2 exited_at:{seconds:1747246524 nanos:378581914}" May 14 18:15:24.414452 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-037739d7950b5b65568f92e7654fd50c795bfd4cd350b32b0377f45b5e9a3b2c-rootfs.mount: Deactivated successfully. May 14 18:15:24.425752 containerd[1523]: time="2025-05-14T18:15:24.425713965Z" level=info msg="TaskExit event in podsandbox handler container_id:\"232cb99bc8634b4d61e0642710c3ec0a96e229d13b290f122a058fd9da460957\" id:\"0e7acfd79cc2f5f02f37a9deb935495ad9bda9f6322e4cb09c54fae1e3c9a6d6\" pid:5675 exited_at:{seconds:1747246524 nanos:425404231}" May 14 18:15:24.429621 containerd[1523]: time="2025-05-14T18:15:24.429585223Z" level=info msg="StopContainer for \"232cb99bc8634b4d61e0642710c3ec0a96e229d13b290f122a058fd9da460957\" with timeout 5 (s)" May 14 18:15:24.430726 containerd[1523]: time="2025-05-14T18:15:24.430694474Z" level=info msg="Stop container \"232cb99bc8634b4d61e0642710c3ec0a96e229d13b290f122a058fd9da460957\" with signal terminated" May 14 18:15:24.435447 containerd[1523]: time="2025-05-14T18:15:24.435401611Z" level=info msg="StopContainer for \"037739d7950b5b65568f92e7654fd50c795bfd4cd350b32b0377f45b5e9a3b2c\" returns successfully" May 14 18:15:24.435898 containerd[1523]: time="2025-05-14T18:15:24.435864632Z" level=info msg="StopPodSandbox for \"38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801\"" May 14 18:15:24.435985 containerd[1523]: time="2025-05-14T18:15:24.435932155Z" level=info msg="Container to stop \"037739d7950b5b65568f92e7654fd50c795bfd4cd350b32b0377f45b5e9a3b2c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 18:15:24.452311 systemd[1]: cri-containerd-38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801.scope: Deactivated successfully. May 14 18:15:24.455046 containerd[1523]: time="2025-05-14T18:15:24.454998073Z" level=info msg="TaskExit event in podsandbox handler container_id:\"38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801\" id:\"38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801\" pid:4160 exit_status:137 exited_at:{seconds:1747246524 nanos:454495530}" May 14 18:15:24.461497 containerd[1523]: 2025-05-14 18:15:24.384 [WARNING][5683] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad" WorkloadEndpoint="localhost-k8s-calico--apiserver--576d749fbb--vrk6m-eth0" May 14 18:15:24.461497 containerd[1523]: 2025-05-14 18:15:24.385 [INFO][5683] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad" May 14 18:15:24.461497 containerd[1523]: 2025-05-14 18:15:24.385 [INFO][5683] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad" iface="eth0" netns="" May 14 18:15:24.461497 containerd[1523]: 2025-05-14 18:15:24.385 [INFO][5683] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad" May 14 18:15:24.461497 containerd[1523]: 2025-05-14 18:15:24.385 [INFO][5683] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad" May 14 18:15:24.461497 containerd[1523]: 2025-05-14 18:15:24.436 [INFO][5704] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad" HandleID="k8s-pod-network.080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad" Workload="localhost-k8s-calico--apiserver--576d749fbb--vrk6m-eth0" May 14 18:15:24.461497 containerd[1523]: 2025-05-14 18:15:24.436 [INFO][5704] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 18:15:24.461497 containerd[1523]: 2025-05-14 18:15:24.436 [INFO][5704] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 18:15:24.461497 containerd[1523]: 2025-05-14 18:15:24.448 [WARNING][5704] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad" HandleID="k8s-pod-network.080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad" Workload="localhost-k8s-calico--apiserver--576d749fbb--vrk6m-eth0" May 14 18:15:24.461497 containerd[1523]: 2025-05-14 18:15:24.448 [INFO][5704] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad" HandleID="k8s-pod-network.080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad" Workload="localhost-k8s-calico--apiserver--576d749fbb--vrk6m-eth0" May 14 18:15:24.461497 containerd[1523]: 2025-05-14 18:15:24.451 [INFO][5704] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 18:15:24.461497 containerd[1523]: 2025-05-14 18:15:24.456 [INFO][5683] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad" May 14 18:15:24.463045 containerd[1523]: time="2025-05-14T18:15:24.461613058Z" level=info msg="TearDown network for sandbox \"080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad\" successfully" May 14 18:15:24.463045 containerd[1523]: time="2025-05-14T18:15:24.461636779Z" level=info msg="StopPodSandbox for \"080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad\" returns successfully" May 14 18:15:24.463045 containerd[1523]: time="2025-05-14T18:15:24.462669707Z" level=info msg="RemovePodSandbox for \"080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad\"" May 14 18:15:24.466979 containerd[1523]: time="2025-05-14T18:15:24.466295034Z" level=info msg="Forcibly stopping sandbox \"080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad\"" May 14 18:15:24.469494 systemd[1]: cri-containerd-232cb99bc8634b4d61e0642710c3ec0a96e229d13b290f122a058fd9da460957.scope: Deactivated successfully.
May 14 18:15:24.470099 systemd[1]: cri-containerd-232cb99bc8634b4d61e0642710c3ec0a96e229d13b290f122a058fd9da460957.scope: Consumed 3.752s CPU time, 167.6M memory peak, 19.3M read from disk, 2.5M written to disk. May 14 18:15:24.472129 containerd[1523]: time="2025-05-14T18:15:24.472097341Z" level=info msg="received exit event container_id:\"232cb99bc8634b4d61e0642710c3ec0a96e229d13b290f122a058fd9da460957\" id:\"232cb99bc8634b4d61e0642710c3ec0a96e229d13b290f122a058fd9da460957\" pid:3642 exited_at:{seconds:1747246524 nanos:471764646}" May 14 18:15:24.493598 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801-rootfs.mount: Deactivated successfully. May 14 18:15:24.496089 containerd[1523]: time="2025-05-14T18:15:24.496028643Z" level=info msg="shim disconnected" id=38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801 namespace=k8s.io May 14 18:15:24.496238 containerd[1523]: time="2025-05-14T18:15:24.496090486Z" level=warning msg="cleaning up after shim disconnected" id=38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801 namespace=k8s.io May 14 18:15:24.496277 containerd[1523]: time="2025-05-14T18:15:24.496237613Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 18:15:24.506214 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-232cb99bc8634b4d61e0642710c3ec0a96e229d13b290f122a058fd9da460957-rootfs.mount: Deactivated successfully. 
May 14 18:15:24.527972 containerd[1523]: time="2025-05-14T18:15:24.527918032Z" level=info msg="StopContainer for \"232cb99bc8634b4d61e0642710c3ec0a96e229d13b290f122a058fd9da460957\" returns successfully" May 14 18:15:24.531364 containerd[1523]: time="2025-05-14T18:15:24.531306748Z" level=info msg="StopPodSandbox for \"ca3731175868405a0085cfb59b27e0ec687d49cd02c6b1ece8929600ed62a876\"" May 14 18:15:24.531603 containerd[1523]: time="2025-05-14T18:15:24.531390191Z" level=info msg="Container to stop \"36399c4564023e5d1af359218addbb992fbfe7258de0b262d5da1070706162ea\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 18:15:24.531603 containerd[1523]: time="2025-05-14T18:15:24.531402832Z" level=info msg="Container to stop \"4d99d1f3b47f6d055469d12026f0a67337cc4d7d9bfc9aaa60f8fcf67e620f84\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 18:15:24.531603 containerd[1523]: time="2025-05-14T18:15:24.531411152Z" level=info msg="Container to stop \"232cb99bc8634b4d61e0642710c3ec0a96e229d13b290f122a058fd9da460957\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 18:15:24.536856 containerd[1523]: time="2025-05-14T18:15:24.536809561Z" level=info msg="TaskExit event in podsandbox handler container_id:\"232cb99bc8634b4d61e0642710c3ec0a96e229d13b290f122a058fd9da460957\" id:\"232cb99bc8634b4d61e0642710c3ec0a96e229d13b290f122a058fd9da460957\" pid:3642 exited_at:{seconds:1747246524 nanos:471764646}" May 14 18:15:24.537052 containerd[1523]: time="2025-05-14T18:15:24.536888605Z" level=info msg="received exit event sandbox_id:\"38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801\" exit_status:137 exited_at:{seconds:1747246524 nanos:454495530}" May 14 18:15:24.540590 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801-shm.mount: Deactivated successfully. 
May 14 18:15:24.553471 systemd[1]: cri-containerd-ca3731175868405a0085cfb59b27e0ec687d49cd02c6b1ece8929600ed62a876.scope: Deactivated successfully.
May 14 18:15:24.555250 containerd[1523]: time="2025-05-14T18:15:24.555202168Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca3731175868405a0085cfb59b27e0ec687d49cd02c6b1ece8929600ed62a876\" id:\"ca3731175868405a0085cfb59b27e0ec687d49cd02c6b1ece8929600ed62a876\" pid:3168 exit_status:137 exited_at:{seconds:1747246524 nanos:554563579}"
May 14 18:15:24.635011 containerd[1523]: time="2025-05-14T18:15:24.633618339Z" level=info msg="shim disconnected" id=ca3731175868405a0085cfb59b27e0ec687d49cd02c6b1ece8929600ed62a876 namespace=k8s.io
May 14 18:15:24.635011 containerd[1523]: time="2025-05-14T18:15:24.633698623Z" level=warning msg="cleaning up after shim disconnected" id=ca3731175868405a0085cfb59b27e0ec687d49cd02c6b1ece8929600ed62a876 namespace=k8s.io
May 14 18:15:24.635011 containerd[1523]: time="2025-05-14T18:15:24.633730665Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 18:15:24.650450 containerd[1523]: 2025-05-14 18:15:24.543 [WARNING][5767] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad" WorkloadEndpoint="localhost-k8s-calico--apiserver--576d749fbb--vrk6m-eth0"
May 14 18:15:24.650450 containerd[1523]: 2025-05-14 18:15:24.544 [INFO][5767] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad"
May 14 18:15:24.650450 containerd[1523]: 2025-05-14 18:15:24.544 [INFO][5767] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad" iface="eth0" netns=""
May 14 18:15:24.650450 containerd[1523]: 2025-05-14 18:15:24.545 [INFO][5767] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad"
May 14 18:15:24.650450 containerd[1523]: 2025-05-14 18:15:24.545 [INFO][5767] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad"
May 14 18:15:24.650450 containerd[1523]: 2025-05-14 18:15:24.613 [INFO][5820] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad" HandleID="k8s-pod-network.080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad" Workload="localhost-k8s-calico--apiserver--576d749fbb--vrk6m-eth0"
May 14 18:15:24.650450 containerd[1523]: 2025-05-14 18:15:24.617 [INFO][5820] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 14 18:15:24.650450 containerd[1523]: 2025-05-14 18:15:24.617 [INFO][5820] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 14 18:15:24.650450 containerd[1523]: 2025-05-14 18:15:24.634 [WARNING][5820] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad" HandleID="k8s-pod-network.080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad" Workload="localhost-k8s-calico--apiserver--576d749fbb--vrk6m-eth0"
May 14 18:15:24.650450 containerd[1523]: 2025-05-14 18:15:24.634 [INFO][5820] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad" HandleID="k8s-pod-network.080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad" Workload="localhost-k8s-calico--apiserver--576d749fbb--vrk6m-eth0"
May 14 18:15:24.650450 containerd[1523]: 2025-05-14 18:15:24.637 [INFO][5820] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 14 18:15:24.650450 containerd[1523]: 2025-05-14 18:15:24.642 [INFO][5767] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad"
May 14 18:15:24.650450 containerd[1523]: time="2025-05-14T18:15:24.649148855Z" level=info msg="TearDown network for sandbox \"080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad\" successfully"
May 14 18:15:24.654161 systemd-networkd[1432]: cali067b29c4684: Link DOWN
May 14 18:15:24.654167 systemd-networkd[1432]: cali067b29c4684: Lost carrier
May 14 18:15:24.658105 containerd[1523]: time="2025-05-14T18:15:24.658068785Z" level=info msg="Ensure that sandbox 080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad in task-service has been cleanup successfully"
May 14 18:15:24.673090 containerd[1523]: time="2025-05-14T18:15:24.673051275Z" level=info msg="RemovePodSandbox \"080e353fe769d9f149a8c87bdabb67280234b28b7b8b1ee78f592b24a14401ad\" returns successfully"
May 14 18:15:24.675065 containerd[1523]: time="2025-05-14T18:15:24.674808116Z" level=info msg="StopPodSandbox for \"277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10\""
May 14 18:15:24.686093 containerd[1523]: time="2025-05-14T18:15:24.685252237Z" level=info msg="received exit event sandbox_id:\"ca3731175868405a0085cfb59b27e0ec687d49cd02c6b1ece8929600ed62a876\" exit_status:137 exited_at:{seconds:1747246524 nanos:554563579}"
May 14 18:15:24.686602 containerd[1523]: time="2025-05-14T18:15:24.686563538Z" level=info msg="TearDown network for sandbox \"ca3731175868405a0085cfb59b27e0ec687d49cd02c6b1ece8929600ed62a876\" successfully"
May 14 18:15:24.686602 containerd[1523]: time="2025-05-14T18:15:24.686592059Z" level=info msg="StopPodSandbox for \"ca3731175868405a0085cfb59b27e0ec687d49cd02c6b1ece8929600ed62a876\" returns successfully"
May 14 18:15:24.735925 kubelet[2626]: E0514 18:15:24.735887 2626 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ad204f77-3166-479d-93b7-db06e21e14fc" containerName="calico-node"
May 14 18:15:24.735925 kubelet[2626]: E0514 18:15:24.735919 2626 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="16aa4105-e2a7-46b7-982b-abcf1282d71f" containerName="calico-apiserver"
May 14 18:15:24.735925 kubelet[2626]: E0514 18:15:24.735926 2626 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ad204f77-3166-479d-93b7-db06e21e14fc" containerName="flexvol-driver"
May 14 18:15:24.735925 kubelet[2626]: E0514 18:15:24.735935 2626 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ad204f77-3166-479d-93b7-db06e21e14fc" containerName="install-cni"
May 14 18:15:24.736416 kubelet[2626]: E0514 18:15:24.735941 2626 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c03aac11-237f-4f5f-95ac-fc014337d597" containerName="calico-apiserver"
May 14 18:15:24.736416 kubelet[2626]: I0514 18:15:24.736110 2626 memory_manager.go:354] "RemoveStaleState removing state" podUID="c03aac11-237f-4f5f-95ac-fc014337d597" containerName="calico-apiserver"
May 14 18:15:24.736416 kubelet[2626]: I0514 18:15:24.736120 2626 memory_manager.go:354] "RemoveStaleState removing state" podUID="16aa4105-e2a7-46b7-982b-abcf1282d71f" containerName="calico-apiserver"
May 14 18:15:24.736416 kubelet[2626]: I0514 18:15:24.736126 2626 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad204f77-3166-479d-93b7-db06e21e14fc" containerName="calico-node"
May 14 18:15:24.747520 systemd[1]: Created slice kubepods-besteffort-pod02d85e80_b0dd_4631_875b_725e625e3a64.slice - libcontainer container kubepods-besteffort-pod02d85e80_b0dd_4631_875b_725e625e3a64.slice.
May 14 18:15:24.771179 containerd[1523]: 2025-05-14 18:15:24.653 [INFO][5836] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801"
May 14 18:15:24.771179 containerd[1523]: 2025-05-14 18:15:24.653 [INFO][5836] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801" iface="eth0" netns="/var/run/netns/cni-777bad87-4f74-3a33-5f89-f294f876ed47"
May 14 18:15:24.771179 containerd[1523]: 2025-05-14 18:15:24.653 [INFO][5836] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801" iface="eth0" netns="/var/run/netns/cni-777bad87-4f74-3a33-5f89-f294f876ed47"
May 14 18:15:24.771179 containerd[1523]: 2025-05-14 18:15:24.662 [INFO][5836] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801" after=8.591796ms iface="eth0" netns="/var/run/netns/cni-777bad87-4f74-3a33-5f89-f294f876ed47"
May 14 18:15:24.771179 containerd[1523]: 2025-05-14 18:15:24.662 [INFO][5836] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801"
May 14 18:15:24.771179 containerd[1523]: 2025-05-14 18:15:24.662 [INFO][5836] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801"
May 14 18:15:24.771179 containerd[1523]: 2025-05-14 18:15:24.700 [INFO][5876] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801" HandleID="k8s-pod-network.38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801" Workload="localhost-k8s-calico--kube--controllers--85fb4f869d--ch7ss-eth0"
May 14 18:15:24.771179 containerd[1523]: 2025-05-14 18:15:24.700 [INFO][5876] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 14 18:15:24.771179 containerd[1523]: 2025-05-14 18:15:24.700 [INFO][5876] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 14 18:15:24.771179 containerd[1523]: 2025-05-14 18:15:24.761 [INFO][5876] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801" HandleID="k8s-pod-network.38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801" Workload="localhost-k8s-calico--kube--controllers--85fb4f869d--ch7ss-eth0"
May 14 18:15:24.771179 containerd[1523]: 2025-05-14 18:15:24.761 [INFO][5876] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801" HandleID="k8s-pod-network.38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801" Workload="localhost-k8s-calico--kube--controllers--85fb4f869d--ch7ss-eth0"
May 14 18:15:24.771179 containerd[1523]: 2025-05-14 18:15:24.765 [INFO][5876] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 14 18:15:24.771179 containerd[1523]: 2025-05-14 18:15:24.766 [INFO][5836] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801"
May 14 18:15:24.772837 containerd[1523]: time="2025-05-14T18:15:24.772782268Z" level=info msg="TearDown network for sandbox \"38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801\" successfully"
May 14 18:15:24.772837 containerd[1523]: time="2025-05-14T18:15:24.772828670Z" level=info msg="StopPodSandbox for \"38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801\" returns successfully"
May 14 18:15:24.817996 kubelet[2626]: I0514 18:15:24.817592 2626 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad204f77-3166-479d-93b7-db06e21e14fc-tigera-ca-bundle\") pod \"ad204f77-3166-479d-93b7-db06e21e14fc\" (UID: \"ad204f77-3166-479d-93b7-db06e21e14fc\") "
May 14 18:15:24.817996 kubelet[2626]: I0514 18:15:24.817636 2626 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ad204f77-3166-479d-93b7-db06e21e14fc-lib-modules\") pod \"ad204f77-3166-479d-93b7-db06e21e14fc\" (UID: \"ad204f77-3166-479d-93b7-db06e21e14fc\") "
May 14 18:15:24.817996 kubelet[2626]: I0514 18:15:24.817653 2626 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ad204f77-3166-479d-93b7-db06e21e14fc-cni-net-dir\") pod \"ad204f77-3166-479d-93b7-db06e21e14fc\" (UID: \"ad204f77-3166-479d-93b7-db06e21e14fc\") "
May 14 18:15:24.817996 kubelet[2626]: I0514 18:15:24.817668 2626 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ad204f77-3166-479d-93b7-db06e21e14fc-var-lib-calico\") pod \"ad204f77-3166-479d-93b7-db06e21e14fc\" (UID: \"ad204f77-3166-479d-93b7-db06e21e14fc\") "
May 14 18:15:24.817996 kubelet[2626]: I0514 18:15:24.817687 2626 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ad204f77-3166-479d-93b7-db06e21e14fc-flexvol-driver-host\") pod \"ad204f77-3166-479d-93b7-db06e21e14fc\" (UID: \"ad204f77-3166-479d-93b7-db06e21e14fc\") "
May 14 18:15:24.817996 kubelet[2626]: I0514 18:15:24.817711 2626 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ad204f77-3166-479d-93b7-db06e21e14fc-policysync\") pod \"ad204f77-3166-479d-93b7-db06e21e14fc\" (UID: \"ad204f77-3166-479d-93b7-db06e21e14fc\") "
May 14 18:15:24.818252 kubelet[2626]: I0514 18:15:24.817728 2626 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tb527\" (UniqueName: \"kubernetes.io/projected/ad204f77-3166-479d-93b7-db06e21e14fc-kube-api-access-tb527\") pod \"ad204f77-3166-479d-93b7-db06e21e14fc\" (UID: \"ad204f77-3166-479d-93b7-db06e21e14fc\") "
May 14 18:15:24.818252 kubelet[2626]: I0514 18:15:24.817748 2626 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ad204f77-3166-479d-93b7-db06e21e14fc-xtables-lock\") pod \"ad204f77-3166-479d-93b7-db06e21e14fc\" (UID: \"ad204f77-3166-479d-93b7-db06e21e14fc\") "
May 14 18:15:24.818252 kubelet[2626]: I0514 18:15:24.817787 2626 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ad204f77-3166-479d-93b7-db06e21e14fc-var-run-calico\") pod \"ad204f77-3166-479d-93b7-db06e21e14fc\" (UID: \"ad204f77-3166-479d-93b7-db06e21e14fc\") "
May 14 18:15:24.818252 kubelet[2626]: I0514 18:15:24.817801 2626 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ad204f77-3166-479d-93b7-db06e21e14fc-cni-bin-dir\") pod \"ad204f77-3166-479d-93b7-db06e21e14fc\" (UID: \"ad204f77-3166-479d-93b7-db06e21e14fc\") "
May 14 18:15:24.818252 kubelet[2626]: I0514 18:15:24.817820 2626 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ad204f77-3166-479d-93b7-db06e21e14fc-node-certs\") pod \"ad204f77-3166-479d-93b7-db06e21e14fc\" (UID: \"ad204f77-3166-479d-93b7-db06e21e14fc\") "
May 14 18:15:24.818252 kubelet[2626]: I0514 18:15:24.817837 2626 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ad204f77-3166-479d-93b7-db06e21e14fc-cni-log-dir\") pod \"ad204f77-3166-479d-93b7-db06e21e14fc\" (UID: \"ad204f77-3166-479d-93b7-db06e21e14fc\") "
May 14 18:15:24.818387 kubelet[2626]: I0514 18:15:24.817918 2626 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad204f77-3166-479d-93b7-db06e21e14fc-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "ad204f77-3166-479d-93b7-db06e21e14fc" (UID: "ad204f77-3166-479d-93b7-db06e21e14fc"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 18:15:24.818387 kubelet[2626]: I0514 18:15:24.818087 2626 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad204f77-3166-479d-93b7-db06e21e14fc-policysync" (OuterVolumeSpecName: "policysync") pod "ad204f77-3166-479d-93b7-db06e21e14fc" (UID: "ad204f77-3166-479d-93b7-db06e21e14fc"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 18:15:24.818387 kubelet[2626]: I0514 18:15:24.818137 2626 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad204f77-3166-479d-93b7-db06e21e14fc-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ad204f77-3166-479d-93b7-db06e21e14fc" (UID: "ad204f77-3166-479d-93b7-db06e21e14fc"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 18:15:24.818387 kubelet[2626]: I0514 18:15:24.818159 2626 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad204f77-3166-479d-93b7-db06e21e14fc-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "ad204f77-3166-479d-93b7-db06e21e14fc" (UID: "ad204f77-3166-479d-93b7-db06e21e14fc"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 18:15:24.818387 kubelet[2626]: I0514 18:15:24.818178 2626 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad204f77-3166-479d-93b7-db06e21e14fc-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "ad204f77-3166-479d-93b7-db06e21e14fc" (UID: "ad204f77-3166-479d-93b7-db06e21e14fc"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 18:15:24.818497 kubelet[2626]: I0514 18:15:24.818195 2626 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad204f77-3166-479d-93b7-db06e21e14fc-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "ad204f77-3166-479d-93b7-db06e21e14fc" (UID: "ad204f77-3166-479d-93b7-db06e21e14fc"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 18:15:24.818497 kubelet[2626]: I0514 18:15:24.818222 2626 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad204f77-3166-479d-93b7-db06e21e14fc-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "ad204f77-3166-479d-93b7-db06e21e14fc" (UID: "ad204f77-3166-479d-93b7-db06e21e14fc"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 18:15:24.818497 kubelet[2626]: I0514 18:15:24.818237 2626 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad204f77-3166-479d-93b7-db06e21e14fc-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ad204f77-3166-479d-93b7-db06e21e14fc" (UID: "ad204f77-3166-479d-93b7-db06e21e14fc"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 18:15:24.818497 kubelet[2626]: I0514 18:15:24.818255 2626 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad204f77-3166-479d-93b7-db06e21e14fc-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "ad204f77-3166-479d-93b7-db06e21e14fc" (UID: "ad204f77-3166-479d-93b7-db06e21e14fc"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 18:15:24.821238 kubelet[2626]: I0514 18:15:24.821199 2626 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad204f77-3166-479d-93b7-db06e21e14fc-node-certs" (OuterVolumeSpecName: "node-certs") pod "ad204f77-3166-479d-93b7-db06e21e14fc" (UID: "ad204f77-3166-479d-93b7-db06e21e14fc"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 14 18:15:24.822892 kubelet[2626]: I0514 18:15:24.822803 2626 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad204f77-3166-479d-93b7-db06e21e14fc-kube-api-access-tb527" (OuterVolumeSpecName: "kube-api-access-tb527") pod "ad204f77-3166-479d-93b7-db06e21e14fc" (UID: "ad204f77-3166-479d-93b7-db06e21e14fc"). InnerVolumeSpecName "kube-api-access-tb527". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 14 18:15:24.828833 kubelet[2626]: I0514 18:15:24.828469 2626 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad204f77-3166-479d-93b7-db06e21e14fc-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "ad204f77-3166-479d-93b7-db06e21e14fc" (UID: "ad204f77-3166-479d-93b7-db06e21e14fc"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 14 18:15:24.834884 containerd[1523]: 2025-05-14 18:15:24.762 [WARNING][5896] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10" WorkloadEndpoint="localhost-k8s-calico--apiserver--576d749fbb--j4vhz-eth0"
May 14 18:15:24.834884 containerd[1523]: 2025-05-14 18:15:24.762 [INFO][5896] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10"
May 14 18:15:24.834884 containerd[1523]: 2025-05-14 18:15:24.762 [INFO][5896] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10" iface="eth0" netns=""
May 14 18:15:24.834884 containerd[1523]: 2025-05-14 18:15:24.762 [INFO][5896] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10"
May 14 18:15:24.834884 containerd[1523]: 2025-05-14 18:15:24.762 [INFO][5896] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10"
May 14 18:15:24.834884 containerd[1523]: 2025-05-14 18:15:24.803 [INFO][5907] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10" HandleID="k8s-pod-network.277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10" Workload="localhost-k8s-calico--apiserver--576d749fbb--j4vhz-eth0"
May 14 18:15:24.834884 containerd[1523]: 2025-05-14 18:15:24.804 [INFO][5907] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 14 18:15:24.834884 containerd[1523]: 2025-05-14 18:15:24.804 [INFO][5907] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 14 18:15:24.834884 containerd[1523]: 2025-05-14 18:15:24.826 [WARNING][5907] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10" HandleID="k8s-pod-network.277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10" Workload="localhost-k8s-calico--apiserver--576d749fbb--j4vhz-eth0"
May 14 18:15:24.834884 containerd[1523]: 2025-05-14 18:15:24.826 [INFO][5907] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10" HandleID="k8s-pod-network.277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10" Workload="localhost-k8s-calico--apiserver--576d749fbb--j4vhz-eth0"
May 14 18:15:24.834884 containerd[1523]: 2025-05-14 18:15:24.829 [INFO][5907] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 14 18:15:24.834884 containerd[1523]: 2025-05-14 18:15:24.831 [INFO][5896] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10"
May 14 18:15:24.835343 containerd[1523]: time="2025-05-14T18:15:24.834940491Z" level=info msg="TearDown network for sandbox \"277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10\" successfully"
May 14 18:15:24.835343 containerd[1523]: time="2025-05-14T18:15:24.835015854Z" level=info msg="StopPodSandbox for \"277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10\" returns successfully"
May 14 18:15:24.836070 containerd[1523]: time="2025-05-14T18:15:24.836040982Z" level=info msg="RemovePodSandbox for \"277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10\""
May 14 18:15:24.836163 containerd[1523]: time="2025-05-14T18:15:24.836078143Z" level=info msg="Forcibly stopping sandbox \"277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10\""
May 14 18:15:24.905681 containerd[1523]: 2025-05-14 18:15:24.871 [WARNING][5933] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10" WorkloadEndpoint="localhost-k8s-calico--apiserver--576d749fbb--j4vhz-eth0"
May 14 18:15:24.905681 containerd[1523]: 2025-05-14 18:15:24.871 [INFO][5933] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10"
May 14 18:15:24.905681 containerd[1523]: 2025-05-14 18:15:24.871 [INFO][5933] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10" iface="eth0" netns=""
May 14 18:15:24.905681 containerd[1523]: 2025-05-14 18:15:24.871 [INFO][5933] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10"
May 14 18:15:24.905681 containerd[1523]: 2025-05-14 18:15:24.871 [INFO][5933] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10"
May 14 18:15:24.905681 containerd[1523]: 2025-05-14 18:15:24.890 [INFO][5942] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10" HandleID="k8s-pod-network.277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10" Workload="localhost-k8s-calico--apiserver--576d749fbb--j4vhz-eth0"
May 14 18:15:24.905681 containerd[1523]: 2025-05-14 18:15:24.891 [INFO][5942] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 14 18:15:24.905681 containerd[1523]: 2025-05-14 18:15:24.891 [INFO][5942] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 14 18:15:24.905681 containerd[1523]: 2025-05-14 18:15:24.901 [WARNING][5942] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10" HandleID="k8s-pod-network.277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10" Workload="localhost-k8s-calico--apiserver--576d749fbb--j4vhz-eth0"
May 14 18:15:24.905681 containerd[1523]: 2025-05-14 18:15:24.901 [INFO][5942] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10" HandleID="k8s-pod-network.277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10" Workload="localhost-k8s-calico--apiserver--576d749fbb--j4vhz-eth0"
May 14 18:15:24.905681 containerd[1523]: 2025-05-14 18:15:24.902 [INFO][5942] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 14 18:15:24.905681 containerd[1523]: 2025-05-14 18:15:24.904 [INFO][5933] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10"
May 14 18:15:24.906048 containerd[1523]: time="2025-05-14T18:15:24.905717070Z" level=info msg="TearDown network for sandbox \"277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10\" successfully"
May 14 18:15:24.910679 containerd[1523]: time="2025-05-14T18:15:24.910468049Z" level=info msg="Ensure that sandbox 277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10 in task-service has been cleanup successfully"
May 14 18:15:24.915496 containerd[1523]: time="2025-05-14T18:15:24.915470119Z" level=info msg="RemovePodSandbox \"277c91135f988cd0c017535de9747d29c202728a88029ab8a3a3116c344c5a10\" returns successfully"
May 14 18:15:24.918072 kubelet[2626]: I0514 18:15:24.918020 2626 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj6cb\" (UniqueName: \"kubernetes.io/projected/a2abc07a-c489-4f6d-83c3-a98e091c2fd2-kube-api-access-pj6cb\") pod \"a2abc07a-c489-4f6d-83c3-a98e091c2fd2\" (UID: \"a2abc07a-c489-4f6d-83c3-a98e091c2fd2\") "
May 14 18:15:24.918159 kubelet[2626]: I0514 18:15:24.918085 2626 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2abc07a-c489-4f6d-83c3-a98e091c2fd2-tigera-ca-bundle\") pod \"a2abc07a-c489-4f6d-83c3-a98e091c2fd2\" (UID: \"a2abc07a-c489-4f6d-83c3-a98e091c2fd2\") "
May 14 18:15:24.918159 kubelet[2626]: I0514 18:15:24.918150 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/02d85e80-b0dd-4631-875b-725e625e3a64-node-certs\") pod \"calico-node-kcptb\" (UID: \"02d85e80-b0dd-4631-875b-725e625e3a64\") " pod="calico-system/calico-node-kcptb"
May 14 18:15:24.918207 kubelet[2626]: I0514 18:15:24.918173 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/02d85e80-b0dd-4631-875b-725e625e3a64-policysync\") pod \"calico-node-kcptb\" (UID: \"02d85e80-b0dd-4631-875b-725e625e3a64\") " pod="calico-system/calico-node-kcptb"
May 14 18:15:24.918207 kubelet[2626]: I0514 18:15:24.918191 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/02d85e80-b0dd-4631-875b-725e625e3a64-cni-net-dir\") pod \"calico-node-kcptb\" (UID: \"02d85e80-b0dd-4631-875b-725e625e3a64\") " pod="calico-system/calico-node-kcptb"
May 14 18:15:24.918245 kubelet[2626]: I0514 18:15:24.918205 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/02d85e80-b0dd-4631-875b-725e625e3a64-var-run-calico\") pod \"calico-node-kcptb\" (UID: \"02d85e80-b0dd-4631-875b-725e625e3a64\") " pod="calico-system/calico-node-kcptb"
May 14 18:15:24.918245 kubelet[2626]: I0514 18:15:24.918221 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr8xl\" (UniqueName: \"kubernetes.io/projected/02d85e80-b0dd-4631-875b-725e625e3a64-kube-api-access-pr8xl\") pod \"calico-node-kcptb\" (UID: \"02d85e80-b0dd-4631-875b-725e625e3a64\") " pod="calico-system/calico-node-kcptb"
May 14 18:15:24.918245 kubelet[2626]: I0514 18:15:24.918238 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/02d85e80-b0dd-4631-875b-725e625e3a64-lib-modules\") pod \"calico-node-kcptb\" (UID: \"02d85e80-b0dd-4631-875b-725e625e3a64\") " pod="calico-system/calico-node-kcptb"
May 14 18:15:24.918311 kubelet[2626]: I0514 18:15:24.918253 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/02d85e80-b0dd-4631-875b-725e625e3a64-var-lib-calico\") pod \"calico-node-kcptb\" (UID: \"02d85e80-b0dd-4631-875b-725e625e3a64\") " pod="calico-system/calico-node-kcptb"
May 14 18:15:24.918311 kubelet[2626]: I0514 18:15:24.918268 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/02d85e80-b0dd-4631-875b-725e625e3a64-cni-bin-dir\") pod \"calico-node-kcptb\" (UID: \"02d85e80-b0dd-4631-875b-725e625e3a64\") " pod="calico-system/calico-node-kcptb"
May 14 18:15:24.918311 kubelet[2626]: I0514 18:15:24.918285 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/02d85e80-b0dd-4631-875b-725e625e3a64-xtables-lock\") pod \"calico-node-kcptb\" (UID: \"02d85e80-b0dd-4631-875b-725e625e3a64\") " pod="calico-system/calico-node-kcptb"
May 14 18:15:24.918311 kubelet[2626]: I0514 18:15:24.918301 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/02d85e80-b0dd-4631-875b-725e625e3a64-tigera-ca-bundle\") pod \"calico-node-kcptb\" (UID: \"02d85e80-b0dd-4631-875b-725e625e3a64\") " pod="calico-system/calico-node-kcptb"
May 14 18:15:24.918396 kubelet[2626]: I0514 18:15:24.918318 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/02d85e80-b0dd-4631-875b-725e625e3a64-flexvol-driver-host\") pod \"calico-node-kcptb\" (UID: \"02d85e80-b0dd-4631-875b-725e625e3a64\") " pod="calico-system/calico-node-kcptb"
May 14 18:15:24.918396 kubelet[2626]: I0514 18:15:24.918336 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/02d85e80-b0dd-4631-875b-725e625e3a64-cni-log-dir\") pod \"calico-node-kcptb\" (UID: \"02d85e80-b0dd-4631-875b-725e625e3a64\") " pod="calico-system/calico-node-kcptb"
May 14 18:15:24.918396 kubelet[2626]: I0514 18:15:24.918358 2626 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ad204f77-3166-479d-93b7-db06e21e14fc-xtables-lock\") on node \"localhost\" DevicePath \"\""
May 14 18:15:24.918396 kubelet[2626]: I0514 18:15:24.918367 2626 reconciler_common.go:288] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ad204f77-3166-479d-93b7-db06e21e14fc-cni-bin-dir\") on node \"localhost\" DevicePath \"\""
May 14 18:15:24.918396 kubelet[2626]: I0514 18:15:24.918375 2626 reconciler_common.go:288] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ad204f77-3166-479d-93b7-db06e21e14fc-node-certs\") on node \"localhost\" DevicePath \"\""
May 14 18:15:24.918396 kubelet[2626]: I0514 18:15:24.918382 2626 reconciler_common.go:288] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ad204f77-3166-479d-93b7-db06e21e14fc-var-run-calico\")
on node \"localhost\" DevicePath \"\"" May 14 18:15:24.918396 kubelet[2626]: I0514 18:15:24.918389 2626 reconciler_common.go:288] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ad204f77-3166-479d-93b7-db06e21e14fc-cni-log-dir\") on node \"localhost\" DevicePath \"\"" May 14 18:15:24.918531 kubelet[2626]: I0514 18:15:24.918396 2626 reconciler_common.go:288] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad204f77-3166-479d-93b7-db06e21e14fc-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" May 14 18:15:24.918531 kubelet[2626]: I0514 18:15:24.918404 2626 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ad204f77-3166-479d-93b7-db06e21e14fc-lib-modules\") on node \"localhost\" DevicePath \"\"" May 14 18:15:24.918531 kubelet[2626]: I0514 18:15:24.918411 2626 reconciler_common.go:288] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ad204f77-3166-479d-93b7-db06e21e14fc-cni-net-dir\") on node \"localhost\" DevicePath \"\"" May 14 18:15:24.918531 kubelet[2626]: I0514 18:15:24.918418 2626 reconciler_common.go:288] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ad204f77-3166-479d-93b7-db06e21e14fc-var-lib-calico\") on node \"localhost\" DevicePath \"\"" May 14 18:15:24.918531 kubelet[2626]: I0514 18:15:24.918434 2626 reconciler_common.go:288] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ad204f77-3166-479d-93b7-db06e21e14fc-flexvol-driver-host\") on node \"localhost\" DevicePath \"\"" May 14 18:15:24.918531 kubelet[2626]: I0514 18:15:24.918444 2626 reconciler_common.go:288] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ad204f77-3166-479d-93b7-db06e21e14fc-policysync\") on node \"localhost\" DevicePath \"\"" May 14 18:15:24.918531 kubelet[2626]: I0514 18:15:24.918452 2626 
reconciler_common.go:288] "Volume detached for volume \"kube-api-access-tb527\" (UniqueName: \"kubernetes.io/projected/ad204f77-3166-479d-93b7-db06e21e14fc-kube-api-access-tb527\") on node \"localhost\" DevicePath \"\"" May 14 18:15:24.921179 kubelet[2626]: I0514 18:15:24.921145 2626 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2abc07a-c489-4f6d-83c3-a98e091c2fd2-kube-api-access-pj6cb" (OuterVolumeSpecName: "kube-api-access-pj6cb") pod "a2abc07a-c489-4f6d-83c3-a98e091c2fd2" (UID: "a2abc07a-c489-4f6d-83c3-a98e091c2fd2"). InnerVolumeSpecName "kube-api-access-pj6cb". PluginName "kubernetes.io/projected", VolumeGidValue "" May 14 18:15:24.924925 kubelet[2626]: I0514 18:15:24.924881 2626 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2abc07a-c489-4f6d-83c3-a98e091c2fd2-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "a2abc07a-c489-4f6d-83c3-a98e091c2fd2" (UID: "a2abc07a-c489-4f6d-83c3-a98e091c2fd2"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 14 18:15:25.021370 kubelet[2626]: I0514 18:15:25.019517 2626 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-pj6cb\" (UniqueName: \"kubernetes.io/projected/a2abc07a-c489-4f6d-83c3-a98e091c2fd2-kube-api-access-pj6cb\") on node \"localhost\" DevicePath \"\"" May 14 18:15:25.021370 kubelet[2626]: I0514 18:15:25.019552 2626 reconciler_common.go:288] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2abc07a-c489-4f6d-83c3-a98e091c2fd2-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" May 14 18:15:25.053997 containerd[1523]: time="2025-05-14T18:15:25.053931357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kcptb,Uid:02d85e80-b0dd-4631-875b-725e625e3a64,Namespace:calico-system,Attempt:0,}" May 14 18:15:25.066977 containerd[1523]: time="2025-05-14T18:15:25.066111305Z" level=info msg="connecting to shim 7638ec9bae9bfec1c5ecb196149ea98d211e4cfcd3fd2b201ac3abc3efe203a9" address="unix:///run/containerd/s/04971a0ffdd203a95517c6548cbf9dc40b0cb8b284da9fd2e4e334d83362e562" namespace=k8s.io protocol=ttrpc version=3 May 14 18:15:25.090116 systemd[1]: Started cri-containerd-7638ec9bae9bfec1c5ecb196149ea98d211e4cfcd3fd2b201ac3abc3efe203a9.scope - libcontainer container 7638ec9bae9bfec1c5ecb196149ea98d211e4cfcd3fd2b201ac3abc3efe203a9. 
May 14 18:15:25.110598 containerd[1523]: time="2025-05-14T18:15:25.110536980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kcptb,Uid:02d85e80-b0dd-4631-875b-725e625e3a64,Namespace:calico-system,Attempt:0,} returns sandbox id \"7638ec9bae9bfec1c5ecb196149ea98d211e4cfcd3fd2b201ac3abc3efe203a9\""
May 14 18:15:25.113050 containerd[1523]: time="2025-05-14T18:15:25.112941328Z" level=info msg="CreateContainer within sandbox \"7638ec9bae9bfec1c5ecb196149ea98d211e4cfcd3fd2b201ac3abc3efe203a9\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
May 14 18:15:25.123491 containerd[1523]: time="2025-05-14T18:15:25.123457201Z" level=info msg="Container ed71532db9bd6d9a9b3353be8e672288b4f4036043878870e3d25b0397d07e7a: CDI devices from CRI Config.CDIDevices: []"
May 14 18:15:25.130452 containerd[1523]: time="2025-05-14T18:15:25.130402473Z" level=info msg="CreateContainer within sandbox \"7638ec9bae9bfec1c5ecb196149ea98d211e4cfcd3fd2b201ac3abc3efe203a9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ed71532db9bd6d9a9b3353be8e672288b4f4036043878870e3d25b0397d07e7a\""
May 14 18:15:25.132142 containerd[1523]: time="2025-05-14T18:15:25.132100149Z" level=info msg="StartContainer for \"ed71532db9bd6d9a9b3353be8e672288b4f4036043878870e3d25b0397d07e7a\""
May 14 18:15:25.134233 containerd[1523]: time="2025-05-14T18:15:25.134207844Z" level=info msg="connecting to shim ed71532db9bd6d9a9b3353be8e672288b4f4036043878870e3d25b0397d07e7a" address="unix:///run/containerd/s/04971a0ffdd203a95517c6548cbf9dc40b0cb8b284da9fd2e4e334d83362e562" protocol=ttrpc version=3
May 14 18:15:25.153114 systemd[1]: Started cri-containerd-ed71532db9bd6d9a9b3353be8e672288b4f4036043878870e3d25b0397d07e7a.scope - libcontainer container ed71532db9bd6d9a9b3353be8e672288b4f4036043878870e3d25b0397d07e7a.
May 14 18:15:25.187074 containerd[1523]: time="2025-05-14T18:15:25.187039217Z" level=info msg="StartContainer for \"ed71532db9bd6d9a9b3353be8e672288b4f4036043878870e3d25b0397d07e7a\" returns successfully"
May 14 18:15:25.212030 systemd[1]: cri-containerd-ed71532db9bd6d9a9b3353be8e672288b4f4036043878870e3d25b0397d07e7a.scope: Deactivated successfully.
May 14 18:15:25.212353 systemd[1]: cri-containerd-ed71532db9bd6d9a9b3353be8e672288b4f4036043878870e3d25b0397d07e7a.scope: Consumed 38ms CPU time, 17.9M memory peak, 9.9M read from disk, 6.2M written to disk.
May 14 18:15:25.215363 containerd[1523]: time="2025-05-14T18:15:25.215318808Z" level=info msg="received exit event container_id:\"ed71532db9bd6d9a9b3353be8e672288b4f4036043878870e3d25b0397d07e7a\" id:\"ed71532db9bd6d9a9b3353be8e672288b4f4036043878870e3d25b0397d07e7a\" pid:6011 exited_at:{seconds:1747246525 nanos:215019274}"
May 14 18:15:25.215826 containerd[1523]: time="2025-05-14T18:15:25.215796189Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ed71532db9bd6d9a9b3353be8e672288b4f4036043878870e3d25b0397d07e7a\" id:\"ed71532db9bd6d9a9b3353be8e672288b4f4036043878870e3d25b0397d07e7a\" pid:6011 exited_at:{seconds:1747246525 nanos:215019274}"
May 14 18:15:25.282559 systemd[1]: var-lib-kubelet-pods-a2abc07a\x2dc489\x2d4f6d\x2d83c3\x2da98e091c2fd2-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully.
May 14 18:15:25.282656 systemd[1]: run-netns-cni\x2d777bad87\x2d4f74\x2d3a33\x2d5f89\x2df294f876ed47.mount: Deactivated successfully.
May 14 18:15:25.282701 systemd[1]: var-lib-kubelet-pods-ad204f77\x2d3166\x2d479d\x2d93b7\x2ddb06e21e14fc-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully.
May 14 18:15:25.282751 systemd[1]: var-lib-kubelet-pods-a2abc07a\x2dc489\x2d4f6d\x2d83c3\x2da98e091c2fd2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpj6cb.mount: Deactivated successfully.
May 14 18:15:25.282800 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca3731175868405a0085cfb59b27e0ec687d49cd02c6b1ece8929600ed62a876-rootfs.mount: Deactivated successfully.
May 14 18:15:25.282856 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ca3731175868405a0085cfb59b27e0ec687d49cd02c6b1ece8929600ed62a876-shm.mount: Deactivated successfully.
May 14 18:15:25.282899 systemd[1]: var-lib-kubelet-pods-ad204f77\x2d3166\x2d479d\x2d93b7\x2ddb06e21e14fc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtb527.mount: Deactivated successfully.
May 14 18:15:25.282946 systemd[1]: var-lib-kubelet-pods-ad204f77\x2d3166\x2d479d\x2d93b7\x2ddb06e21e14fc-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully.
May 14 18:15:25.572229 kubelet[2626]: I0514 18:15:25.571969 2626 scope.go:117] "RemoveContainer" containerID="232cb99bc8634b4d61e0642710c3ec0a96e229d13b290f122a058fd9da460957"
May 14 18:15:25.577375 containerd[1523]: time="2025-05-14T18:15:25.577280269Z" level=info msg="RemoveContainer for \"232cb99bc8634b4d61e0642710c3ec0a96e229d13b290f122a058fd9da460957\""
May 14 18:15:25.580047 containerd[1523]: time="2025-05-14T18:15:25.579818423Z" level=info msg="CreateContainer within sandbox \"7638ec9bae9bfec1c5ecb196149ea98d211e4cfcd3fd2b201ac3abc3efe203a9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
May 14 18:15:25.582969 containerd[1523]: time="2025-05-14T18:15:25.582895441Z" level=info msg="RemoveContainer for \"232cb99bc8634b4d61e0642710c3ec0a96e229d13b290f122a058fd9da460957\" returns successfully"
May 14 18:15:25.583777 kubelet[2626]: I0514 18:15:25.583701 2626 scope.go:117] "RemoveContainer" containerID="36399c4564023e5d1af359218addbb992fbfe7258de0b262d5da1070706162ea"
May 14 18:15:25.587334 systemd[1]: Removed slice kubepods-besteffort-podad204f77_3166_479d_93b7_db06e21e14fc.slice - libcontainer container kubepods-besteffort-podad204f77_3166_479d_93b7_db06e21e14fc.slice.
May 14 18:15:25.587444 systemd[1]: kubepods-besteffort-podad204f77_3166_479d_93b7_db06e21e14fc.slice: Consumed 4.270s CPU time, 214.8M memory peak, 19.8M read from disk, 159M written to disk.
May 14 18:15:25.593988 containerd[1523]: time="2025-05-14T18:15:25.592176378Z" level=info msg="Container 8eb476916cd34d3c013eca04e894328dbb60baf349102d90d825a5427fb5e8c6: CDI devices from CRI Config.CDIDevices: []"
May 14 18:15:25.594765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount738928784.mount: Deactivated successfully.
May 14 18:15:25.596408 containerd[1523]: time="2025-05-14T18:15:25.596372207Z" level=info msg="RemoveContainer for \"36399c4564023e5d1af359218addbb992fbfe7258de0b262d5da1070706162ea\""
May 14 18:15:25.596641 systemd[1]: Removed slice kubepods-besteffort-poda2abc07a_c489_4f6d_83c3_a98e091c2fd2.slice - libcontainer container kubepods-besteffort-poda2abc07a_c489_4f6d_83c3_a98e091c2fd2.slice.
May 14 18:15:25.605071 containerd[1523]: time="2025-05-14T18:15:25.604885589Z" level=info msg="RemoveContainer for \"36399c4564023e5d1af359218addbb992fbfe7258de0b262d5da1070706162ea\" returns successfully"
May 14 18:15:25.605963 kubelet[2626]: I0514 18:15:25.605347 2626 scope.go:117] "RemoveContainer" containerID="4d99d1f3b47f6d055469d12026f0a67337cc4d7d9bfc9aaa60f8fcf67e620f84"
May 14 18:15:25.607901 containerd[1523]: time="2025-05-14T18:15:25.607861683Z" level=info msg="CreateContainer within sandbox \"7638ec9bae9bfec1c5ecb196149ea98d211e4cfcd3fd2b201ac3abc3efe203a9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8eb476916cd34d3c013eca04e894328dbb60baf349102d90d825a5427fb5e8c6\""
May 14 18:15:25.608814 containerd[1523]: time="2025-05-14T18:15:25.608724282Z" level=info msg="StartContainer for \"8eb476916cd34d3c013eca04e894328dbb60baf349102d90d825a5427fb5e8c6\""
May 14 18:15:25.609600 containerd[1523]: time="2025-05-14T18:15:25.609504597Z" level=info msg="RemoveContainer for \"4d99d1f3b47f6d055469d12026f0a67337cc4d7d9bfc9aaa60f8fcf67e620f84\""
May 14 18:15:25.612991 containerd[1523]: time="2025-05-14T18:15:25.612829306Z" level=info msg="connecting to shim 8eb476916cd34d3c013eca04e894328dbb60baf349102d90d825a5427fb5e8c6" address="unix:///run/containerd/s/04971a0ffdd203a95517c6548cbf9dc40b0cb8b284da9fd2e4e334d83362e562" protocol=ttrpc version=3
May 14 18:15:25.614110 containerd[1523]: time="2025-05-14T18:15:25.613377531Z" level=info msg="RemoveContainer for \"4d99d1f3b47f6d055469d12026f0a67337cc4d7d9bfc9aaa60f8fcf67e620f84\" returns successfully"
May 14 18:15:25.619063 kubelet[2626]: I0514 18:15:25.618169 2626 scope.go:117] "RemoveContainer" containerID="037739d7950b5b65568f92e7654fd50c795bfd4cd350b32b0377f45b5e9a3b2c"
May 14 18:15:25.622496 containerd[1523]: time="2025-05-14T18:15:25.622464059Z" level=info msg="RemoveContainer for \"037739d7950b5b65568f92e7654fd50c795bfd4cd350b32b0377f45b5e9a3b2c\""
May 14 18:15:25.625365 containerd[1523]: time="2025-05-14T18:15:25.625328508Z" level=info msg="RemoveContainer for \"037739d7950b5b65568f92e7654fd50c795bfd4cd350b32b0377f45b5e9a3b2c\" returns successfully"
May 14 18:15:25.644150 systemd[1]: Started cri-containerd-8eb476916cd34d3c013eca04e894328dbb60baf349102d90d825a5427fb5e8c6.scope - libcontainer container 8eb476916cd34d3c013eca04e894328dbb60baf349102d90d825a5427fb5e8c6.
May 14 18:15:25.730220 containerd[1523]: time="2025-05-14T18:15:25.730176498Z" level=info msg="StartContainer for \"8eb476916cd34d3c013eca04e894328dbb60baf349102d90d825a5427fb5e8c6\" returns successfully"
May 14 18:15:26.159665 systemd[1]: cri-containerd-8eb476916cd34d3c013eca04e894328dbb60baf349102d90d825a5427fb5e8c6.scope: Deactivated successfully.
May 14 18:15:26.159974 systemd[1]: cri-containerd-8eb476916cd34d3c013eca04e894328dbb60baf349102d90d825a5427fb5e8c6.scope: Consumed 584ms CPU time, 106.9M memory peak, 92.9M read from disk.
May 14 18:15:26.161990 containerd[1523]: time="2025-05-14T18:15:26.161957880Z" level=info msg="received exit event container_id:\"8eb476916cd34d3c013eca04e894328dbb60baf349102d90d825a5427fb5e8c6\" id:\"8eb476916cd34d3c013eca04e894328dbb60baf349102d90d825a5427fb5e8c6\" pid:6065 exited_at:{seconds:1747246526 nanos:161707069}"
May 14 18:15:26.162224 containerd[1523]: time="2025-05-14T18:15:26.162106567Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8eb476916cd34d3c013eca04e894328dbb60baf349102d90d825a5427fb5e8c6\" id:\"8eb476916cd34d3c013eca04e894328dbb60baf349102d90d825a5427fb5e8c6\" pid:6065 exited_at:{seconds:1747246526 nanos:161707069}"
May 14 18:15:26.181172 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8eb476916cd34d3c013eca04e894328dbb60baf349102d90d825a5427fb5e8c6-rootfs.mount: Deactivated successfully.
May 14 18:15:26.301662 kubelet[2626]: I0514 18:15:26.301598 2626 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2abc07a-c489-4f6d-83c3-a98e091c2fd2" path="/var/lib/kubelet/pods/a2abc07a-c489-4f6d-83c3-a98e091c2fd2/volumes"
May 14 18:15:26.302135 kubelet[2626]: I0514 18:15:26.302121 2626 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad204f77-3166-479d-93b7-db06e21e14fc" path="/var/lib/kubelet/pods/ad204f77-3166-479d-93b7-db06e21e14fc/volumes"
May 14 18:15:26.603044 containerd[1523]: time="2025-05-14T18:15:26.602593035Z" level=info msg="CreateContainer within sandbox \"7638ec9bae9bfec1c5ecb196149ea98d211e4cfcd3fd2b201ac3abc3efe203a9\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
May 14 18:15:26.613287 containerd[1523]: time="2025-05-14T18:15:26.613254942Z" level=info msg="Container 3ac2f6fb5e7e1bfa8fb7728b42735b8903b5947a758a6a3ccf4b21b23b4e618e: CDI devices from CRI Config.CDIDevices: []"
May 14 18:15:26.615186 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2190862243.mount: Deactivated successfully.
May 14 18:15:26.626941 containerd[1523]: time="2025-05-14T18:15:26.626890700Z" level=info msg="CreateContainer within sandbox \"7638ec9bae9bfec1c5ecb196149ea98d211e4cfcd3fd2b201ac3abc3efe203a9\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3ac2f6fb5e7e1bfa8fb7728b42735b8903b5947a758a6a3ccf4b21b23b4e618e\""
May 14 18:15:26.627459 containerd[1523]: time="2025-05-14T18:15:26.627374961Z" level=info msg="StartContainer for \"3ac2f6fb5e7e1bfa8fb7728b42735b8903b5947a758a6a3ccf4b21b23b4e618e\""
May 14 18:15:26.629050 containerd[1523]: time="2025-05-14T18:15:26.629024433Z" level=info msg="connecting to shim 3ac2f6fb5e7e1bfa8fb7728b42735b8903b5947a758a6a3ccf4b21b23b4e618e" address="unix:///run/containerd/s/04971a0ffdd203a95517c6548cbf9dc40b0cb8b284da9fd2e4e334d83362e562" protocol=ttrpc version=3
May 14 18:15:26.648087 systemd[1]: Started cri-containerd-3ac2f6fb5e7e1bfa8fb7728b42735b8903b5947a758a6a3ccf4b21b23b4e618e.scope - libcontainer container 3ac2f6fb5e7e1bfa8fb7728b42735b8903b5947a758a6a3ccf4b21b23b4e618e.
May 14 18:15:26.693068 containerd[1523]: time="2025-05-14T18:15:26.693025518Z" level=info msg="StartContainer for \"3ac2f6fb5e7e1bfa8fb7728b42735b8903b5947a758a6a3ccf4b21b23b4e618e\" returns successfully"
May 14 18:15:27.619615 kubelet[2626]: I0514 18:15:27.619558 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-kcptb" podStartSLOduration=3.619540198 podStartE2EDuration="3.619540198s" podCreationTimestamp="2025-05-14 18:15:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:15:27.618519354 +0000 UTC m=+63.394533374" watchObservedRunningTime="2025-05-14 18:15:27.619540198 +0000 UTC m=+63.395554258"
May 14 18:15:27.671060 containerd[1523]: time="2025-05-14T18:15:27.671022120Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3ac2f6fb5e7e1bfa8fb7728b42735b8903b5947a758a6a3ccf4b21b23b4e618e\" id:\"0ca9f90bd6127c5b5474534fb08a34a8d7786c1b2d9c3d544a848173e09d2a57\" pid:6164 exit_status:1 exited_at:{seconds:1747246527 nanos:670729747}"
May 14 18:15:28.216627 systemd[1]: Started sshd@15-10.0.0.119:22-10.0.0.1:51964.service - OpenSSH per-connection server daemon (10.0.0.1:51964).
May 14 18:15:28.274094 sshd[6277]: Accepted publickey for core from 10.0.0.1 port 51964 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo
May 14 18:15:28.276711 sshd-session[6277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:15:28.283012 systemd-logind[1500]: New session 16 of user core.
May 14 18:15:28.301168 systemd[1]: Started session-16.scope - Session 16 of User core.
May 14 18:15:28.453881 sshd[6279]: Connection closed by 10.0.0.1 port 51964
May 14 18:15:28.454227 sshd-session[6277]: pam_unix(sshd:session): session closed for user core
May 14 18:15:28.468201 systemd[1]: sshd@15-10.0.0.119:22-10.0.0.1:51964.service: Deactivated successfully.
May 14 18:15:28.469727 systemd[1]: session-16.scope: Deactivated successfully.
May 14 18:15:28.470368 systemd-logind[1500]: Session 16 logged out. Waiting for processes to exit.
May 14 18:15:28.473800 systemd[1]: Started sshd@16-10.0.0.119:22-10.0.0.1:51980.service - OpenSSH per-connection server daemon (10.0.0.1:51980).
May 14 18:15:28.474616 systemd-logind[1500]: Removed session 16.
May 14 18:15:28.521661 sshd[6293]: Accepted publickey for core from 10.0.0.1 port 51980 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo
May 14 18:15:28.522842 sshd-session[6293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:15:28.528086 systemd-logind[1500]: New session 17 of user core.
May 14 18:15:28.551149 systemd[1]: Started session-17.scope - Session 17 of User core.
May 14 18:15:28.670050 containerd[1523]: time="2025-05-14T18:15:28.669889161Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3ac2f6fb5e7e1bfa8fb7728b42735b8903b5947a758a6a3ccf4b21b23b4e618e\" id:\"76384604fc0ce2cdb198851bfcb89324a2d8c73766d9e66c37043397a1620b8a\" pid:6315 exit_status:1 exited_at:{seconds:1747246528 nanos:669578548}"
May 14 18:15:28.707356 systemd[1]: cri-containerd-7062d3b70d37af2db1454d4e00f1c940f7214bf9e036c47662414c824bd809e2.scope: Deactivated successfully.
May 14 18:15:28.708267 systemd[1]: cri-containerd-7062d3b70d37af2db1454d4e00f1c940f7214bf9e036c47662414c824bd809e2.scope: Consumed 355ms CPU time, 25.5M memory peak, 4.7M read from disk.
May 14 18:15:28.709840 containerd[1523]: time="2025-05-14T18:15:28.709805267Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7062d3b70d37af2db1454d4e00f1c940f7214bf9e036c47662414c824bd809e2\" id:\"7062d3b70d37af2db1454d4e00f1c940f7214bf9e036c47662414c824bd809e2\" pid:3263 exit_status:1 exited_at:{seconds:1747246528 nanos:709145320}"
May 14 18:15:28.710879 containerd[1523]: time="2025-05-14T18:15:28.709821028Z" level=info msg="received exit event container_id:\"7062d3b70d37af2db1454d4e00f1c940f7214bf9e036c47662414c824bd809e2\" id:\"7062d3b70d37af2db1454d4e00f1c940f7214bf9e036c47662414c824bd809e2\" pid:3263 exit_status:1 exited_at:{seconds:1747246528 nanos:709145320}"
May 14 18:15:28.730067 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7062d3b70d37af2db1454d4e00f1c940f7214bf9e036c47662414c824bd809e2-rootfs.mount: Deactivated successfully.
May 14 18:15:28.741305 containerd[1523]: time="2025-05-14T18:15:28.741268061Z" level=info msg="StopContainer for \"7062d3b70d37af2db1454d4e00f1c940f7214bf9e036c47662414c824bd809e2\" returns successfully"
May 14 18:15:28.741842 containerd[1523]: time="2025-05-14T18:15:28.741811324Z" level=info msg="StopPodSandbox for \"0d2e7b628be8302de7716f55b3283e5b069ce7b207528630b7dc37f8c3cbdfe1\""
May 14 18:15:28.742036 containerd[1523]: time="2025-05-14T18:15:28.742018772Z" level=info msg="Container to stop \"7062d3b70d37af2db1454d4e00f1c940f7214bf9e036c47662414c824bd809e2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 14 18:15:28.748302 systemd[1]: cri-containerd-0d2e7b628be8302de7716f55b3283e5b069ce7b207528630b7dc37f8c3cbdfe1.scope: Deactivated successfully.
May 14 18:15:28.752687 containerd[1523]: time="2025-05-14T18:15:28.752537771Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0d2e7b628be8302de7716f55b3283e5b069ce7b207528630b7dc37f8c3cbdfe1\" id:\"0d2e7b628be8302de7716f55b3283e5b069ce7b207528630b7dc37f8c3cbdfe1\" pid:3150 exit_status:137 exited_at:{seconds:1747246528 nanos:752024910}"
May 14 18:15:28.776648 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d2e7b628be8302de7716f55b3283e5b069ce7b207528630b7dc37f8c3cbdfe1-rootfs.mount: Deactivated successfully.
May 14 18:15:28.777417 containerd[1523]: time="2025-05-14T18:15:28.776938710Z" level=info msg="received exit event sandbox_id:\"0d2e7b628be8302de7716f55b3283e5b069ce7b207528630b7dc37f8c3cbdfe1\" exit_status:137 exited_at:{seconds:1747246528 nanos:752024910}"
May 14 18:15:28.777417 containerd[1523]: time="2025-05-14T18:15:28.777040114Z" level=info msg="shim disconnected" id=0d2e7b628be8302de7716f55b3283e5b069ce7b207528630b7dc37f8c3cbdfe1 namespace=k8s.io
May 14 18:15:28.777417 containerd[1523]: time="2025-05-14T18:15:28.777311686Z" level=warning msg="cleaning up after shim disconnected" id=0d2e7b628be8302de7716f55b3283e5b069ce7b207528630b7dc37f8c3cbdfe1 namespace=k8s.io
May 14 18:15:28.777417 containerd[1523]: time="2025-05-14T18:15:28.777338007Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 18:15:28.779429 containerd[1523]: time="2025-05-14T18:15:28.779357411Z" level=info msg="TearDown network for sandbox \"0d2e7b628be8302de7716f55b3283e5b069ce7b207528630b7dc37f8c3cbdfe1\" successfully"
May 14 18:15:28.779429 containerd[1523]: time="2025-05-14T18:15:28.779386772Z" level=info msg="StopPodSandbox for \"0d2e7b628be8302de7716f55b3283e5b069ce7b207528630b7dc37f8c3cbdfe1\" returns successfully"
May 14 18:15:28.781737 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0d2e7b628be8302de7716f55b3283e5b069ce7b207528630b7dc37f8c3cbdfe1-shm.mount: Deactivated successfully.
May 14 18:15:28.796987 sshd[6295]: Connection closed by 10.0.0.1 port 51980
May 14 18:15:28.798086 sshd-session[6293]: pam_unix(sshd:session): session closed for user core
May 14 18:15:28.813209 systemd[1]: sshd@16-10.0.0.119:22-10.0.0.1:51980.service: Deactivated successfully.
May 14 18:15:28.818121 systemd[1]: session-17.scope: Deactivated successfully.
May 14 18:15:28.820170 systemd-logind[1500]: Session 17 logged out. Waiting for processes to exit.
May 14 18:15:28.826275 systemd[1]: Started sshd@17-10.0.0.119:22-10.0.0.1:51992.service - OpenSSH per-connection server daemon (10.0.0.1:51992).
May 14 18:15:28.826804 systemd-logind[1500]: Removed session 17.
May 14 18:15:28.874041 sshd[6386]: Accepted publickey for core from 10.0.0.1 port 51992 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo
May 14 18:15:28.875192 sshd-session[6386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:15:28.880138 systemd-logind[1500]: New session 18 of user core.
May 14 18:15:28.894177 systemd[1]: Started session-18.scope - Session 18 of User core.
May 14 18:15:28.948225 kubelet[2626]: I0514 18:15:28.948191 2626 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8d2540df-28ab-452a-a202-de0a851cad5a-typha-certs\") pod \"8d2540df-28ab-452a-a202-de0a851cad5a\" (UID: \"8d2540df-28ab-452a-a202-de0a851cad5a\") "
May 14 18:15:28.948225 kubelet[2626]: I0514 18:15:28.948232 2626 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prn4h\" (UniqueName: \"kubernetes.io/projected/8d2540df-28ab-452a-a202-de0a851cad5a-kube-api-access-prn4h\") pod \"8d2540df-28ab-452a-a202-de0a851cad5a\" (UID: \"8d2540df-28ab-452a-a202-de0a851cad5a\") "
May 14 18:15:28.948569 kubelet[2626]: I0514 18:15:28.948258 2626 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d2540df-28ab-452a-a202-de0a851cad5a-tigera-ca-bundle\") pod \"8d2540df-28ab-452a-a202-de0a851cad5a\" (UID: \"8d2540df-28ab-452a-a202-de0a851cad5a\") "
May 14 18:15:28.951784 systemd[1]: var-lib-kubelet-pods-8d2540df\x2d28ab\x2d452a\x2da202\x2dde0a851cad5a-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully.
May 14 18:15:28.953614 kubelet[2626]: I0514 18:15:28.953564 2626 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d2540df-28ab-452a-a202-de0a851cad5a-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "8d2540df-28ab-452a-a202-de0a851cad5a" (UID: "8d2540df-28ab-452a-a202-de0a851cad5a"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 14 18:15:28.954301 kubelet[2626]: I0514 18:15:28.954262 2626 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d2540df-28ab-452a-a202-de0a851cad5a-kube-api-access-prn4h" (OuterVolumeSpecName: "kube-api-access-prn4h") pod "8d2540df-28ab-452a-a202-de0a851cad5a" (UID: "8d2540df-28ab-452a-a202-de0a851cad5a"). InnerVolumeSpecName "kube-api-access-prn4h". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 14 18:15:28.954351 kubelet[2626]: I0514 18:15:28.954324 2626 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d2540df-28ab-452a-a202-de0a851cad5a-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "8d2540df-28ab-452a-a202-de0a851cad5a" (UID: "8d2540df-28ab-452a-a202-de0a851cad5a"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 14 18:15:28.954649 systemd[1]: var-lib-kubelet-pods-8d2540df\x2d28ab\x2d452a\x2da202\x2dde0a851cad5a-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully.
May 14 18:15:28.954752 systemd[1]: var-lib-kubelet-pods-8d2540df\x2d28ab\x2d452a\x2da202\x2dde0a851cad5a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dprn4h.mount: Deactivated successfully.
May 14 18:15:29.049037 kubelet[2626]: I0514 18:15:29.048917 2626 reconciler_common.go:288] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8d2540df-28ab-452a-a202-de0a851cad5a-typha-certs\") on node \"localhost\" DevicePath \"\""
May 14 18:15:29.049037 kubelet[2626]: I0514 18:15:29.048970 2626 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-prn4h\" (UniqueName: \"kubernetes.io/projected/8d2540df-28ab-452a-a202-de0a851cad5a-kube-api-access-prn4h\") on node \"localhost\" DevicePath \"\""
May 14 18:15:29.049037 kubelet[2626]: I0514 18:15:29.048982 2626 reconciler_common.go:288] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d2540df-28ab-452a-a202-de0a851cad5a-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\""
May 14 18:15:29.609482 kubelet[2626]: I0514 18:15:29.609109 2626 scope.go:117] "RemoveContainer" containerID="7062d3b70d37af2db1454d4e00f1c940f7214bf9e036c47662414c824bd809e2"
May 14 18:15:29.618627 containerd[1523]: time="2025-05-14T18:15:29.618519433Z" level=info msg="RemoveContainer for \"7062d3b70d37af2db1454d4e00f1c940f7214bf9e036c47662414c824bd809e2\""
May 14 18:15:29.619142 systemd[1]: Removed slice kubepods-besteffort-pod8d2540df_28ab_452a_a202_de0a851cad5a.slice - libcontainer container kubepods-besteffort-pod8d2540df_28ab_452a_a202_de0a851cad5a.slice.
May 14 18:15:29.619757 systemd[1]: kubepods-besteffort-pod8d2540df_28ab_452a_a202_de0a851cad5a.slice: Consumed 372ms CPU time, 25.7M memory peak, 4.7M read from disk.
May 14 18:15:29.623731 containerd[1523]: time="2025-05-14T18:15:29.623694004Z" level=info msg="RemoveContainer for \"7062d3b70d37af2db1454d4e00f1c940f7214bf9e036c47662414c824bd809e2\" returns successfully"
May 14 18:15:29.623910 kubelet[2626]: I0514 18:15:29.623883 2626 scope.go:117] "RemoveContainer" containerID="7062d3b70d37af2db1454d4e00f1c940f7214bf9e036c47662414c824bd809e2"
May 14 18:15:29.624352 containerd[1523]: time="2025-05-14T18:15:29.624317189Z" level=error msg="ContainerStatus for \"7062d3b70d37af2db1454d4e00f1c940f7214bf9e036c47662414c824bd809e2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7062d3b70d37af2db1454d4e00f1c940f7214bf9e036c47662414c824bd809e2\": not found"
May 14 18:15:29.624467 kubelet[2626]: E0514 18:15:29.624449 2626 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7062d3b70d37af2db1454d4e00f1c940f7214bf9e036c47662414c824bd809e2\": not found" containerID="7062d3b70d37af2db1454d4e00f1c940f7214bf9e036c47662414c824bd809e2"
May 14 18:15:29.624502 kubelet[2626]: I0514 18:15:29.624475 2626 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7062d3b70d37af2db1454d4e00f1c940f7214bf9e036c47662414c824bd809e2"} err="failed to get container status \"7062d3b70d37af2db1454d4e00f1c940f7214bf9e036c47662414c824bd809e2\": rpc error: code = NotFound desc = an error occurred when try to find container \"7062d3b70d37af2db1454d4e00f1c940f7214bf9e036c47662414c824bd809e2\": not found"
May 14 18:15:30.301646 kubelet[2626]: I0514 18:15:30.301564 2626 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d2540df-28ab-452a-a202-de0a851cad5a" path="/var/lib/kubelet/pods/8d2540df-28ab-452a-a202-de0a851cad5a/volumes"
May 14 18:15:30.593547 sshd[6388]: Connection closed by 10.0.0.1 port 51992
May 14 18:15:30.596481 sshd-session[6386]: pam_unix(sshd:session): session closed for user core
May 14 18:15:30.604245 systemd[1]: sshd@17-10.0.0.119:22-10.0.0.1:51992.service: Deactivated successfully.
May 14 18:15:30.607935 systemd[1]: session-18.scope: Deactivated successfully.
May 14 18:15:30.609107 systemd[1]: session-18.scope: Consumed 516ms CPU time, 67.2M memory peak.
May 14 18:15:30.611129 systemd-logind[1500]: Session 18 logged out. Waiting for processes to exit.
May 14 18:15:30.612164 systemd[1]: Started sshd@18-10.0.0.119:22-10.0.0.1:52002.service - OpenSSH per-connection server daemon (10.0.0.1:52002).
May 14 18:15:30.616138 systemd-logind[1500]: Removed session 18.
May 14 18:15:30.666625 sshd[6461]: Accepted publickey for core from 10.0.0.1 port 52002 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo
May 14 18:15:30.667840 sshd-session[6461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:15:30.671661 systemd-logind[1500]: New session 19 of user core.
May 14 18:15:30.681169 systemd[1]: Started session-19.scope - Session 19 of User core.
May 14 18:15:30.983302 sshd[6463]: Connection closed by 10.0.0.1 port 52002
May 14 18:15:30.984441 sshd-session[6461]: pam_unix(sshd:session): session closed for user core
May 14 18:15:30.993237 systemd[1]: sshd@18-10.0.0.119:22-10.0.0.1:52002.service: Deactivated successfully.
May 14 18:15:30.994827 systemd[1]: session-19.scope: Deactivated successfully.
May 14 18:15:30.995970 systemd-logind[1500]: Session 19 logged out. Waiting for processes to exit.
May 14 18:15:30.999309 systemd[1]: Started sshd@19-10.0.0.119:22-10.0.0.1:52008.service - OpenSSH per-connection server daemon (10.0.0.1:52008).
May 14 18:15:31.001615 systemd-logind[1500]: Removed session 19.
May 14 18:15:31.051125 sshd[6477]: Accepted publickey for core from 10.0.0.1 port 52008 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo
May 14 18:15:31.052409 sshd-session[6477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:15:31.056373 systemd-logind[1500]: New session 20 of user core.
May 14 18:15:31.067176 systemd[1]: Started session-20.scope - Session 20 of User core.
May 14 18:15:31.207760 sshd[6479]: Connection closed by 10.0.0.1 port 52008
May 14 18:15:31.208115 sshd-session[6477]: pam_unix(sshd:session): session closed for user core
May 14 18:15:31.211788 systemd-logind[1500]: Session 20 logged out. Waiting for processes to exit.
May 14 18:15:31.212014 systemd[1]: sshd@19-10.0.0.119:22-10.0.0.1:52008.service: Deactivated successfully.
May 14 18:15:31.213704 systemd[1]: session-20.scope: Deactivated successfully.
May 14 18:15:31.216660 systemd-logind[1500]: Removed session 20.
May 14 18:15:36.227566 systemd[1]: Started sshd@20-10.0.0.119:22-10.0.0.1:41250.service - OpenSSH per-connection server daemon (10.0.0.1:41250).
May 14 18:15:36.285871 sshd[6707]: Accepted publickey for core from 10.0.0.1 port 41250 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo
May 14 18:15:36.287243 sshd-session[6707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:15:36.293224 systemd-logind[1500]: New session 21 of user core.
May 14 18:15:36.303168 systemd[1]: Started session-21.scope - Session 21 of User core.
May 14 18:15:36.448620 sshd[6709]: Connection closed by 10.0.0.1 port 41250
May 14 18:15:36.449197 sshd-session[6707]: pam_unix(sshd:session): session closed for user core
May 14 18:15:36.452760 systemd-logind[1500]: Session 21 logged out. Waiting for processes to exit.
May 14 18:15:36.452912 systemd[1]: sshd@20-10.0.0.119:22-10.0.0.1:41250.service: Deactivated successfully.
May 14 18:15:36.455377 systemd[1]: session-21.scope: Deactivated successfully.
May 14 18:15:36.456701 systemd-logind[1500]: Removed session 21.
May 14 18:15:41.462321 systemd[1]: Started sshd@21-10.0.0.119:22-10.0.0.1:41266.service - OpenSSH per-connection server daemon (10.0.0.1:41266).
May 14 18:15:41.532157 sshd[6731]: Accepted publickey for core from 10.0.0.1 port 41266 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo
May 14 18:15:41.532756 sshd-session[6731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:15:41.538789 systemd-logind[1500]: New session 22 of user core.
May 14 18:15:41.550154 systemd[1]: Started session-22.scope - Session 22 of User core.
May 14 18:15:41.716129 sshd[6733]: Connection closed by 10.0.0.1 port 41266
May 14 18:15:41.713197 sshd-session[6731]: pam_unix(sshd:session): session closed for user core
May 14 18:15:41.721906 systemd[1]: sshd@21-10.0.0.119:22-10.0.0.1:41266.service: Deactivated successfully.
May 14 18:15:41.723822 systemd[1]: session-22.scope: Deactivated successfully.
May 14 18:15:41.728028 systemd-logind[1500]: Session 22 logged out. Waiting for processes to exit.
May 14 18:15:41.730781 systemd-logind[1500]: Removed session 22.
May 14 18:15:46.736639 systemd[1]: Started sshd@22-10.0.0.119:22-10.0.0.1:50500.service - OpenSSH per-connection server daemon (10.0.0.1:50500).
May 14 18:15:46.822392 sshd[6754]: Accepted publickey for core from 10.0.0.1 port 50500 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo
May 14 18:15:46.823295 sshd-session[6754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:15:46.831191 systemd-logind[1500]: New session 23 of user core.
May 14 18:15:46.841173 systemd[1]: Started session-23.scope - Session 23 of User core.
May 14 18:15:46.993804 sshd[6756]: Connection closed by 10.0.0.1 port 50500
May 14 18:15:46.993829 sshd-session[6754]: pam_unix(sshd:session): session closed for user core
May 14 18:15:46.997822 systemd[1]: sshd@22-10.0.0.119:22-10.0.0.1:50500.service: Deactivated successfully.
May 14 18:15:46.999983 systemd[1]: session-23.scope: Deactivated successfully.
May 14 18:15:47.001526 systemd-logind[1500]: Session 23 logged out. Waiting for processes to exit.
May 14 18:15:47.003144 systemd-logind[1500]: Removed session 23.
May 14 18:15:52.006189 systemd[1]: Started sshd@23-10.0.0.119:22-10.0.0.1:50514.service - OpenSSH per-connection server daemon (10.0.0.1:50514).
May 14 18:15:52.045877 sshd[6769]: Accepted publickey for core from 10.0.0.1 port 50514 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo
May 14 18:15:52.047581 sshd-session[6769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:15:52.052530 systemd-logind[1500]: New session 24 of user core.
May 14 18:15:52.070466 systemd[1]: Started session-24.scope - Session 24 of User core.
May 14 18:15:52.201260 sshd[6771]: Connection closed by 10.0.0.1 port 50514
May 14 18:15:52.201560 sshd-session[6769]: pam_unix(sshd:session): session closed for user core
May 14 18:15:52.205102 systemd[1]: sshd@23-10.0.0.119:22-10.0.0.1:50514.service: Deactivated successfully.
May 14 18:15:52.208574 systemd[1]: session-24.scope: Deactivated successfully.
May 14 18:15:52.209897 systemd-logind[1500]: Session 24 logged out. Waiting for processes to exit.
May 14 18:15:52.211877 systemd-logind[1500]: Removed session 24.
May 14 18:15:55.109205 containerd[1523]: time="2025-05-14T18:15:55.109157035Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3ac2f6fb5e7e1bfa8fb7728b42735b8903b5947a758a6a3ccf4b21b23b4e618e\" id:\"24ec0b4d55fe92429f2e3fd368a6a4758e85eba23e01575f6bb17b34c0b32fe2\" pid:6805 exited_at:{seconds:1747246555 nanos:108759146}"
May 14 18:15:57.218448 systemd[1]: Started sshd@24-10.0.0.119:22-10.0.0.1:56180.service - OpenSSH per-connection server daemon (10.0.0.1:56180).
May 14 18:15:57.275426 sshd[6818]: Accepted publickey for core from 10.0.0.1 port 56180 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo
May 14 18:15:57.277117 sshd-session[6818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:15:57.282530 systemd-logind[1500]: New session 25 of user core.
May 14 18:15:57.297146 systemd[1]: Started session-25.scope - Session 25 of User core.
May 14 18:15:57.410707 sshd[6820]: Connection closed by 10.0.0.1 port 56180
May 14 18:15:57.411012 sshd-session[6818]: pam_unix(sshd:session): session closed for user core
May 14 18:15:57.414390 systemd[1]: sshd@24-10.0.0.119:22-10.0.0.1:56180.service: Deactivated successfully.
May 14 18:15:57.416283 systemd[1]: session-25.scope: Deactivated successfully.
May 14 18:15:57.417381 systemd-logind[1500]: Session 25 logged out. Waiting for processes to exit.
May 14 18:15:57.418654 systemd-logind[1500]: Removed session 25.
May 14 18:16:02.424879 systemd[1]: Started sshd@25-10.0.0.119:22-10.0.0.1:56182.service - OpenSSH per-connection server daemon (10.0.0.1:56182).
May 14 18:16:02.485081 sshd[6836]: Accepted publickey for core from 10.0.0.1 port 56182 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo
May 14 18:16:02.486369 sshd-session[6836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:16:02.490976 systemd-logind[1500]: New session 26 of user core.
May 14 18:16:02.497155 systemd[1]: Started session-26.scope - Session 26 of User core.
May 14 18:16:02.623332 sshd[6838]: Connection closed by 10.0.0.1 port 56182
May 14 18:16:02.623839 sshd-session[6836]: pam_unix(sshd:session): session closed for user core
May 14 18:16:02.627159 systemd[1]: sshd@25-10.0.0.119:22-10.0.0.1:56182.service: Deactivated successfully.
May 14 18:16:02.629554 systemd[1]: session-26.scope: Deactivated successfully.
May 14 18:16:02.633491 systemd-logind[1500]: Session 26 logged out. Waiting for processes to exit.
May 14 18:16:02.634383 systemd-logind[1500]: Removed session 26.
May 14 18:16:07.636712 systemd[1]: Started sshd@26-10.0.0.119:22-10.0.0.1:48086.service - OpenSSH per-connection server daemon (10.0.0.1:48086).
May 14 18:16:07.690021 sshd[6851]: Accepted publickey for core from 10.0.0.1 port 48086 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo
May 14 18:16:07.691285 sshd-session[6851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:16:07.695142 systemd-logind[1500]: New session 27 of user core.
May 14 18:16:07.707110 systemd[1]: Started session-27.scope - Session 27 of User core.
May 14 18:16:07.821798 sshd[6853]: Connection closed by 10.0.0.1 port 48086
May 14 18:16:07.822137 sshd-session[6851]: pam_unix(sshd:session): session closed for user core
May 14 18:16:07.825581 systemd[1]: sshd@26-10.0.0.119:22-10.0.0.1:48086.service: Deactivated successfully.
May 14 18:16:07.828331 systemd[1]: session-27.scope: Deactivated successfully.
May 14 18:16:07.829138 systemd-logind[1500]: Session 27 logged out. Waiting for processes to exit.
May 14 18:16:07.830516 systemd-logind[1500]: Removed session 27.
May 14 18:16:12.839221 systemd[1]: Started sshd@27-10.0.0.119:22-10.0.0.1:39486.service - OpenSSH per-connection server daemon (10.0.0.1:39486).
May 14 18:16:12.887342 sshd[6877]: Accepted publickey for core from 10.0.0.1 port 39486 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo
May 14 18:16:12.888590 sshd-session[6877]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:16:12.892840 systemd-logind[1500]: New session 28 of user core.
May 14 18:16:12.905134 systemd[1]: Started session-28.scope - Session 28 of User core.
May 14 18:16:13.013890 sshd[6880]: Connection closed by 10.0.0.1 port 39486
May 14 18:16:13.014416 sshd-session[6877]: pam_unix(sshd:session): session closed for user core
May 14 18:16:13.017438 systemd[1]: sshd@27-10.0.0.119:22-10.0.0.1:39486.service: Deactivated successfully.
May 14 18:16:13.019160 systemd[1]: session-28.scope: Deactivated successfully.
May 14 18:16:13.020436 systemd-logind[1500]: Session 28 logged out. Waiting for processes to exit.
May 14 18:16:13.021850 systemd-logind[1500]: Removed session 28.
May 14 18:16:18.038297 systemd[1]: Started sshd@28-10.0.0.119:22-10.0.0.1:39492.service - OpenSSH per-connection server daemon (10.0.0.1:39492).
May 14 18:16:18.091110 sshd[6902]: Accepted publickey for core from 10.0.0.1 port 39492 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo
May 14 18:16:18.092403 sshd-session[6902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:16:18.097205 systemd-logind[1500]: New session 29 of user core.
May 14 18:16:18.108112 systemd[1]: Started session-29.scope - Session 29 of User core.
May 14 18:16:18.249780 sshd[6906]: Connection closed by 10.0.0.1 port 39492
May 14 18:16:18.250349 sshd-session[6902]: pam_unix(sshd:session): session closed for user core
May 14 18:16:18.253809 systemd[1]: sshd@28-10.0.0.119:22-10.0.0.1:39492.service: Deactivated successfully.
May 14 18:16:18.255665 systemd[1]: session-29.scope: Deactivated successfully.
May 14 18:16:18.257259 systemd-logind[1500]: Session 29 logged out. Waiting for processes to exit.
May 14 18:16:18.258889 systemd-logind[1500]: Removed session 29.
May 14 18:16:23.269830 systemd[1]: Started sshd@29-10.0.0.119:22-10.0.0.1:56098.service - OpenSSH per-connection server daemon (10.0.0.1:56098).
May 14 18:16:23.318030 sshd[6919]: Accepted publickey for core from 10.0.0.1 port 56098 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo
May 14 18:16:23.319197 sshd-session[6919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:16:23.323096 systemd-logind[1500]: New session 30 of user core.
May 14 18:16:23.330121 systemd[1]: Started session-30.scope - Session 30 of User core.
May 14 18:16:23.446062 sshd[6921]: Connection closed by 10.0.0.1 port 56098
May 14 18:16:23.446436 sshd-session[6919]: pam_unix(sshd:session): session closed for user core
May 14 18:16:23.450076 systemd[1]: sshd@29-10.0.0.119:22-10.0.0.1:56098.service: Deactivated successfully.
May 14 18:16:23.451842 systemd[1]: session-30.scope: Deactivated successfully.
May 14 18:16:23.454520 systemd-logind[1500]: Session 30 logged out. Waiting for processes to exit.
May 14 18:16:23.455534 systemd-logind[1500]: Removed session 30.
May 14 18:16:24.925863 containerd[1523]: time="2025-05-14T18:16:24.925824406Z" level=info msg="StopPodSandbox for \"0d2e7b628be8302de7716f55b3283e5b069ce7b207528630b7dc37f8c3cbdfe1\""
May 14 18:16:24.926263 containerd[1523]: time="2025-05-14T18:16:24.925974488Z" level=info msg="TearDown network for sandbox \"0d2e7b628be8302de7716f55b3283e5b069ce7b207528630b7dc37f8c3cbdfe1\" successfully"
May 14 18:16:24.926263 containerd[1523]: time="2025-05-14T18:16:24.925990009Z" level=info msg="StopPodSandbox for \"0d2e7b628be8302de7716f55b3283e5b069ce7b207528630b7dc37f8c3cbdfe1\" returns successfully"
May 14 18:16:24.926263 containerd[1523]: time="2025-05-14T18:16:24.926201492Z" level=info msg="RemovePodSandbox for \"0d2e7b628be8302de7716f55b3283e5b069ce7b207528630b7dc37f8c3cbdfe1\""
May 14 18:16:24.926263 containerd[1523]: time="2025-05-14T18:16:24.926229412Z" level=info msg="Forcibly stopping sandbox \"0d2e7b628be8302de7716f55b3283e5b069ce7b207528630b7dc37f8c3cbdfe1\""
May 14 18:16:24.926346 containerd[1523]: time="2025-05-14T18:16:24.926300893Z" level=info msg="TearDown network for sandbox \"0d2e7b628be8302de7716f55b3283e5b069ce7b207528630b7dc37f8c3cbdfe1\" successfully"
May 14 18:16:24.927467 containerd[1523]: time="2025-05-14T18:16:24.927443871Z" level=info msg="Ensure that sandbox 0d2e7b628be8302de7716f55b3283e5b069ce7b207528630b7dc37f8c3cbdfe1 in task-service has been cleanup successfully"
May 14 18:16:24.932072 containerd[1523]: time="2025-05-14T18:16:24.932037661Z" level=info msg="RemovePodSandbox \"0d2e7b628be8302de7716f55b3283e5b069ce7b207528630b7dc37f8c3cbdfe1\" returns successfully"
May 14 18:16:24.932384 containerd[1523]: time="2025-05-14T18:16:24.932344346Z" level=info msg="StopPodSandbox for \"ca3731175868405a0085cfb59b27e0ec687d49cd02c6b1ece8929600ed62a876\""
May 14 18:16:24.932459 containerd[1523]: time="2025-05-14T18:16:24.932431227Z" level=info msg="TearDown network for sandbox \"ca3731175868405a0085cfb59b27e0ec687d49cd02c6b1ece8929600ed62a876\" successfully"
May 14 18:16:24.932459 containerd[1523]: time="2025-05-14T18:16:24.932443108Z" level=info msg="StopPodSandbox for \"ca3731175868405a0085cfb59b27e0ec687d49cd02c6b1ece8929600ed62a876\" returns successfully"
May 14 18:16:24.932740 containerd[1523]: time="2025-05-14T18:16:24.932681831Z" level=info msg="RemovePodSandbox for \"ca3731175868405a0085cfb59b27e0ec687d49cd02c6b1ece8929600ed62a876\""
May 14 18:16:24.932776 containerd[1523]: time="2025-05-14T18:16:24.932742112Z" level=info msg="Forcibly stopping sandbox \"ca3731175868405a0085cfb59b27e0ec687d49cd02c6b1ece8929600ed62a876\""
May 14 18:16:24.932875 containerd[1523]: time="2025-05-14T18:16:24.932812033Z" level=info msg="TearDown network for sandbox \"ca3731175868405a0085cfb59b27e0ec687d49cd02c6b1ece8929600ed62a876\" successfully"
May 14 18:16:24.933930 containerd[1523]: time="2025-05-14T18:16:24.933887130Z" level=info msg="Ensure that sandbox ca3731175868405a0085cfb59b27e0ec687d49cd02c6b1ece8929600ed62a876 in task-service has been cleanup successfully"
May 14 18:16:24.940610 containerd[1523]: time="2025-05-14T18:16:24.940575632Z" level=info msg="RemovePodSandbox \"ca3731175868405a0085cfb59b27e0ec687d49cd02c6b1ece8929600ed62a876\" returns successfully"
May 14 18:16:24.940925 containerd[1523]: time="2025-05-14T18:16:24.940898477Z" level=info msg="StopPodSandbox for \"38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801\""
May 14 18:16:25.027279 containerd[1523]: 2025-05-14 18:16:24.987 [WARNING][6951] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85fb4f869d--ch7ss-eth0"
May 14 18:16:25.027279 containerd[1523]: 2025-05-14 18:16:24.987 [INFO][6951] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801"
May 14 18:16:25.027279 containerd[1523]: 2025-05-14 18:16:24.987 [INFO][6951] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801" iface="eth0" netns=""
May 14 18:16:25.027279 containerd[1523]: 2025-05-14 18:16:24.987 [INFO][6951] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801"
May 14 18:16:25.027279 containerd[1523]: 2025-05-14 18:16:24.987 [INFO][6951] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801"
May 14 18:16:25.027279 containerd[1523]: 2025-05-14 18:16:25.012 [INFO][6959] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801" HandleID="k8s-pod-network.38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801" Workload="localhost-k8s-calico--kube--controllers--85fb4f869d--ch7ss-eth0"
May 14 18:16:25.027279 containerd[1523]: 2025-05-14 18:16:25.012 [INFO][6959] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 14 18:16:25.027279 containerd[1523]: 2025-05-14 18:16:25.012 [INFO][6959] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 14 18:16:25.027279 containerd[1523]: 2025-05-14 18:16:25.020 [WARNING][6959] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801" HandleID="k8s-pod-network.38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801" Workload="localhost-k8s-calico--kube--controllers--85fb4f869d--ch7ss-eth0"
May 14 18:16:25.027279 containerd[1523]: 2025-05-14 18:16:25.020 [INFO][6959] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801" HandleID="k8s-pod-network.38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801" Workload="localhost-k8s-calico--kube--controllers--85fb4f869d--ch7ss-eth0"
May 14 18:16:25.027279 containerd[1523]: 2025-05-14 18:16:25.023 [INFO][6959] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 14 18:16:25.027279 containerd[1523]: 2025-05-14 18:16:25.025 [INFO][6951] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801"
May 14 18:16:25.027722 containerd[1523]: time="2025-05-14T18:16:25.027323759Z" level=info msg="TearDown network for sandbox \"38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801\" successfully"
May 14 18:16:25.027722 containerd[1523]: time="2025-05-14T18:16:25.027350279Z" level=info msg="StopPodSandbox for \"38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801\" returns successfully"
May 14 18:16:25.028115 containerd[1523]: time="2025-05-14T18:16:25.028086691Z" level=info msg="RemovePodSandbox for \"38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801\""
May 14 18:16:25.028175 containerd[1523]: time="2025-05-14T18:16:25.028125091Z" level=info msg="Forcibly stopping sandbox \"38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801\""
May 14 18:16:25.118696 containerd[1523]: 2025-05-14 18:16:25.063 [WARNING][6981] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85fb4f869d--ch7ss-eth0"
May 14 18:16:25.118696 containerd[1523]: 2025-05-14 18:16:25.063 [INFO][6981] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801"
May 14 18:16:25.118696 containerd[1523]: 2025-05-14 18:16:25.063 [INFO][6981] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801" iface="eth0" netns=""
May 14 18:16:25.118696 containerd[1523]: 2025-05-14 18:16:25.063 [INFO][6981] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801"
May 14 18:16:25.118696 containerd[1523]: 2025-05-14 18:16:25.063 [INFO][6981] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801"
May 14 18:16:25.118696 containerd[1523]: 2025-05-14 18:16:25.100 [INFO][6996] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801" HandleID="k8s-pod-network.38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801" Workload="localhost-k8s-calico--kube--controllers--85fb4f869d--ch7ss-eth0"
May 14 18:16:25.118696 containerd[1523]: 2025-05-14 18:16:25.100 [INFO][6996] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 14 18:16:25.118696 containerd[1523]: 2025-05-14 18:16:25.100 [INFO][6996] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 14 18:16:25.118696 containerd[1523]: 2025-05-14 18:16:25.110 [WARNING][6996] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801" HandleID="k8s-pod-network.38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801" Workload="localhost-k8s-calico--kube--controllers--85fb4f869d--ch7ss-eth0"
May 14 18:16:25.118696 containerd[1523]: 2025-05-14 18:16:25.110 [INFO][6996] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801" HandleID="k8s-pod-network.38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801" Workload="localhost-k8s-calico--kube--controllers--85fb4f869d--ch7ss-eth0"
May 14 18:16:25.118696 containerd[1523]: 2025-05-14 18:16:25.112 [INFO][6996] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 14 18:16:25.118696 containerd[1523]: 2025-05-14 18:16:25.114 [INFO][6981] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801"
May 14 18:16:25.119066 containerd[1523]: time="2025-05-14T18:16:25.118742746Z" level=info msg="TearDown network for sandbox \"38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801\" successfully"
May 14 18:16:25.121173 containerd[1523]: time="2025-05-14T18:16:25.121137663Z" level=info msg="Ensure that sandbox 38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801 in task-service has been cleanup successfully"
May 14 18:16:25.125946 containerd[1523]: time="2025-05-14T18:16:25.125904415Z" level=info msg="RemovePodSandbox \"38bb9fe663cbe7719509fc41b3d284380222e5776211acf542c0e24f9ca50801\" returns successfully"
May 14 18:16:25.144744 containerd[1523]: time="2025-05-14T18:16:25.144694940Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3ac2f6fb5e7e1bfa8fb7728b42735b8903b5947a758a6a3ccf4b21b23b4e618e\" id:\"d428905875ba2f53365995662cf06530710148bd5b28881616524bc5c192ccf0\" pid:7008 exited_at:{seconds:1747246585 nanos:144344735}"
May 14 18:16:28.461661 systemd[1]: Started sshd@30-10.0.0.119:22-10.0.0.1:56112.service - OpenSSH per-connection server daemon (10.0.0.1:56112).
May 14 18:16:28.523926 sshd[7023]: Accepted publickey for core from 10.0.0.1 port 56112 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo
May 14 18:16:28.524714 sshd-session[7023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:16:28.528723 systemd-logind[1500]: New session 31 of user core.
May 14 18:16:28.541199 systemd[1]: Started session-31.scope - Session 31 of User core.
May 14 18:16:28.651965 sshd[7025]: Connection closed by 10.0.0.1 port 56112
May 14 18:16:28.651695 sshd-session[7023]: pam_unix(sshd:session): session closed for user core
May 14 18:16:28.655382 systemd[1]: sshd@30-10.0.0.119:22-10.0.0.1:56112.service: Deactivated successfully.
May 14 18:16:28.657178 systemd[1]: session-31.scope: Deactivated successfully.
May 14 18:16:28.657907 systemd-logind[1500]: Session 31 logged out. Waiting for processes to exit.
May 14 18:16:28.659326 systemd-logind[1500]: Removed session 31.