May 9 00:21:20.875236 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 9 00:21:20.875256 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu May 8 22:43:24 -00 2025
May 9 00:21:20.875266 kernel: KASLR enabled
May 9 00:21:20.875272 kernel: efi: EFI v2.7 by EDK II
May 9 00:21:20.875278 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
May 9 00:21:20.875283 kernel: random: crng init done
May 9 00:21:20.875290 kernel: ACPI: Early table checksum verification disabled
May 9 00:21:20.875296 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
May 9 00:21:20.875302 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
May 9 00:21:20.875310 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:21:20.875316 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:21:20.875322 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:21:20.875328 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:21:20.875334 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:21:20.875341 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:21:20.875349 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:21:20.875355 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:21:20.875362 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:21:20.875368 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 9 00:21:20.875374 kernel: NUMA: Failed to initialise from firmware
May 9 00:21:20.875381 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 9 00:21:20.875387 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
May 9 00:21:20.875393 kernel: Zone ranges:
May 9 00:21:20.875400 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 9 00:21:20.875406 kernel: DMA32 empty
May 9 00:21:20.875414 kernel: Normal empty
May 9 00:21:20.875420 kernel: Movable zone start for each node
May 9 00:21:20.875426 kernel: Early memory node ranges
May 9 00:21:20.875433 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
May 9 00:21:20.875439 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
May 9 00:21:20.875446 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
May 9 00:21:20.875452 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 9 00:21:20.875458 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 9 00:21:20.875465 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 9 00:21:20.875471 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 9 00:21:20.875477 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 9 00:21:20.875483 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 9 00:21:20.875490 kernel: psci: probing for conduit method from ACPI.
May 9 00:21:20.875497 kernel: psci: PSCIv1.1 detected in firmware.
May 9 00:21:20.875503 kernel: psci: Using standard PSCI v0.2 function IDs
May 9 00:21:20.875512 kernel: psci: Trusted OS migration not required
May 9 00:21:20.875518 kernel: psci: SMC Calling Convention v1.1
May 9 00:21:20.875525 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 9 00:21:20.875533 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 9 00:21:20.875540 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 9 00:21:20.875547 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 9 00:21:20.875554 kernel: Detected PIPT I-cache on CPU0
May 9 00:21:20.875561 kernel: CPU features: detected: GIC system register CPU interface
May 9 00:21:20.875568 kernel: CPU features: detected: Hardware dirty bit management
May 9 00:21:20.875574 kernel: CPU features: detected: Spectre-v4
May 9 00:21:20.875646 kernel: CPU features: detected: Spectre-BHB
May 9 00:21:20.875654 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 9 00:21:20.875661 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 9 00:21:20.875670 kernel: CPU features: detected: ARM erratum 1418040
May 9 00:21:20.875677 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 9 00:21:20.875690 kernel: alternatives: applying boot alternatives
May 9 00:21:20.875698 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=8e29bd932c31237847976018676f554a4d09fa105e08b3bc01bcbb09708aefd3
May 9 00:21:20.875706 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 9 00:21:20.875713 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 9 00:21:20.875719 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 9 00:21:20.875726 kernel: Fallback order for Node 0: 0
May 9 00:21:20.875733 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 9 00:21:20.875739 kernel: Policy zone: DMA
May 9 00:21:20.875746 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 9 00:21:20.875754 kernel: software IO TLB: area num 4.
May 9 00:21:20.875761 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
May 9 00:21:20.875768 kernel: Memory: 2386400K/2572288K available (10304K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 185888K reserved, 0K cma-reserved)
May 9 00:21:20.875775 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 9 00:21:20.875782 kernel: rcu: Preemptible hierarchical RCU implementation.
May 9 00:21:20.875789 kernel: rcu: RCU event tracing is enabled.
May 9 00:21:20.875796 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 9 00:21:20.875803 kernel: Trampoline variant of Tasks RCU enabled.
May 9 00:21:20.875809 kernel: Tracing variant of Tasks RCU enabled.
May 9 00:21:20.875816 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 9 00:21:20.875823 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 9 00:21:20.875835 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 9 00:21:20.875843 kernel: GICv3: 256 SPIs implemented
May 9 00:21:20.875849 kernel: GICv3: 0 Extended SPIs implemented
May 9 00:21:20.875856 kernel: Root IRQ handler: gic_handle_irq
May 9 00:21:20.875862 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 9 00:21:20.875869 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 9 00:21:20.875876 kernel: ITS [mem 0x08080000-0x0809ffff]
May 9 00:21:20.875883 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
May 9 00:21:20.875890 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
May 9 00:21:20.875896 kernel: GICv3: using LPI property table @0x00000000400f0000
May 9 00:21:20.875903 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
May 9 00:21:20.875910 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 9 00:21:20.875918 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 9 00:21:20.875925 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 9 00:21:20.875932 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 9 00:21:20.875939 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 9 00:21:20.875946 kernel: arm-pv: using stolen time PV
May 9 00:21:20.875953 kernel: Console: colour dummy device 80x25
May 9 00:21:20.875960 kernel: ACPI: Core revision 20230628
May 9 00:21:20.875967 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 9 00:21:20.875974 kernel: pid_max: default: 32768 minimum: 301
May 9 00:21:20.875980 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 9 00:21:20.875988 kernel: landlock: Up and running.
May 9 00:21:20.875995 kernel: SELinux: Initializing.
May 9 00:21:20.876002 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 9 00:21:20.876009 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 9 00:21:20.876016 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 9 00:21:20.876023 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 9 00:21:20.876030 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 9 00:21:20.876037 kernel: rcu: Hierarchical SRCU implementation.
May 9 00:21:20.876044 kernel: rcu: Max phase no-delay instances is 400.
May 9 00:21:20.876052 kernel: Platform MSI: ITS@0x8080000 domain created
May 9 00:21:20.876059 kernel: PCI/MSI: ITS@0x8080000 domain created
May 9 00:21:20.876066 kernel: Remapping and enabling EFI services.
May 9 00:21:20.876072 kernel: smp: Bringing up secondary CPUs ...
May 9 00:21:20.876079 kernel: Detected PIPT I-cache on CPU1
May 9 00:21:20.876086 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 9 00:21:20.876093 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
May 9 00:21:20.876100 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 9 00:21:20.876107 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 9 00:21:20.876114 kernel: Detected PIPT I-cache on CPU2
May 9 00:21:20.876121 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 9 00:21:20.876128 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
May 9 00:21:20.876140 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 9 00:21:20.876148 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 9 00:21:20.876155 kernel: Detected PIPT I-cache on CPU3
May 9 00:21:20.876162 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 9 00:21:20.876170 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
May 9 00:21:20.876177 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 9 00:21:20.876184 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 9 00:21:20.876191 kernel: smp: Brought up 1 node, 4 CPUs
May 9 00:21:20.876199 kernel: SMP: Total of 4 processors activated.
May 9 00:21:20.876206 kernel: CPU features: detected: 32-bit EL0 Support
May 9 00:21:20.876214 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 9 00:21:20.876221 kernel: CPU features: detected: Common not Private translations
May 9 00:21:20.876228 kernel: CPU features: detected: CRC32 instructions
May 9 00:21:20.876235 kernel: CPU features: detected: Enhanced Virtualization Traps
May 9 00:21:20.876243 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 9 00:21:20.876251 kernel: CPU features: detected: LSE atomic instructions
May 9 00:21:20.876258 kernel: CPU features: detected: Privileged Access Never
May 9 00:21:20.876266 kernel: CPU features: detected: RAS Extension Support
May 9 00:21:20.876273 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 9 00:21:20.876291 kernel: CPU: All CPU(s) started at EL1
May 9 00:21:20.876298 kernel: alternatives: applying system-wide alternatives
May 9 00:21:20.876306 kernel: devtmpfs: initialized
May 9 00:21:20.876313 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 9 00:21:20.876320 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 9 00:21:20.876329 kernel: pinctrl core: initialized pinctrl subsystem
May 9 00:21:20.876336 kernel: SMBIOS 3.0.0 present.
May 9 00:21:20.876343 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
May 9 00:21:20.876351 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 9 00:21:20.876358 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 9 00:21:20.876365 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 9 00:21:20.876373 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 9 00:21:20.876380 kernel: audit: initializing netlink subsys (disabled)
May 9 00:21:20.876387 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1
May 9 00:21:20.876395 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 9 00:21:20.876403 kernel: cpuidle: using governor menu
May 9 00:21:20.876410 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 9 00:21:20.876417 kernel: ASID allocator initialised with 32768 entries
May 9 00:21:20.876424 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 9 00:21:20.876431 kernel: Serial: AMBA PL011 UART driver
May 9 00:21:20.876439 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 9 00:21:20.876446 kernel: Modules: 0 pages in range for non-PLT usage
May 9 00:21:20.876453 kernel: Modules: 509008 pages in range for PLT usage
May 9 00:21:20.876461 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 9 00:21:20.876468 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 9 00:21:20.876476 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 9 00:21:20.876483 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 9 00:21:20.876490 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 9 00:21:20.876497 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 9 00:21:20.876504 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 9 00:21:20.876511 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 9 00:21:20.876519 kernel: ACPI: Added _OSI(Module Device)
May 9 00:21:20.876527 kernel: ACPI: Added _OSI(Processor Device)
May 9 00:21:20.876534 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 9 00:21:20.876541 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 9 00:21:20.876548 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 9 00:21:20.876556 kernel: ACPI: Interpreter enabled
May 9 00:21:20.876563 kernel: ACPI: Using GIC for interrupt routing
May 9 00:21:20.876570 kernel: ACPI: MCFG table detected, 1 entries
May 9 00:21:20.876584 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 9 00:21:20.876593 kernel: printk: console [ttyAMA0] enabled
May 9 00:21:20.876602 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 9 00:21:20.876741 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 9 00:21:20.876815 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 9 00:21:20.876880 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 9 00:21:20.876944 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 9 00:21:20.877006 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 9 00:21:20.877016 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 9 00:21:20.877026 kernel: PCI host bridge to bus 0000:00
May 9 00:21:20.877094 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 9 00:21:20.877152 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 9 00:21:20.877208 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 9 00:21:20.877264 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 9 00:21:20.877341 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 9 00:21:20.877419 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 9 00:21:20.877490 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 9 00:21:20.877555 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 9 00:21:20.877634 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 9 00:21:20.877713 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 9 00:21:20.877784 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 9 00:21:20.877852 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 9 00:21:20.877916 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 9 00:21:20.877976 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 9 00:21:20.878035 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 9 00:21:20.878045 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 9 00:21:20.878052 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 9 00:21:20.878060 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 9 00:21:20.878067 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 9 00:21:20.878074 kernel: iommu: Default domain type: Translated
May 9 00:21:20.878084 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 9 00:21:20.878091 kernel: efivars: Registered efivars operations
May 9 00:21:20.878099 kernel: vgaarb: loaded
May 9 00:21:20.878106 kernel: clocksource: Switched to clocksource arch_sys_counter
May 9 00:21:20.878114 kernel: VFS: Disk quotas dquot_6.6.0
May 9 00:21:20.878121 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 9 00:21:20.878129 kernel: pnp: PnP ACPI init
May 9 00:21:20.878206 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 9 00:21:20.878217 kernel: pnp: PnP ACPI: found 1 devices
May 9 00:21:20.878226 kernel: NET: Registered PF_INET protocol family
May 9 00:21:20.878233 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 9 00:21:20.878241 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 9 00:21:20.878248 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 9 00:21:20.878256 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 9 00:21:20.878263 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 9 00:21:20.878270 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 9 00:21:20.878278 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 9 00:21:20.878286 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 9 00:21:20.878294 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 9 00:21:20.878301 kernel: PCI: CLS 0 bytes, default 64
May 9 00:21:20.878308 kernel: kvm [1]: HYP mode not available
May 9 00:21:20.878315 kernel: Initialise system trusted keyrings
May 9 00:21:20.878322 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 9 00:21:20.878329 kernel: Key type asymmetric registered
May 9 00:21:20.878340 kernel: Asymmetric key parser 'x509' registered
May 9 00:21:20.878348 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 9 00:21:20.878355 kernel: io scheduler mq-deadline registered
May 9 00:21:20.878364 kernel: io scheduler kyber registered
May 9 00:21:20.878371 kernel: io scheduler bfq registered
May 9 00:21:20.878378 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 9 00:21:20.878385 kernel: ACPI: button: Power Button [PWRB]
May 9 00:21:20.878393 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 9 00:21:20.878464 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 9 00:21:20.878474 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 9 00:21:20.878481 kernel: thunder_xcv, ver 1.0
May 9 00:21:20.878489 kernel: thunder_bgx, ver 1.0
May 9 00:21:20.878501 kernel: nicpf, ver 1.0
May 9 00:21:20.878508 kernel: nicvf, ver 1.0
May 9 00:21:20.878629 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 9 00:21:20.878711 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-09T00:21:20 UTC (1746750080)
May 9 00:21:20.878722 kernel: hid: raw HID events driver (C) Jiri Kosina
May 9 00:21:20.878729 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 9 00:21:20.878736 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 9 00:21:20.878744 kernel: watchdog: Hard watchdog permanently disabled
May 9 00:21:20.878755 kernel: NET: Registered PF_INET6 protocol family
May 9 00:21:20.878762 kernel: Segment Routing with IPv6
May 9 00:21:20.878770 kernel: In-situ OAM (IOAM) with IPv6
May 9 00:21:20.878777 kernel: NET: Registered PF_PACKET protocol family
May 9 00:21:20.878784 kernel: Key type dns_resolver registered
May 9 00:21:20.878791 kernel: registered taskstats version 1
May 9 00:21:20.878798 kernel: Loading compiled-in X.509 certificates
May 9 00:21:20.878806 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 7944e0e0bec5e8cad487856da19569eba337cea0'
May 9 00:21:20.878813 kernel: Key type .fscrypt registered
May 9 00:21:20.878822 kernel: Key type fscrypt-provisioning registered
May 9 00:21:20.878829 kernel: ima: No TPM chip found, activating TPM-bypass!
May 9 00:21:20.878836 kernel: ima: Allocated hash algorithm: sha1
May 9 00:21:20.878843 kernel: ima: No architecture policies found
May 9 00:21:20.878851 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 9 00:21:20.878858 kernel: clk: Disabling unused clocks
May 9 00:21:20.878865 kernel: Freeing unused kernel memory: 39424K
May 9 00:21:20.878873 kernel: Run /init as init process
May 9 00:21:20.878880 kernel: with arguments:
May 9 00:21:20.878888 kernel: /init
May 9 00:21:20.878895 kernel: with environment:
May 9 00:21:20.878902 kernel: HOME=/
May 9 00:21:20.878909 kernel: TERM=linux
May 9 00:21:20.878916 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 9 00:21:20.878926 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 9 00:21:20.878935 systemd[1]: Detected virtualization kvm.
May 9 00:21:20.878944 systemd[1]: Detected architecture arm64.
May 9 00:21:20.878952 systemd[1]: Running in initrd.
May 9 00:21:20.878960 systemd[1]: No hostname configured, using default hostname.
May 9 00:21:20.878967 systemd[1]: Hostname set to <localhost>.
May 9 00:21:20.878975 systemd[1]: Initializing machine ID from VM UUID.
May 9 00:21:20.878983 systemd[1]: Queued start job for default target initrd.target.
May 9 00:21:20.878991 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 00:21:20.878999 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 00:21:20.879009 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 9 00:21:20.879017 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 9 00:21:20.879025 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 9 00:21:20.879033 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 9 00:21:20.879043 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 9 00:21:20.879051 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 9 00:21:20.879059 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 00:21:20.879069 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 9 00:21:20.879077 systemd[1]: Reached target paths.target - Path Units.
May 9 00:21:20.879085 systemd[1]: Reached target slices.target - Slice Units.
May 9 00:21:20.879092 systemd[1]: Reached target swap.target - Swaps.
May 9 00:21:20.879100 systemd[1]: Reached target timers.target - Timer Units.
May 9 00:21:20.879108 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 9 00:21:20.879116 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 9 00:21:20.879124 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 9 00:21:20.879131 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 9 00:21:20.879140 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 9 00:21:20.879148 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 9 00:21:20.879156 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 00:21:20.879164 systemd[1]: Reached target sockets.target - Socket Units.
May 9 00:21:20.879171 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 9 00:21:20.879179 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 9 00:21:20.879187 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 9 00:21:20.879195 systemd[1]: Starting systemd-fsck-usr.service...
May 9 00:21:20.879203 systemd[1]: Starting systemd-journald.service - Journal Service...
May 9 00:21:20.879211 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 9 00:21:20.879219 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 00:21:20.879227 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 9 00:21:20.879234 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 00:21:20.879242 systemd[1]: Finished systemd-fsck-usr.service.
May 9 00:21:20.879252 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 9 00:21:20.879260 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 00:21:20.879268 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 00:21:20.879276 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 9 00:21:20.879299 systemd-journald[239]: Collecting audit messages is disabled.
May 9 00:21:20.879319 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 9 00:21:20.879328 systemd-journald[239]: Journal started
May 9 00:21:20.879346 systemd-journald[239]: Runtime Journal (/run/log/journal/a0948435abfd4cf0bdf2a2882d66d5d2) is 5.9M, max 47.3M, 41.4M free.
May 9 00:21:20.865058 systemd-modules-load[240]: Inserted module 'overlay'
May 9 00:21:20.884847 systemd[1]: Started systemd-journald.service - Journal Service.
May 9 00:21:20.884880 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 9 00:21:20.884878 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 00:21:20.887034 systemd-modules-load[240]: Inserted module 'br_netfilter'
May 9 00:21:20.887706 kernel: Bridge firewalling registered
May 9 00:21:20.887597 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 00:21:20.888769 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 9 00:21:20.903767 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 9 00:21:20.905074 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 9 00:21:20.906912 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 9 00:21:20.914506 dracut-cmdline[269]: dracut-dracut-053
May 9 00:21:20.914495 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 9 00:21:20.916850 dracut-cmdline[269]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=8e29bd932c31237847976018676f554a4d09fa105e08b3bc01bcbb09708aefd3
May 9 00:21:20.917916 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 00:21:20.922292 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 9 00:21:20.948918 systemd-resolved[295]: Positive Trust Anchors:
May 9 00:21:20.949639 systemd-resolved[295]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 9 00:21:20.950648 systemd-resolved[295]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 9 00:21:20.955265 systemd-resolved[295]: Defaulting to hostname 'linux'.
May 9 00:21:20.958666 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 9 00:21:20.959771 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 9 00:21:20.987606 kernel: SCSI subsystem initialized
May 9 00:21:20.992597 kernel: Loading iSCSI transport class v2.0-870.
May 9 00:21:20.999604 kernel: iscsi: registered transport (tcp)
May 9 00:21:21.011843 kernel: iscsi: registered transport (qla4xxx)
May 9 00:21:21.011870 kernel: QLogic iSCSI HBA Driver
May 9 00:21:21.052748 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 9 00:21:21.058733 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 9 00:21:21.076073 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 9 00:21:21.076125 kernel: device-mapper: uevent: version 1.0.3
May 9 00:21:21.076143 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 9 00:21:21.123610 kernel: raid6: neonx8 gen() 15590 MB/s
May 9 00:21:21.140598 kernel: raid6: neonx4 gen() 15503 MB/s
May 9 00:21:21.157601 kernel: raid6: neonx2 gen() 13081 MB/s
May 9 00:21:21.174594 kernel: raid6: neonx1 gen() 10406 MB/s
May 9 00:21:21.191593 kernel: raid6: int64x8 gen() 6916 MB/s
May 9 00:21:21.208591 kernel: raid6: int64x4 gen() 7268 MB/s
May 9 00:21:21.225594 kernel: raid6: int64x2 gen() 6068 MB/s
May 9 00:21:21.242601 kernel: raid6: int64x1 gen() 5021 MB/s
May 9 00:21:21.242629 kernel: raid6: using algorithm neonx8 gen() 15590 MB/s
May 9 00:21:21.259604 kernel: raid6: .... xor() 11915 MB/s, rmw enabled
May 9 00:21:21.259618 kernel: raid6: using neon recovery algorithm
May 9 00:21:21.264593 kernel: xor: measuring software checksum speed
May 9 00:21:21.264607 kernel: 8regs : 19788 MB/sec
May 9 00:21:21.266038 kernel: 32regs : 18098 MB/sec
May 9 00:21:21.266051 kernel: arm64_neon : 27052 MB/sec
May 9 00:21:21.266066 kernel: xor: using function: arm64_neon (27052 MB/sec)
May 9 00:21:21.319603 kernel: Btrfs loaded, zoned=no, fsverity=no
May 9 00:21:21.332949 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 9 00:21:21.344750 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 00:21:21.357158 systemd-udevd[461]: Using default interface naming scheme 'v255'.
May 9 00:21:21.360407 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 00:21:21.362988 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 9 00:21:21.381512 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation
May 9 00:21:21.412944 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 9 00:21:21.421734 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 9 00:21:21.463914 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 00:21:21.470757 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 9 00:21:21.483983 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 9 00:21:21.486667 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 9 00:21:21.489301 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 00:21:21.490231 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 9 00:21:21.497760 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 9 00:21:21.512635 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 9 00:21:21.517395 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 9 00:21:21.517729 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 9 00:21:21.517670 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 9 00:21:21.524106 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 9 00:21:21.517824 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 00:21:21.529136 kernel: GPT:9289727 != 19775487
May 9 00:21:21.529153 kernel: GPT:Alternate GPT header not at the end of the disk.
May 9 00:21:21.529163 kernel: GPT:9289727 != 19775487
May 9 00:21:21.529174 kernel: GPT: Use GNU Parted to correct GPT errors.
May 9 00:21:21.529183 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 9 00:21:21.526799 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 00:21:21.529955 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 9 00:21:21.530224 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 9 00:21:21.532466 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 9 00:21:21.543014 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 00:21:21.549779 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (509)
May 9 00:21:21.549815 kernel: BTRFS: device fsid 9a510efc-c158-4845-bfb8-279f8b20070f devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (522)
May 9 00:21:21.551636 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 9 00:21:21.556511 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 9 00:21:21.560736 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 00:21:21.570702 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 9 00:21:21.571782 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 9 00:21:21.577025 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 9 00:21:21.592722 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 9 00:21:21.594546 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 00:21:21.600369 disk-uuid[549]: Primary Header is updated.
May 9 00:21:21.600369 disk-uuid[549]: Secondary Entries is updated.
May 9 00:21:21.600369 disk-uuid[549]: Secondary Header is updated.
May 9 00:21:21.603449 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 9 00:21:21.616742 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 00:21:22.615600 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 9 00:21:22.616207 disk-uuid[550]: The operation has completed successfully.
May 9 00:21:22.635151 systemd[1]: disk-uuid.service: Deactivated successfully.
May 9 00:21:22.635247 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 9 00:21:22.663806 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 9 00:21:22.666646 sh[572]: Success
May 9 00:21:22.679614 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 9 00:21:22.715945 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 9 00:21:22.717407 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 9 00:21:22.718246 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 9 00:21:22.727945 kernel: BTRFS info (device dm-0): first mount of filesystem 9a510efc-c158-4845-bfb8-279f8b20070f
May 9 00:21:22.727981 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 9 00:21:22.727993 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 9 00:21:22.729736 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 9 00:21:22.729767 kernel: BTRFS info (device dm-0): using free space tree
May 9 00:21:22.732786 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 9 00:21:22.734029 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 9 00:21:22.743731 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 9 00:21:22.745251 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 9 00:21:22.753025 kernel: BTRFS info (device vda6): first mount of filesystem 9e7e8c5a-aee3-4b23-ab26-fabdbd68734c
May 9 00:21:22.753057 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 9 00:21:22.753068 kernel: BTRFS info (device vda6): using free space tree
May 9 00:21:22.755635 kernel: BTRFS info (device vda6): auto enabling async discard
May 9 00:21:22.765055 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 9 00:21:22.766066 kernel: BTRFS info (device vda6): last unmount of filesystem 9e7e8c5a-aee3-4b23-ab26-fabdbd68734c
May 9 00:21:22.771427 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 9 00:21:22.777748 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 9 00:21:22.834757 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 9 00:21:22.842782 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 9 00:21:22.874654 systemd-networkd[763]: lo: Link UP
May 9 00:21:22.874662 systemd-networkd[763]: lo: Gained carrier
May 9 00:21:22.876354 systemd-networkd[763]: Enumeration completed
May 9 00:21:22.876464 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 9 00:21:22.877004 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 00:21:22.877008 systemd-networkd[763]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 9 00:21:22.878083 systemd-networkd[763]: eth0: Link UP
May 9 00:21:22.878086 systemd-networkd[763]: eth0: Gained carrier
May 9 00:21:22.878093 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 00:21:22.878232 systemd[1]: Reached target network.target - Network.
May 9 00:21:22.886495 ignition[669]: Ignition 2.19.0
May 9 00:21:22.886502 ignition[669]: Stage: fetch-offline
May 9 00:21:22.886535 ignition[669]: no configs at "/usr/lib/ignition/base.d"
May 9 00:21:22.886543 ignition[669]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 00:21:22.886718 ignition[669]: parsed url from cmdline: ""
May 9 00:21:22.886721 ignition[669]: no config URL provided
May 9 00:21:22.886726 ignition[669]: reading system config file "/usr/lib/ignition/user.ign"
May 9 00:21:22.886733 ignition[669]: no config at "/usr/lib/ignition/user.ign"
May 9 00:21:22.886755 ignition[669]: op(1): [started] loading QEMU firmware config module
May 9 00:21:22.886759 ignition[669]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 9 00:21:22.896962 ignition[669]: op(1): [finished] loading QEMU firmware config module
May 9 00:21:22.898642 systemd-networkd[763]: eth0: DHCPv4 address 10.0.0.135/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 9 00:21:22.936497 ignition[669]: parsing config with SHA512: b60d8fced7910005f9b51b99eb998b7bf7e251fe511c62ba8eafa4a578e76a3052cf5b53e66a9b9695abf84b037e4866cd659fb5048c7b9354a342e6a62e066c
May 9 00:21:22.942147 unknown[669]: fetched base config from "system"
May 9 00:21:22.942156 unknown[669]: fetched user config from "qemu"
May 9 00:21:22.942640 ignition[669]: fetch-offline: fetch-offline passed
May 9 00:21:22.944161 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 9 00:21:22.942717 ignition[669]: Ignition finished successfully
May 9 00:21:22.945660 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 9 00:21:22.952756 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 9 00:21:22.962731 ignition[770]: Ignition 2.19.0
May 9 00:21:22.962741 ignition[770]: Stage: kargs
May 9 00:21:22.962905 ignition[770]: no configs at "/usr/lib/ignition/base.d"
May 9 00:21:22.962914 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 00:21:22.963874 ignition[770]: kargs: kargs passed
May 9 00:21:22.966373 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 9 00:21:22.963918 ignition[770]: Ignition finished successfully
May 9 00:21:22.976769 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 9 00:21:22.985908 ignition[779]: Ignition 2.19.0
May 9 00:21:22.985916 ignition[779]: Stage: disks
May 9 00:21:22.986079 ignition[779]: no configs at "/usr/lib/ignition/base.d"
May 9 00:21:22.986088 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 00:21:22.988113 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 9 00:21:22.987014 ignition[779]: disks: disks passed
May 9 00:21:22.990235 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 9 00:21:22.987060 ignition[779]: Ignition finished successfully
May 9 00:21:22.992895 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 9 00:21:22.994300 systemd[1]: Reached target local-fs.target - Local File Systems.
May 9 00:21:22.995960 systemd[1]: Reached target sysinit.target - System Initialization.
May 9 00:21:22.997440 systemd[1]: Reached target basic.target - Basic System.
May 9 00:21:23.010771 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 9 00:21:23.020513 systemd-fsck[789]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 9 00:21:23.025327 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 9 00:21:23.027426 systemd[1]: Mounting sysroot.mount - /sysroot...
May 9 00:21:23.074608 kernel: EXT4-fs (vda9): mounted filesystem 1a8c7c5d-87ec-4bc4-aa01-1ebc1d3c20e7 r/w with ordered data mode. Quota mode: none.
May 9 00:21:23.074674 systemd[1]: Mounted sysroot.mount - /sysroot.
May 9 00:21:23.075925 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 9 00:21:23.088686 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 9 00:21:23.091027 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 9 00:21:23.092008 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 9 00:21:23.092050 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 9 00:21:23.092072 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 9 00:21:23.097822 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 9 00:21:23.100167 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 9 00:21:23.104914 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (797)
May 9 00:21:23.104943 kernel: BTRFS info (device vda6): first mount of filesystem 9e7e8c5a-aee3-4b23-ab26-fabdbd68734c
May 9 00:21:23.104954 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 9 00:21:23.104964 kernel: BTRFS info (device vda6): using free space tree
May 9 00:21:23.106711 kernel: BTRFS info (device vda6): auto enabling async discard
May 9 00:21:23.108116 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 9 00:21:23.138168 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory
May 9 00:21:23.141437 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory
May 9 00:21:23.145494 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory
May 9 00:21:23.149297 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory
May 9 00:21:23.226519 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 9 00:21:23.241731 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 9 00:21:23.243120 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 9 00:21:23.248588 kernel: BTRFS info (device vda6): last unmount of filesystem 9e7e8c5a-aee3-4b23-ab26-fabdbd68734c
May 9 00:21:23.261554 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 9 00:21:23.265731 ignition[911]: INFO : Ignition 2.19.0
May 9 00:21:23.265731 ignition[911]: INFO : Stage: mount
May 9 00:21:23.266875 ignition[911]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 00:21:23.266875 ignition[911]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 00:21:23.266875 ignition[911]: INFO : mount: mount passed
May 9 00:21:23.266875 ignition[911]: INFO : Ignition finished successfully
May 9 00:21:23.268045 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 9 00:21:23.278736 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 9 00:21:23.727275 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 9 00:21:23.740768 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 9 00:21:23.747597 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (925)
May 9 00:21:23.749858 kernel: BTRFS info (device vda6): first mount of filesystem 9e7e8c5a-aee3-4b23-ab26-fabdbd68734c
May 9 00:21:23.749886 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 9 00:21:23.749897 kernel: BTRFS info (device vda6): using free space tree
May 9 00:21:23.751594 kernel: BTRFS info (device vda6): auto enabling async discard
May 9 00:21:23.753117 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 9 00:21:23.769377 ignition[942]: INFO : Ignition 2.19.0
May 9 00:21:23.769377 ignition[942]: INFO : Stage: files
May 9 00:21:23.770619 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 00:21:23.770619 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 00:21:23.770619 ignition[942]: DEBUG : files: compiled without relabeling support, skipping
May 9 00:21:23.773061 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 9 00:21:23.773061 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 9 00:21:23.776145 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 9 00:21:23.777312 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 9 00:21:23.778632 unknown[942]: wrote ssh authorized keys file for user: core
May 9 00:21:23.779709 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 9 00:21:23.781365 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
May 9 00:21:23.783257 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
May 9 00:21:23.783257 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 9 00:21:23.783257 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 9 00:21:24.373830 systemd-networkd[763]: eth0: Gained IPv6LL
May 9 00:21:24.581840 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 9 00:21:25.750311 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 9 00:21:25.752514 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 9 00:21:25.752514 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 9 00:21:25.752514 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 9 00:21:25.752514 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 9 00:21:25.752514 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 9 00:21:25.752514 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 9 00:21:25.752514 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 9 00:21:25.752514 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 9 00:21:25.752514 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 9 00:21:25.752514 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 9 00:21:25.752514 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 9 00:21:25.752514 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 9 00:21:25.752514 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 9 00:21:25.752514 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
May 9 00:21:26.081939 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 9 00:21:26.439076 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 9 00:21:26.439076 ignition[942]: INFO : files: op(c): [started] processing unit "containerd.service"
May 9 00:21:26.442516 ignition[942]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 9 00:21:26.442516 ignition[942]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 9 00:21:26.442516 ignition[942]: INFO : files: op(c): [finished] processing unit "containerd.service"
May 9 00:21:26.442516 ignition[942]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
May 9 00:21:26.442516 ignition[942]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 9 00:21:26.442516 ignition[942]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 9 00:21:26.442516 ignition[942]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
May 9 00:21:26.442516 ignition[942]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
May 9 00:21:26.442516 ignition[942]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 9 00:21:26.442516 ignition[942]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 9 00:21:26.442516 ignition[942]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
May 9 00:21:26.442516 ignition[942]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
May 9 00:21:26.467089 ignition[942]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 9 00:21:26.470376 ignition[942]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 9 00:21:26.472733 ignition[942]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
May 9 00:21:26.472733 ignition[942]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
May 9 00:21:26.472733 ignition[942]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
May 9 00:21:26.472733 ignition[942]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
May 9 00:21:26.472733 ignition[942]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 9 00:21:26.472733 ignition[942]: INFO : files: files passed
May 9 00:21:26.472733 ignition[942]: INFO : Ignition finished successfully
May 9 00:21:26.473110 systemd[1]: Finished ignition-files.service - Ignition (files).
May 9 00:21:26.481748 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 9 00:21:26.485115 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 9 00:21:26.486591 systemd[1]: ignition-quench.service: Deactivated successfully.
May 9 00:21:26.486674 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 9 00:21:26.492778 initrd-setup-root-after-ignition[969]: grep: /sysroot/oem/oem-release: No such file or directory
May 9 00:21:26.495745 initrd-setup-root-after-ignition[971]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 9 00:21:26.495745 initrd-setup-root-after-ignition[971]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 9 00:21:26.499642 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 9 00:21:26.497472 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 9 00:21:26.499074 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 9 00:21:26.508716 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 9 00:21:26.526343 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 9 00:21:26.526462 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 9 00:21:26.528419 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 9 00:21:26.530703 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 9 00:21:26.532245 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 9 00:21:26.532972 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 9 00:21:26.547736 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 9 00:21:26.560792 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 9 00:21:26.570759 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 9 00:21:26.571952 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 00:21:26.573755 systemd[1]: Stopped target timers.target - Timer Units.
May 9 00:21:26.575293 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 9 00:21:26.575408 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 9 00:21:26.577610 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 9 00:21:26.579504 systemd[1]: Stopped target basic.target - Basic System.
May 9 00:21:26.581028 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 9 00:21:26.582591 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 9 00:21:26.584353 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 9 00:21:26.586127 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 9 00:21:26.587771 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 9 00:21:26.589535 systemd[1]: Stopped target sysinit.target - System Initialization.
May 9 00:21:26.591322 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 9 00:21:26.592876 systemd[1]: Stopped target swap.target - Swaps.
May 9 00:21:26.594228 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 9 00:21:26.594347 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 9 00:21:26.596374 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 9 00:21:26.598090 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 00:21:26.599863 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 9 00:21:26.601496 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 00:21:26.602748 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 9 00:21:26.602854 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 9 00:21:26.605234 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 9 00:21:26.605346 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 9 00:21:26.607182 systemd[1]: Stopped target paths.target - Path Units.
May 9 00:21:26.608620 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 9 00:21:26.613652 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 00:21:26.614954 systemd[1]: Stopped target slices.target - Slice Units.
May 9 00:21:26.616736 systemd[1]: Stopped target sockets.target - Socket Units.
May 9 00:21:26.618178 systemd[1]: iscsid.socket: Deactivated successfully.
May 9 00:21:26.618271 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 9 00:21:26.619658 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 9 00:21:26.619751 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 9 00:21:26.621188 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 9 00:21:26.621297 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 9 00:21:26.622914 systemd[1]: ignition-files.service: Deactivated successfully.
May 9 00:21:26.623018 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 9 00:21:26.638747 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 9 00:21:26.639618 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 9 00:21:26.639748 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 00:21:26.642615 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 9 00:21:26.643221 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 9 00:21:26.643329 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 00:21:26.644246 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 9 00:21:26.659160 ignition[995]: INFO : Ignition 2.19.0
May 9 00:21:26.659160 ignition[995]: INFO : Stage: umount
May 9 00:21:26.659160 ignition[995]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 00:21:26.659160 ignition[995]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 00:21:26.644340 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 9 00:21:26.664427 ignition[995]: INFO : umount: umount passed
May 9 00:21:26.664427 ignition[995]: INFO : Ignition finished successfully
May 9 00:21:26.661856 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 9 00:21:26.662268 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 9 00:21:26.662343 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 9 00:21:26.664466 systemd[1]: ignition-mount.service: Deactivated successfully.
May 9 00:21:26.664552 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 9 00:21:26.667071 systemd[1]: Stopped target network.target - Network.
May 9 00:21:26.668066 systemd[1]: ignition-disks.service: Deactivated successfully.
May 9 00:21:26.668140 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 9 00:21:26.669564 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 9 00:21:26.669636 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 9 00:21:26.671984 systemd[1]: ignition-setup.service: Deactivated successfully.
May 9 00:21:26.672030 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 9 00:21:26.674468 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 9 00:21:26.674518 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 9 00:21:26.676424 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 9 00:21:26.677874 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 9 00:21:26.678475 systemd-networkd[763]: eth0: DHCPv6 lease lost
May 9 00:21:26.680530 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 9 00:21:26.680651 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 9 00:21:26.682465 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 9 00:21:26.682518 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 9 00:21:26.689686 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 9 00:21:26.690549 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 9 00:21:26.690630 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 9 00:21:26.692667 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 00:21:26.695894 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 9 00:21:26.695983 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 9 00:21:26.699452 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 9 00:21:26.699509 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 9 00:21:26.700713 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 9 00:21:26.700763 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 9 00:21:26.703550 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 9 00:21:26.703640 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 00:21:26.713864 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 9 00:21:26.714596 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 00:21:26.716879 systemd[1]: network-cleanup.service: Deactivated successfully.
May 9 00:21:26.716973 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 9 00:21:26.718532 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 9 00:21:26.718613 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 9 00:21:26.719441 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 9 00:21:26.719472 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 00:21:26.722500 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 9 00:21:26.722551 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 9 00:21:26.726205 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 9 00:21:26.726256 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 9 00:21:26.728236 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 9 00:21:26.728284 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 00:21:26.743721 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 9 00:21:26.744731 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 9 00:21:26.744788 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 00:21:26.746879 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 9 00:21:26.746922 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 9 00:21:26.748785 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 9 00:21:26.748872 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 9 00:21:26.751889 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 9 00:21:26.751967 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 9 00:21:26.754097 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 9 00:21:26.754987 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 9 00:21:26.755035 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 9 00:21:26.757118 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 9 00:21:26.765525 systemd[1]: Switching root.
May 9 00:21:26.797159 systemd-journald[239]: Journal stopped
May 9 00:21:27.531799 systemd-journald[239]: Received SIGTERM from PID 1 (systemd).
May 9 00:21:27.531852 kernel: SELinux: policy capability network_peer_controls=1
May 9 00:21:27.531865 kernel: SELinux: policy capability open_perms=1
May 9 00:21:27.531875 kernel: SELinux: policy capability extended_socket_class=1
May 9 00:21:27.531885 kernel: SELinux: policy capability always_check_network=0
May 9 00:21:27.531895 kernel: SELinux: policy capability cgroup_seclabel=1
May 9 00:21:27.531905 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 9 00:21:27.531920 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 9 00:21:27.531935 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 9 00:21:27.531945 kernel: audit: type=1403 audit(1746750086.977:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 9 00:21:27.531956 systemd[1]: Successfully loaded SELinux policy in 40.134ms.
May 9 00:21:27.531974 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.920ms.
May 9 00:21:27.531986 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 9 00:21:27.531997 systemd[1]: Detected virtualization kvm.
May 9 00:21:27.532008 systemd[1]: Detected architecture arm64.
May 9 00:21:27.532020 systemd[1]: Detected first boot.
May 9 00:21:27.532031 systemd[1]: Initializing machine ID from VM UUID.
May 9 00:21:27.532041 zram_generator::config[1058]: No configuration found.
May 9 00:21:27.532053 systemd[1]: Populated /etc with preset unit settings.
May 9 00:21:27.532063 systemd[1]: Queued start job for default target multi-user.target.
May 9 00:21:27.532074 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 9 00:21:27.532086 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 9 00:21:27.532097 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 9 00:21:27.532107 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 9 00:21:27.532119 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 9 00:21:27.532130 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 9 00:21:27.532141 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 9 00:21:27.532151 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 9 00:21:27.532163 systemd[1]: Created slice user.slice - User and Session Slice.
May 9 00:21:27.532174 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 00:21:27.532185 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 00:21:27.532195 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
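
The long feature string above is systemd's compile-time option list for version 255. On a running Flatcar instance the same string, and the timings of the boot this journal records, can be checked with stock tooling (a sketch; exact output shapes vary by version):

    systemctl --version       # prints "systemd 255" plus the same +PAM +AUDIT ... feature flags
    systemd-analyze           # firmware/loader/kernel/initrd/userspace time for this boot
    systemd-analyze blame     # per-unit startup cost, handy when correlating with this journal
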
May 9 00:21:27.532207 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 9 00:21:27.532218 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 9 00:21:27.532230 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 9 00:21:27.532240 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 9 00:21:27.532251 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 00:21:27.532262 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 9 00:21:27.532272 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 00:21:27.532282 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 9 00:21:27.532294 systemd[1]: Reached target slices.target - Slice Units.
May 9 00:21:27.532309 systemd[1]: Reached target swap.target - Swaps.
May 9 00:21:27.532320 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 9 00:21:27.532337 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 9 00:21:27.532349 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 9 00:21:27.532359 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 9 00:21:27.532370 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 9 00:21:27.532380 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 9 00:21:27.532391 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 00:21:27.532403 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 9 00:21:27.532416 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 9 00:21:27.532427 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 9 00:21:27.532437 systemd[1]: Mounting media.mount - External Media Directory...
May 9 00:21:27.532447 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 9 00:21:27.532458 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 9 00:21:27.532469 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 9 00:21:27.532480 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 9 00:21:27.532490 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 00:21:27.532502 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 9 00:21:27.532513 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 9 00:21:27.532523 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 9 00:21:27.532538 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 9 00:21:27.532548 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 9 00:21:27.532559 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 9 00:21:27.532570 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 9 00:21:27.532589 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 9 00:21:27.532601 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
May 9 00:21:27.532614 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
May 9 00:21:27.532624 systemd[1]: Starting systemd-journald.service - Journal Service...
May 9 00:21:27.532635 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 9 00:21:27.532646 kernel: loop: module loaded
May 9 00:21:27.532656 kernel: fuse: init (API version 7.39)
May 9 00:21:27.532666 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 9 00:21:27.532677 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 9 00:21:27.532694 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 9 00:21:27.532705 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 9 00:21:27.532717 kernel: ACPI: bus type drm_connector registered
May 9 00:21:27.532727 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 9 00:21:27.532737 systemd[1]: Mounted media.mount - External Media Directory.
May 9 00:21:27.532747 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 9 00:21:27.532774 systemd-journald[1131]: Collecting audit messages is disabled.
May 9 00:21:27.532796 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 9 00:21:27.532806 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 9 00:21:27.532819 systemd-journald[1131]: Journal started
May 9 00:21:27.532841 systemd-journald[1131]: Runtime Journal (/run/log/journal/a0948435abfd4cf0bdf2a2882d66d5d2) is 5.9M, max 47.3M, 41.4M free.
May 9 00:21:27.534177 systemd[1]: Started systemd-journald.service - Journal Service.
May 9 00:21:27.536314 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 00:21:27.537458 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 9 00:21:27.537639 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 9 00:21:27.539079 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 00:21:27.539244 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 00:21:27.540345 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 9 00:21:27.540502 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 9 00:21:27.541653 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 9 00:21:27.541817 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 9 00:21:27.543160 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 9 00:21:27.543318 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 9 00:21:27.544739 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 9 00:21:27.544974 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 9 00:21:27.546166 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 9 00:21:27.547301 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 9 00:21:27.548538 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 9 00:21:27.549936 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
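
Each modprobe@<name>.service instance above is systemd's stock template unit, which simply invokes modprobe on the instance name; that is why every "Load Kernel Module X" start/finish pair is mirrored by a kernel line such as "loop: module loaded" or "fuse: init". The equivalent by hand (a sketch):

    systemctl cat modprobe@.service         # show the template body (a single modprobe call on the instance name)
    systemctl start modprobe@fuse.service   # same net effect as: modprobe fuse
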
May 9 00:21:27.561236 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 9 00:21:27.574717 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 9 00:21:27.576951 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 9 00:21:27.577834 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 9 00:21:27.580768 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 9 00:21:27.585750 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 9 00:21:27.586690 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 9 00:21:27.587822 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 9 00:21:27.589114 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 9 00:21:27.592766 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 9 00:21:27.594864 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 9 00:21:27.595811 systemd-journald[1131]: Time spent on flushing to /var/log/journal/a0948435abfd4cf0bdf2a2882d66d5d2 is 11.733ms for 844 entries.
May 9 00:21:27.595811 systemd-journald[1131]: System Journal (/var/log/journal/a0948435abfd4cf0bdf2a2882d66d5d2) is 8.0M, max 195.6M, 187.6M free.
May 9 00:21:27.619286 systemd-journald[1131]: Received client request to flush runtime journal.
May 9 00:21:27.598165 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 00:21:27.599476 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 9 00:21:27.600762 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 9 00:21:27.602055 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 9 00:21:27.605523 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 9 00:21:27.616790 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 9 00:21:27.624295 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 9 00:21:27.626250 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 9 00:21:27.627054 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
May 9 00:21:27.627069 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
May 9 00:21:27.629409 udevadm[1199]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 9 00:21:27.633230 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 9 00:21:27.645828 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 9 00:21:27.664097 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 9 00:21:27.675807 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 9 00:21:27.687460 systemd-tmpfiles[1212]: ACLs are not supported, ignoring.
May 9 00:21:27.687481 systemd-tmpfiles[1212]: ACLs are not supported, ignoring.
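
The flush above is systemd-journal-flush.service migrating the volatile runtime journal under /run/log/journal into the persistent one under /var/log/journal, whose size caps journald prints itself. The same machinery can be driven or inspected by hand (a sketch):

    journalctl --disk-usage    # combined size of runtime + persistent journal files
    journalctl --flush         # ask journald to move /run/log/journal to /var/log/journal
    journalctl --rotate        # rotate journal files afterwards if desired
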
May 9 00:21:27.691186 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 00:21:28.011726 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 9 00:21:28.023731 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 00:21:28.046918 systemd-udevd[1218]: Using default interface naming scheme 'v255'.
May 9 00:21:28.063772 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 00:21:28.072369 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 9 00:21:28.087881 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0.
May 9 00:21:28.108606 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1222)
May 9 00:21:28.126748 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 9 00:21:28.180620 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 9 00:21:28.182889 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 9 00:21:28.207878 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 00:21:28.215620 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 9 00:21:28.218844 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 9 00:21:28.239381 lvm[1254]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 9 00:21:28.250115 systemd-networkd[1224]: lo: Link UP
May 9 00:21:28.250122 systemd-networkd[1224]: lo: Gained carrier
May 9 00:21:28.250869 systemd-networkd[1224]: Enumeration completed
May 9 00:21:28.250992 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 9 00:21:28.251292 systemd-networkd[1224]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 00:21:28.251299 systemd-networkd[1224]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 9 00:21:28.251918 systemd-networkd[1224]: eth0: Link UP
May 9 00:21:28.251928 systemd-networkd[1224]: eth0: Gained carrier
May 9 00:21:28.251941 systemd-networkd[1224]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 00:21:28.258777 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 9 00:21:28.262572 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 00:21:28.265630 systemd-networkd[1224]: eth0: DHCPv4 address 10.0.0.135/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 9 00:21:28.281077 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 9 00:21:28.282453 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 9 00:21:28.295794 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 9 00:21:28.300319 lvm[1264]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 9 00:21:28.327026 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 9 00:21:28.328223 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
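
eth0 matched /usr/lib/systemd/network/zz-default.network, the catch-all unit shipped in the image (hence the warning about the potentially unpredictable interface name), and then obtained 10.0.0.135/16 over DHCPv4. The unit file itself is not reproduced in the log; a representative catch-all DHCP unit is roughly the following (a sketch, not the verbatim Flatcar file):

    [Match]
    Name=*

    [Network]
    DHCP=yes
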
May 9 00:21:28.329282 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 9 00:21:28.329311 systemd[1]: Reached target local-fs.target - Local File Systems.
May 9 00:21:28.330162 systemd[1]: Reached target machines.target - Containers.
May 9 00:21:28.331983 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 9 00:21:28.342723 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 9 00:21:28.344871 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 9 00:21:28.346026 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 00:21:28.346935 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 9 00:21:28.351736 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 9 00:21:28.353767 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 9 00:21:28.355340 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 9 00:21:28.362654 kernel: loop0: detected capacity change from 0 to 194096
May 9 00:21:28.363511 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 9 00:21:28.374526 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 9 00:21:28.375708 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 9 00:21:28.378596 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 9 00:21:28.418599 kernel: loop1: detected capacity change from 0 to 114432
May 9 00:21:28.460665 kernel: loop2: detected capacity change from 0 to 114328
May 9 00:21:28.510616 kernel: loop3: detected capacity change from 0 to 194096
May 9 00:21:28.521601 kernel: loop4: detected capacity change from 0 to 114432
May 9 00:21:28.533610 kernel: loop5: detected capacity change from 0 to 114328
May 9 00:21:28.536459 (sd-merge)[1287]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 9 00:21:28.536905 (sd-merge)[1287]: Merged extensions into '/usr'.
May 9 00:21:28.540498 systemd[1]: Reloading requested from client PID 1272 ('systemd-sysext') (unit systemd-sysext.service)...
May 9 00:21:28.540521 systemd[1]: Reloading...
May 9 00:21:28.585645 zram_generator::config[1315]: No configuration found.
May 9 00:21:28.591566 ldconfig[1268]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 9 00:21:28.677543 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 9 00:21:28.719984 systemd[1]: Reloading finished in 179 ms.
May 9 00:21:28.736247 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 9 00:21:28.737507 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 9 00:21:28.754734 systemd[1]: Starting ensure-sysext.service...
May 9 00:21:28.756660 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
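
The (sd-merge) lines show systemd-sysext attaching the extension images available on the system (the kubernetes .raw Ignition linked into /etc/extensions, plus the containerd-flatcar and docker-flatcar images) as read-only overlays on /usr, followed by a daemon reload so the newly visible unit files are picked up. The merge state can be inspected or redone later with (a sketch):

    systemd-sysext status     # which extensions are merged and onto which hierarchies
    systemd-sysext refresh    # unmerge and re-merge after changing the extension images
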
May 9 00:21:28.760711 systemd[1]: Reloading requested from client PID 1356 ('systemctl') (unit ensure-sysext.service)...
May 9 00:21:28.760729 systemd[1]: Reloading...
May 9 00:21:28.772852 systemd-tmpfiles[1357]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 9 00:21:28.773118 systemd-tmpfiles[1357]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 9 00:21:28.773877 systemd-tmpfiles[1357]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 9 00:21:28.774096 systemd-tmpfiles[1357]: ACLs are not supported, ignoring.
May 9 00:21:28.774144 systemd-tmpfiles[1357]: ACLs are not supported, ignoring.
May 9 00:21:28.776354 systemd-tmpfiles[1357]: Detected autofs mount point /boot during canonicalization of boot.
May 9 00:21:28.776368 systemd-tmpfiles[1357]: Skipping /boot
May 9 00:21:28.783025 systemd-tmpfiles[1357]: Detected autofs mount point /boot during canonicalization of boot.
May 9 00:21:28.783043 systemd-tmpfiles[1357]: Skipping /boot
May 9 00:21:28.813964 zram_generator::config[1389]: No configuration found.
May 9 00:21:28.899709 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 9 00:21:28.942774 systemd[1]: Reloading finished in 181 ms.
May 9 00:21:28.961355 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 00:21:28.983117 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 9 00:21:28.985689 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 9 00:21:28.987662 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 9 00:21:28.992842 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 9 00:21:28.995901 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 9 00:21:29.000462 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 00:21:29.004896 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 9 00:21:29.007302 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 9 00:21:29.014832 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 9 00:21:29.015731 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 00:21:29.016525 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 00:21:29.016690 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 00:21:29.019744 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 9 00:21:29.019892 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 9 00:21:29.021895 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 9 00:21:29.025422 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 9 00:21:29.025742 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 9 00:21:29.032590 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 00:21:29.041839 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 9 00:21:29.044363 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 9 00:21:29.047820 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 9 00:21:29.048805 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 00:21:29.051844 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 9 00:21:29.055310 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 9 00:21:29.056906 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 00:21:29.057046 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 00:21:29.059011 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 9 00:21:29.059149 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 9 00:21:29.061512 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 9 00:21:29.061738 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 9 00:21:29.065651 augenrules[1470]: No rules
May 9 00:21:29.068224 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 9 00:21:29.069732 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 9 00:21:29.073481 systemd-resolved[1432]: Positive Trust Anchors:
May 9 00:21:29.073495 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 00:21:29.073500 systemd-resolved[1432]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 9 00:21:29.073532 systemd-resolved[1432]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 9 00:21:29.079978 systemd-resolved[1432]: Defaulting to hostname 'linux'.
May 9 00:21:29.080722 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 9 00:21:29.082570 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 9 00:21:29.086790 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 9 00:21:29.089783 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 9 00:21:29.090785 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 00:21:29.090919 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 9 00:21:29.091333 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 9 00:21:29.092914 systemd[1]: Finished ensure-sysext.service.
May 9 00:21:29.093957 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 9 00:21:29.095124 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 00:21:29.095346 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 00:21:29.096492 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 9 00:21:29.096721 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 9 00:21:29.097826 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 9 00:21:29.098031 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 9 00:21:29.099191 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 9 00:21:29.099468 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 9 00:21:29.104550 systemd[1]: Reached target network.target - Network.
May 9 00:21:29.105428 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 9 00:21:29.106300 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 9 00:21:29.106369 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 9 00:21:29.108048 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 9 00:21:29.154862 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 9 00:21:29.155575 systemd-timesyncd[1501]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 9 00:21:29.155649 systemd-timesyncd[1501]: Initial clock synchronization to Fri 2025-05-09 00:21:29.039344 UTC.
May 9 00:21:29.156373 systemd[1]: Reached target sysinit.target - System Initialization.
May 9 00:21:29.157505 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 9 00:21:29.158444 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 9 00:21:29.159368 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 9 00:21:29.160271 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 9 00:21:29.160305 systemd[1]: Reached target paths.target - Path Units.
May 9 00:21:29.160973 systemd[1]: Reached target time-set.target - System Time Set.
May 9 00:21:29.161814 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 9 00:21:29.162649 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 9 00:21:29.163504 systemd[1]: Reached target timers.target - Timer Units.
May 9 00:21:29.164642 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 9 00:21:29.166661 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 9 00:21:29.168291 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 9 00:21:29.176554 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 9 00:21:29.177615 systemd[1]: Reached target sockets.target - Socket Units.
May 9 00:21:29.178314 systemd[1]: Reached target basic.target - Basic System.
May 9 00:21:29.179130 systemd[1]: System is tainted: cgroupsv1
May 9 00:21:29.179175 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 9 00:21:29.179195 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 9 00:21:29.180195 systemd[1]: Starting containerd.service - containerd container runtime...
May 9 00:21:29.181937 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 9 00:21:29.183532 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 9 00:21:29.187733 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 9 00:21:29.188437 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 9 00:21:29.189408 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 9 00:21:29.193206 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 9 00:21:29.201565 jq[1507]: false
May 9 00:21:29.201617 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 9 00:21:29.203551 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 9 00:21:29.210727 systemd[1]: Starting systemd-logind.service - User Login Management...
May 9 00:21:29.212150 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 9 00:21:29.218723 extend-filesystems[1508]: Found loop3
May 9 00:21:29.218723 extend-filesystems[1508]: Found loop4
May 9 00:21:29.218723 extend-filesystems[1508]: Found loop5
May 9 00:21:29.218723 extend-filesystems[1508]: Found vda
May 9 00:21:29.218723 extend-filesystems[1508]: Found vda1
May 9 00:21:29.218723 extend-filesystems[1508]: Found vda2
May 9 00:21:29.218723 extend-filesystems[1508]: Found vda3
May 9 00:21:29.218723 extend-filesystems[1508]: Found usr
May 9 00:21:29.218723 extend-filesystems[1508]: Found vda4
May 9 00:21:29.218723 extend-filesystems[1508]: Found vda6
May 9 00:21:29.218723 extend-filesystems[1508]: Found vda7
May 9 00:21:29.218723 extend-filesystems[1508]: Found vda9
May 9 00:21:29.218723 extend-filesystems[1508]: Checking size of /dev/vda9
May 9 00:21:29.218959 systemd[1]: Starting update-engine.service - Update Engine...
May 9 00:21:29.239620 dbus-daemon[1506]: [system] SELinux support is enabled
May 9 00:21:29.220967 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 9 00:21:29.231399 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 9 00:21:29.247907 jq[1528]: true
May 9 00:21:29.231621 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 9 00:21:29.231868 systemd[1]: motdgen.service: Deactivated successfully.
May 9 00:21:29.232040 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 9 00:21:29.237856 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 9 00:21:29.238038 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 9 00:21:29.240092 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 9 00:21:29.255474 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 9 00:21:29.255508 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
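
extend-filesystems has enumerated the disk's partitions and is checking /dev/vda9, the root filesystem; the entries that follow show that ext4 volume being grown online from 553472 to 1864699 4k blocks. Once a partition has been enlarged, the core of such a grow is a single online resize (a sketch):

    resize2fs /dev/vda9    # grow a mounted ext4 filesystem to fill its partition
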
May 9 00:21:29.255739 tar[1534]: linux-arm64/helm
May 9 00:21:29.256767 extend-filesystems[1508]: Resized partition /dev/vda9
May 9 00:21:29.257034 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 9 00:21:29.257049 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 9 00:21:29.257127 (ntainerd)[1538]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 9 00:21:29.266014 jq[1536]: true
May 9 00:21:29.278264 extend-filesystems[1548]: resize2fs 1.47.1 (20-May-2024)
May 9 00:21:29.282294 systemd-logind[1519]: Watching system buttons on /dev/input/event0 (Power Button)
May 9 00:21:29.287027 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 9 00:21:29.287074 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1232)
May 9 00:21:29.286636 systemd-logind[1519]: New seat seat0.
May 9 00:21:29.287514 systemd[1]: Started systemd-logind.service - User Login Management.
May 9 00:21:29.295656 update_engine[1521]: I20250509 00:21:29.294445 1521 main.cc:92] Flatcar Update Engine starting
May 9 00:21:29.318622 update_engine[1521]: I20250509 00:21:29.318528 1521 update_check_scheduler.cc:74] Next update check in 3m53s
May 9 00:21:29.318643 systemd[1]: Started update-engine.service - Update Engine.
May 9 00:21:29.321296 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 9 00:21:29.328767 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 9 00:21:29.328829 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 9 00:21:29.351491 extend-filesystems[1548]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 9 00:21:29.351491 extend-filesystems[1548]: old_desc_blocks = 1, new_desc_blocks = 1
May 9 00:21:29.351491 extend-filesystems[1548]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 9 00:21:29.354496 extend-filesystems[1508]: Resized filesystem in /dev/vda9
May 9 00:21:29.353671 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 9 00:21:29.353914 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 9 00:21:29.364649 bash[1566]: Updated "/home/core/.ssh/authorized_keys"
May 9 00:21:29.371847 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 9 00:21:29.376346 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 9 00:21:29.383144 locksmithd[1567]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 9 00:21:29.508925 containerd[1538]: time="2025-05-09T00:21:29.508842880Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
May 9 00:21:29.521266 sshd_keygen[1529]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 9 00:21:29.532441 containerd[1538]: time="2025-05-09T00:21:29.532403840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
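
containerd v1.7 probes every built-in plugin at startup; the skip lines that follow each carry the reason a backend is unusable on this host (no aufs module, root is ext4 rather than btrfs or zfs, devmapper unconfigured), leaving overlayfs as the effective snapshotter. The surviving plugin set can be listed from the CLI (a sketch):

    ctr plugins ls    # TYPE, ID, PLATFORMS and STATUS (ok / skip) for each plugin
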
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 9 00:21:29.533859 containerd[1538]: time="2025-05-09T00:21:29.533717960Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 9 00:21:29.533859 containerd[1538]: time="2025-05-09T00:21:29.533734400Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 9 00:21:29.533964 containerd[1538]: time="2025-05-09T00:21:29.533877480Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 9 00:21:29.533964 containerd[1538]: time="2025-05-09T00:21:29.533895120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 9 00:21:29.533964 containerd[1538]: time="2025-05-09T00:21:29.533946240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 9 00:21:29.535133 containerd[1538]: time="2025-05-09T00:21:29.533958120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 9 00:21:29.535133 containerd[1538]: time="2025-05-09T00:21:29.534809600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 9 00:21:29.535133 containerd[1538]: time="2025-05-09T00:21:29.534873000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 9 00:21:29.535133 containerd[1538]: time="2025-05-09T00:21:29.534888200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 9 00:21:29.535133 containerd[1538]: time="2025-05-09T00:21:29.534898120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 9 00:21:29.535133 containerd[1538]: time="2025-05-09T00:21:29.534989720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 9 00:21:29.535273 containerd[1538]: time="2025-05-09T00:21:29.535164760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 9 00:21:29.535500 containerd[1538]: time="2025-05-09T00:21:29.535472440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 9 00:21:29.535529 containerd[1538]: time="2025-05-09T00:21:29.535500000Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 9 00:21:29.535621 containerd[1538]: time="2025-05-09T00:21:29.535602480Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 May 9 00:21:29.535739 containerd[1538]: time="2025-05-09T00:21:29.535718520Z" level=info msg="metadata content store policy set" policy=shared May 9 00:21:29.539280 containerd[1538]: time="2025-05-09T00:21:29.539245440Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 9 00:21:29.539330 containerd[1538]: time="2025-05-09T00:21:29.539298560Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 9 00:21:29.539330 containerd[1538]: time="2025-05-09T00:21:29.539316720Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 9 00:21:29.539387 containerd[1538]: time="2025-05-09T00:21:29.539343600Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 9 00:21:29.539429 containerd[1538]: time="2025-05-09T00:21:29.539394200Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 9 00:21:29.539543 containerd[1538]: time="2025-05-09T00:21:29.539521040Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 9 00:21:29.539898 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 9 00:21:29.540731 containerd[1538]: time="2025-05-09T00:21:29.540705360Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 9 00:21:29.540874 containerd[1538]: time="2025-05-09T00:21:29.540844200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 9 00:21:29.540874 containerd[1538]: time="2025-05-09T00:21:29.540871960Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 9 00:21:29.540933 containerd[1538]: time="2025-05-09T00:21:29.540886240Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 9 00:21:29.540933 containerd[1538]: time="2025-05-09T00:21:29.540900640Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 9 00:21:29.540933 containerd[1538]: time="2025-05-09T00:21:29.540914680Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 9 00:21:29.540933 containerd[1538]: time="2025-05-09T00:21:29.540930160Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 9 00:21:29.541007 containerd[1538]: time="2025-05-09T00:21:29.540944720Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 9 00:21:29.541007 containerd[1538]: time="2025-05-09T00:21:29.540958480Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 9 00:21:29.541007 containerd[1538]: time="2025-05-09T00:21:29.540971720Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 9 00:21:29.541007 containerd[1538]: time="2025-05-09T00:21:29.540984640Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 May 9 00:21:29.541007 containerd[1538]: time="2025-05-09T00:21:29.540997680Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 9 00:21:29.541105 containerd[1538]: time="2025-05-09T00:21:29.541016520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 9 00:21:29.541105 containerd[1538]: time="2025-05-09T00:21:29.541029880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 9 00:21:29.541105 containerd[1538]: time="2025-05-09T00:21:29.541041920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 9 00:21:29.541105 containerd[1538]: time="2025-05-09T00:21:29.541056400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 9 00:21:29.541105 containerd[1538]: time="2025-05-09T00:21:29.541068960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 9 00:21:29.541105 containerd[1538]: time="2025-05-09T00:21:29.541082480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 9 00:21:29.541105 containerd[1538]: time="2025-05-09T00:21:29.541107240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 9 00:21:29.541229 containerd[1538]: time="2025-05-09T00:21:29.541121880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 9 00:21:29.541229 containerd[1538]: time="2025-05-09T00:21:29.541136040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 9 00:21:29.541229 containerd[1538]: time="2025-05-09T00:21:29.541153760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 9 00:21:29.541229 containerd[1538]: time="2025-05-09T00:21:29.541164960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 9 00:21:29.541229 containerd[1538]: time="2025-05-09T00:21:29.541176320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 9 00:21:29.541229 containerd[1538]: time="2025-05-09T00:21:29.541193800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 9 00:21:29.541229 containerd[1538]: time="2025-05-09T00:21:29.541209360Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 9 00:21:29.541229 containerd[1538]: time="2025-05-09T00:21:29.541228800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 9 00:21:29.541365 containerd[1538]: time="2025-05-09T00:21:29.541241440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 9 00:21:29.541365 containerd[1538]: time="2025-05-09T00:21:29.541252000Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 9 00:21:29.541399 containerd[1538]: time="2025-05-09T00:21:29.541362760Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 May 9 00:21:29.541399 containerd[1538]: time="2025-05-09T00:21:29.541381760Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 9 00:21:29.541399 containerd[1538]: time="2025-05-09T00:21:29.541393760Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 9 00:21:29.541454 containerd[1538]: time="2025-05-09T00:21:29.541405400Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 9 00:21:29.541454 containerd[1538]: time="2025-05-09T00:21:29.541414600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 9 00:21:29.541454 containerd[1538]: time="2025-05-09T00:21:29.541426360Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 9 00:21:29.541454 containerd[1538]: time="2025-05-09T00:21:29.541436080Z" level=info msg="NRI interface is disabled by configuration." May 9 00:21:29.541454 containerd[1538]: time="2025-05-09T00:21:29.541446200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 9 00:21:29.541875 containerd[1538]: time="2025-05-09T00:21:29.541810240Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 9 00:21:29.541875 containerd[1538]: time="2025-05-09T00:21:29.541872200Z" level=info msg="Connect containerd service" May 9 00:21:29.542011 containerd[1538]: time="2025-05-09T00:21:29.541964320Z" level=info msg="using legacy CRI server" May 9 00:21:29.542011 containerd[1538]: time="2025-05-09T00:21:29.541972200Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 9 00:21:29.542065 containerd[1538]: time="2025-05-09T00:21:29.542050520Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 9 00:21:29.543036 containerd[1538]: time="2025-05-09T00:21:29.542954960Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 9 00:21:29.548654 containerd[1538]: time="2025-05-09T00:21:29.545826200Z" level=info msg="Start subscribing containerd event" May 9 00:21:29.548938 containerd[1538]: time="2025-05-09T00:21:29.548659480Z" level=info msg="Start recovering state" May 9 00:21:29.548973 containerd[1538]: time="2025-05-09T00:21:29.548963800Z" level=info msg="Start event monitor" May 9 00:21:29.549002 containerd[1538]: time="2025-05-09T00:21:29.548978640Z" level=info msg="Start snapshots syncer" May 9 00:21:29.549002 containerd[1538]: time="2025-05-09T00:21:29.548989640Z" level=info msg="Start cni network conf syncer for default" May 9 00:21:29.549002 containerd[1538]: time="2025-05-09T00:21:29.548997320Z" level=info msg="Start streaming server" May 9 00:21:29.549129 containerd[1538]: time="2025-05-09T00:21:29.547973760Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 9 00:21:29.549189 containerd[1538]: time="2025-05-09T00:21:29.549172360Z" level=info msg=serving... address=/run/containerd/containerd.sock May 9 00:21:29.549779 containerd[1538]: time="2025-05-09T00:21:29.549762000Z" level=info msg="containerd successfully booted in 0.041837s" May 9 00:21:29.551920 systemd[1]: Starting issuegen.service - Generate /run/issue... May 9 00:21:29.555724 systemd[1]: Started containerd.service - containerd container runtime. May 9 00:21:29.561179 systemd[1]: issuegen.service: Deactivated successfully. May 9 00:21:29.561418 systemd[1]: Finished issuegen.service - Generate /run/issue. May 9 00:21:29.569846 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 9 00:21:29.578911 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 9 00:21:29.581986 systemd[1]: Started getty@tty1.service - Getty on tty1. May 9 00:21:29.584853 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 9 00:21:29.586193 systemd[1]: Reached target getty.target - Login Prompts. May 9 00:21:29.631961 tar[1534]: linux-arm64/LICENSE May 9 00:21:29.632042 tar[1534]: linux-arm64/README.md May 9 00:21:29.643805 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
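[Annotation] The CRI plugin above comes up with networking unconfigured: the init-time error reports no network config found in /etc/cni/net.d, which is expected on first boot before any CNI provider has been installed. Purely as a hypothetical illustration (nothing in this log writes such a file), a minimal bridge network of the kind the plugin's conf syncer would pick up could look like the following, assuming the standard bridge, host-local, and portmap plugins exist under the /opt/cni/bin directory named in the config dump:

# hypothetical example only: a minimal CNI conflist for the cri plugin to load
cat <<'EOF' >/etc/cni/net.d/10-containerd-net.conflist
{
  "cniVersion": "1.0.0",
  "name": "containerd-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{"subnet": "10.88.0.0/16"}]]
      }
    },
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}
EOF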
May 9 00:21:30.197687 systemd-networkd[1224]: eth0: Gained IPv6LL May 9 00:21:30.200691 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 9 00:21:30.203129 systemd[1]: Reached target network-online.target - Network is Online. May 9 00:21:30.212853 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 9 00:21:30.214971 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:21:30.217050 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 9 00:21:30.232060 systemd[1]: coreos-metadata.service: Deactivated successfully. May 9 00:21:30.232336 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 9 00:21:30.234215 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 9 00:21:30.242514 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 9 00:21:30.685267 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:21:30.686833 systemd[1]: Reached target multi-user.target - Multi-User System. May 9 00:21:30.688019 systemd[1]: Startup finished in 6.814s (kernel) + 3.756s (userspace) = 10.570s. May 9 00:21:30.689165 (kubelet)[1644]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 00:21:31.180596 kubelet[1644]: E0509 00:21:31.180524 1644 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 00:21:31.183110 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 00:21:31.183284 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 00:21:33.594628 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 9 00:21:33.609833 systemd[1]: Started sshd@0-10.0.0.135:22-10.0.0.1:45376.service - OpenSSH per-connection server daemon (10.0.0.1:45376). May 9 00:21:33.658411 sshd[1658]: Accepted publickey for core from 10.0.0.1 port 45376 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 9 00:21:33.660426 sshd[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:21:33.670630 systemd-logind[1519]: New session 1 of user core. May 9 00:21:33.671569 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 9 00:21:33.681836 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 9 00:21:33.692269 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 9 00:21:33.694436 systemd[1]: Starting user@500.service - User Manager for UID 500... May 9 00:21:33.701099 (systemd)[1664]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 9 00:21:33.780559 systemd[1664]: Queued start job for default target default.target. May 9 00:21:33.780970 systemd[1664]: Created slice app.slice - User Application Slice. May 9 00:21:33.780993 systemd[1664]: Reached target paths.target - Paths. May 9 00:21:33.781005 systemd[1664]: Reached target timers.target - Timers. May 9 00:21:33.798706 systemd[1664]: Starting dbus.socket - D-Bus User Message Bus Socket... 
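[Annotation] The kubelet failure above is a missing-config exit, not a crash: /var/lib/kubelet/config.yaml does not exist yet, and on a node like this it is normally generated by kubeadm init, which has not run. As an illustrative sketch only (field values are assumptions, not taken from this log), the file the kubelet is looking for is a KubeletConfiguration along these lines:

# illustrative sketch; kubeadm init normally writes this file itself
cat <<'EOF' >/var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
staticPodPath: /etc/kubernetes/manifests
cgroupDriver: cgroupfs    # matches the CgroupDriver this kubelet reports later
authentication:
  anonymous:
    enabled: false
EOF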
May 9 00:21:33.805134 systemd[1664]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 9 00:21:33.805211 systemd[1664]: Reached target sockets.target - Sockets. May 9 00:21:33.805223 systemd[1664]: Reached target basic.target - Basic System. May 9 00:21:33.805262 systemd[1664]: Reached target default.target - Main User Target. May 9 00:21:33.805287 systemd[1664]: Startup finished in 98ms. May 9 00:21:33.805591 systemd[1]: Started user@500.service - User Manager for UID 500. May 9 00:21:33.807159 systemd[1]: Started session-1.scope - Session 1 of User core. May 9 00:21:33.866867 systemd[1]: Started sshd@1-10.0.0.135:22-10.0.0.1:45392.service - OpenSSH per-connection server daemon (10.0.0.1:45392). May 9 00:21:33.900350 sshd[1676]: Accepted publickey for core from 10.0.0.1 port 45392 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 9 00:21:33.901727 sshd[1676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:21:33.906189 systemd-logind[1519]: New session 2 of user core. May 9 00:21:33.918862 systemd[1]: Started session-2.scope - Session 2 of User core. May 9 00:21:33.971181 sshd[1676]: pam_unix(sshd:session): session closed for user core May 9 00:21:33.982895 systemd[1]: Started sshd@2-10.0.0.135:22-10.0.0.1:45396.service - OpenSSH per-connection server daemon (10.0.0.1:45396). May 9 00:21:33.983292 systemd[1]: sshd@1-10.0.0.135:22-10.0.0.1:45392.service: Deactivated successfully. May 9 00:21:33.984989 systemd-logind[1519]: Session 2 logged out. Waiting for processes to exit. May 9 00:21:33.985726 systemd[1]: session-2.scope: Deactivated successfully. May 9 00:21:33.987080 systemd-logind[1519]: Removed session 2. May 9 00:21:34.016230 sshd[1681]: Accepted publickey for core from 10.0.0.1 port 45396 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 9 00:21:34.017596 sshd[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:21:34.021772 systemd-logind[1519]: New session 3 of user core. May 9 00:21:34.038886 systemd[1]: Started session-3.scope - Session 3 of User core. May 9 00:21:34.087216 sshd[1681]: pam_unix(sshd:session): session closed for user core May 9 00:21:34.098899 systemd[1]: Started sshd@3-10.0.0.135:22-10.0.0.1:45406.service - OpenSSH per-connection server daemon (10.0.0.1:45406). May 9 00:21:34.099306 systemd[1]: sshd@2-10.0.0.135:22-10.0.0.1:45396.service: Deactivated successfully. May 9 00:21:34.100997 systemd-logind[1519]: Session 3 logged out. Waiting for processes to exit. May 9 00:21:34.101655 systemd[1]: session-3.scope: Deactivated successfully. May 9 00:21:34.103223 systemd-logind[1519]: Removed session 3. May 9 00:21:34.132496 sshd[1689]: Accepted publickey for core from 10.0.0.1 port 45406 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 9 00:21:34.133837 sshd[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:21:34.137637 systemd-logind[1519]: New session 4 of user core. May 9 00:21:34.147932 systemd[1]: Started session-4.scope - Session 4 of User core. May 9 00:21:34.199984 sshd[1689]: pam_unix(sshd:session): session closed for user core May 9 00:21:34.211887 systemd[1]: Started sshd@4-10.0.0.135:22-10.0.0.1:45414.service - OpenSSH per-connection server daemon (10.0.0.1:45414). May 9 00:21:34.212288 systemd[1]: sshd@3-10.0.0.135:22-10.0.0.1:45406.service: Deactivated successfully. May 9 00:21:34.214111 systemd-logind[1519]: Session 4 logged out. Waiting for processes to exit. 
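[Annotation] Each short SSH login above gets its own session-N.scope inside user-500.slice, managed by systemd-logind and the user@500.service manager started earlier. Equivalent state on a live system can be inspected with the standard tools (shown for illustration; the session ID is one from this log):

loginctl list-sessions
loginctl session-status 3
systemctl status user-500.slice user@500.service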
May 9 00:21:34.214726 systemd[1]: session-4.scope: Deactivated successfully. May 9 00:21:34.216051 systemd-logind[1519]: Removed session 4. May 9 00:21:34.245289 sshd[1697]: Accepted publickey for core from 10.0.0.1 port 45414 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 9 00:21:34.246632 sshd[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:21:34.250634 systemd-logind[1519]: New session 5 of user core. May 9 00:21:34.261898 systemd[1]: Started session-5.scope - Session 5 of User core. May 9 00:21:34.324119 sudo[1704]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 9 00:21:34.324404 sudo[1704]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 00:21:34.338510 sudo[1704]: pam_unix(sudo:session): session closed for user root May 9 00:21:34.340390 sshd[1697]: pam_unix(sshd:session): session closed for user core May 9 00:21:34.349941 systemd[1]: Started sshd@5-10.0.0.135:22-10.0.0.1:45418.service - OpenSSH per-connection server daemon (10.0.0.1:45418). May 9 00:21:34.350406 systemd[1]: sshd@4-10.0.0.135:22-10.0.0.1:45414.service: Deactivated successfully. May 9 00:21:34.353052 systemd[1]: session-5.scope: Deactivated successfully. May 9 00:21:34.353228 systemd-logind[1519]: Session 5 logged out. Waiting for processes to exit. May 9 00:21:34.354736 systemd-logind[1519]: Removed session 5. May 9 00:21:34.383660 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 45418 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 9 00:21:34.384966 sshd[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:21:34.389330 systemd-logind[1519]: New session 6 of user core. May 9 00:21:34.397861 systemd[1]: Started session-6.scope - Session 6 of User core. May 9 00:21:34.449127 sudo[1714]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 9 00:21:34.449413 sudo[1714]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 00:21:34.452976 sudo[1714]: pam_unix(sudo:session): session closed for user root May 9 00:21:34.458079 sudo[1713]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 9 00:21:34.458375 sudo[1713]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 00:21:34.474918 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 9 00:21:34.476229 auditctl[1717]: No rules May 9 00:21:34.477107 systemd[1]: audit-rules.service: Deactivated successfully. May 9 00:21:34.477367 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 9 00:21:34.479167 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 9 00:21:34.503511 augenrules[1736]: No rules May 9 00:21:34.504910 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 9 00:21:34.506456 sudo[1713]: pam_unix(sudo:session): session closed for user root May 9 00:21:34.508420 sshd[1706]: pam_unix(sshd:session): session closed for user core May 9 00:21:34.519853 systemd[1]: Started sshd@6-10.0.0.135:22-10.0.0.1:45424.service - OpenSSH per-connection server daemon (10.0.0.1:45424). May 9 00:21:34.520288 systemd[1]: sshd@5-10.0.0.135:22-10.0.0.1:45418.service: Deactivated successfully. May 9 00:21:34.521729 systemd[1]: session-6.scope: Deactivated successfully. 
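[Annotation] The sudo pair above first removes the rule files under /etc/audit/rules.d and then restarts audit-rules.service, which is why both auditctl and augenrules subsequently report "No rules". The equivalent manual sequence with standard auditd tooling, for illustration:

auditctl -l         # list kernel-loaded audit rules; prints "No rules" here
augenrules --check  # compare /etc/audit/rules.d/*.rules with the compiled set
augenrules --load   # recompile and load, as the service restart does above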
May 9 00:21:34.523303 systemd-logind[1519]: Session 6 logged out. Waiting for processes to exit. May 9 00:21:34.524668 systemd-logind[1519]: Removed session 6. May 9 00:21:34.555961 sshd[1742]: Accepted publickey for core from 10.0.0.1 port 45424 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 9 00:21:34.557331 sshd[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:21:34.561274 systemd-logind[1519]: New session 7 of user core. May 9 00:21:34.577942 systemd[1]: Started session-7.scope - Session 7 of User core. May 9 00:21:34.628078 sudo[1749]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 9 00:21:34.628364 sudo[1749]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 00:21:34.936832 systemd[1]: Starting docker.service - Docker Application Container Engine... May 9 00:21:34.937080 (dockerd)[1768]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 9 00:21:35.193977 dockerd[1768]: time="2025-05-09T00:21:35.193847363Z" level=info msg="Starting up" May 9 00:21:35.262834 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3915392641-merged.mount: Deactivated successfully. May 9 00:21:35.434169 dockerd[1768]: time="2025-05-09T00:21:35.434104768Z" level=info msg="Loading containers: start." May 9 00:21:35.523605 kernel: Initializing XFRM netlink socket May 9 00:21:35.585761 systemd-networkd[1224]: docker0: Link UP May 9 00:21:35.601056 dockerd[1768]: time="2025-05-09T00:21:35.600999467Z" level=info msg="Loading containers: done." May 9 00:21:35.616190 dockerd[1768]: time="2025-05-09T00:21:35.616050949Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 9 00:21:35.616190 dockerd[1768]: time="2025-05-09T00:21:35.616178234Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 9 00:21:35.616372 dockerd[1768]: time="2025-05-09T00:21:35.616297420Z" level=info msg="Daemon has completed initialization" May 9 00:21:35.645659 dockerd[1768]: time="2025-05-09T00:21:35.645504942Z" level=info msg="API listen on /run/docker.sock" May 9 00:21:35.645774 systemd[1]: Started docker.service - Docker Application Container Engine. May 9 00:21:36.387693 containerd[1538]: time="2025-05-09T00:21:36.387651905Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 9 00:21:36.965117 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount92856607.mount: Deactivated successfully. 
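[Annotation] dockerd comes up on overlay2 and warns that native diff is disabled because the kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled; that is a build-performance note, not an error. The active storage driver and daemon version can be confirmed with the standard CLI, for illustration:

docker info --format '{{.Driver}} (docker {{.ServerVersion}})'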
May 9 00:21:38.556334 containerd[1538]: time="2025-05-09T00:21:38.556278147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:21:38.556883 containerd[1538]: time="2025-05-09T00:21:38.556842040Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794152" May 9 00:21:38.557622 containerd[1538]: time="2025-05-09T00:21:38.557593114Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:21:38.562607 containerd[1538]: time="2025-05-09T00:21:38.560987616Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:21:38.562607 containerd[1538]: time="2025-05-09T00:21:38.562126745Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 2.174430201s" May 9 00:21:38.562607 containerd[1538]: time="2025-05-09T00:21:38.562158347Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\"" May 9 00:21:38.581748 containerd[1538]: time="2025-05-09T00:21:38.581708892Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 9 00:21:40.687450 containerd[1538]: time="2025-05-09T00:21:40.687387191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:21:40.688153 containerd[1538]: time="2025-05-09T00:21:40.688102688Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855552" May 9 00:21:40.688682 containerd[1538]: time="2025-05-09T00:21:40.688650985Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:21:40.692833 containerd[1538]: time="2025-05-09T00:21:40.692148098Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:21:40.693570 containerd[1538]: time="2025-05-09T00:21:40.693522588Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 2.111769864s" May 9 00:21:40.693613 containerd[1538]: time="2025-05-09T00:21:40.693568771Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\"" May 9 00:21:40.712975 
containerd[1538]: time="2025-05-09T00:21:40.712939267Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 9 00:21:41.420010 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 9 00:21:41.430853 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:21:41.527889 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:21:41.532401 (kubelet)[2005]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 00:21:41.576089 kubelet[2005]: E0509 00:21:41.576021 2005 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 00:21:41.579462 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 00:21:41.579777 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 00:21:41.960214 containerd[1538]: time="2025-05-09T00:21:41.960162810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:21:41.960798 containerd[1538]: time="2025-05-09T00:21:41.960763277Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263947" May 9 00:21:41.961556 containerd[1538]: time="2025-05-09T00:21:41.961505389Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:21:41.964448 containerd[1538]: time="2025-05-09T00:21:41.964425398Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:21:41.966495 containerd[1538]: time="2025-05-09T00:21:41.966460972Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 1.253482292s" May 9 00:21:41.966542 containerd[1538]: time="2025-05-09T00:21:41.966494580Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\"" May 9 00:21:41.984120 containerd[1538]: time="2025-05-09T00:21:41.984088267Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 9 00:21:43.063705 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount714671324.mount: Deactivated successfully. 
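[Annotation] The kube-scheduler pull above goes through containerd's CRI image service, the same path crictl drives. Replaying such a pull by hand would look like this (standard crictl usage, for illustration; assumes crictl is pointed at /run/containerd/containerd.sock):

crictl pull registry.k8s.io/kube-scheduler:v1.30.12
crictl images registry.k8s.io/kube-scheduler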
May 9 00:21:43.382436 containerd[1538]: time="2025-05-09T00:21:43.382247729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:21:43.383401 containerd[1538]: time="2025-05-09T00:21:43.382982126Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775707" May 9 00:21:43.384054 containerd[1538]: time="2025-05-09T00:21:43.384012163Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:21:43.387680 containerd[1538]: time="2025-05-09T00:21:43.387012148Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:21:43.388083 containerd[1538]: time="2025-05-09T00:21:43.387683506Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.403553288s" May 9 00:21:43.388083 containerd[1538]: time="2025-05-09T00:21:43.387719254Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\"" May 9 00:21:43.407366 containerd[1538]: time="2025-05-09T00:21:43.407328793Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 9 00:21:43.900230 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount29896656.mount: Deactivated successfully. 
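[Annotation] CRI-pulled images such as kube-proxy above land in containerd's k8s.io namespace rather than the default one, so ctr needs the namespace flag to list them (standard ctr usage, for illustration):

ctr --namespace k8s.io images ls | grep kube-proxy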
May 9 00:21:44.670445 containerd[1538]: time="2025-05-09T00:21:44.670393645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:21:44.672013 containerd[1538]: time="2025-05-09T00:21:44.671930836Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" May 9 00:21:44.672792 containerd[1538]: time="2025-05-09T00:21:44.672755825Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:21:44.676211 containerd[1538]: time="2025-05-09T00:21:44.676157791Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:21:44.677417 containerd[1538]: time="2025-05-09T00:21:44.677372226Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.269763837s" May 9 00:21:44.677461 containerd[1538]: time="2025-05-09T00:21:44.677419840Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 9 00:21:44.696375 containerd[1538]: time="2025-05-09T00:21:44.696275651Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 9 00:21:45.137099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount958935154.mount: Deactivated successfully. 
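[Annotation] Note the version skew visible here: the CRI config dumped earlier pins SandboxImage registry.k8s.io/pause:3.8, while this kubelet release asks for and pulls pause:3.9 above. Aligning the two is a one-line containerd setting; as a hypothetical fragment (whether the section already exists in /etc/containerd/config.toml is an assumption, so check before appending):

cat <<'EOF' >>/etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.9"
EOF
systemctl restart containerd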
May 9 00:21:45.141884 containerd[1538]: time="2025-05-09T00:21:45.141829569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:21:45.142688 containerd[1538]: time="2025-05-09T00:21:45.142642653Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" May 9 00:21:45.143315 containerd[1538]: time="2025-05-09T00:21:45.143289942Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:21:45.145618 containerd[1538]: time="2025-05-09T00:21:45.145563240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:21:45.146698 containerd[1538]: time="2025-05-09T00:21:45.146659049Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 450.332828ms" May 9 00:21:45.146759 containerd[1538]: time="2025-05-09T00:21:45.146700448Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" May 9 00:21:45.166323 containerd[1538]: time="2025-05-09T00:21:45.166272193Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 9 00:21:45.647148 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2237736431.mount: Deactivated successfully. May 9 00:21:47.797663 containerd[1538]: time="2025-05-09T00:21:47.797614738Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:21:47.799026 containerd[1538]: time="2025-05-09T00:21:47.798997700Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" May 9 00:21:47.800038 containerd[1538]: time="2025-05-09T00:21:47.799982182Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:21:47.803195 containerd[1538]: time="2025-05-09T00:21:47.803152819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:21:47.804280 containerd[1538]: time="2025-05-09T00:21:47.804242822Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.637929385s" May 9 00:21:47.804353 containerd[1538]: time="2025-05-09T00:21:47.804278568Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" May 9 00:21:51.669979 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
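[Annotation] The kubelet has now failed twice on the same missing config file, and systemd schedules restart number 2. The unit's restart bookkeeping and the last failure can be read back directly (standard systemctl/journalctl, for illustration):

systemctl show kubelet.service -p NRestarts -p Result -p ExecMainStatus
journalctl -u kubelet.service -n 5 --no-pager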
May 9 00:21:51.679723 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:21:51.844732 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:21:51.847799 (kubelet)[2232]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 00:21:51.883175 kubelet[2232]: E0509 00:21:51.883116 2232 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 00:21:51.885716 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 00:21:51.885972 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 00:21:51.990390 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:21:52.001901 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:21:52.016775 systemd[1]: Reloading requested from client PID 2250 ('systemctl') (unit session-7.scope)... May 9 00:21:52.016797 systemd[1]: Reloading... May 9 00:21:52.073814 zram_generator::config[2290]: No configuration found. May 9 00:21:52.214619 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 00:21:52.265999 systemd[1]: Reloading finished in 248 ms. May 9 00:21:52.296171 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 9 00:21:52.296228 systemd[1]: kubelet.service: Failed with result 'signal'. May 9 00:21:52.296465 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:21:52.298009 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:21:52.386918 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:21:52.391614 (kubelet)[2346]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 9 00:21:52.428289 kubelet[2346]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 00:21:52.428289 kubelet[2346]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 9 00:21:52.428289 kubelet[2346]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
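[Annotation] The warnings above say --container-runtime-endpoint and --volume-plugin-dir should move into the kubelet config file, and the unit references KUBELET_EXTRA_ARGS, which is unset here. A hypothetical drop-in that would populate it (the drop-in name and the --node-ip value are assumptions, not from this log; 10.0.0.135 is the node address seen elsewhere in it):

mkdir -p /etc/systemd/system/kubelet.service.d
cat <<'EOF' >/etc/systemd/system/kubelet.service.d/20-extra-args.conf
[Service]
Environment="KUBELET_EXTRA_ARGS=--node-ip=10.0.0.135"
EOF
systemctl daemon-reload
systemctl restart kubelet.service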
May 9 00:21:52.429121 kubelet[2346]: I0509 00:21:52.429067 2346 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 9 00:21:53.409223 kubelet[2346]: I0509 00:21:53.409175 2346 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 9 00:21:53.409223 kubelet[2346]: I0509 00:21:53.409209 2346 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 9 00:21:53.409439 kubelet[2346]: I0509 00:21:53.409409 2346 server.go:927] "Client rotation is on, will bootstrap in background" May 9 00:21:53.442585 kubelet[2346]: E0509 00:21:53.442554 2346 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.135:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.135:6443: connect: connection refused May 9 00:21:53.442856 kubelet[2346]: I0509 00:21:53.442730 2346 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 9 00:21:53.452001 kubelet[2346]: I0509 00:21:53.451965 2346 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 9 00:21:53.452837 kubelet[2346]: I0509 00:21:53.452564 2346 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 9 00:21:53.452908 kubelet[2346]: I0509 00:21:53.452612 2346 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 9 00:21:53.453014 kubelet[2346]: I0509 00:21:53.452996 2346 topology_manager.go:138] "Creating topology manager with none policy" May 9 00:21:53.453014 kubelet[2346]: I0509 00:21:53.453014 2346 container_manager_linux.go:301] "Creating device plugin manager" May 9 00:21:53.453361 kubelet[2346]: I0509 00:21:53.453341 2346 state_mem.go:36] "Initialized new in-memory state store" May 9 
00:21:53.454161 kubelet[2346]: I0509 00:21:53.454140 2346 kubelet.go:400] "Attempting to sync node with API server" May 9 00:21:53.454161 kubelet[2346]: I0509 00:21:53.454163 2346 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 9 00:21:53.454646 kubelet[2346]: I0509 00:21:53.454412 2346 kubelet.go:312] "Adding apiserver pod source" May 9 00:21:53.454646 kubelet[2346]: I0509 00:21:53.454519 2346 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 9 00:21:53.455468 kubelet[2346]: I0509 00:21:53.455439 2346 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 9 00:21:53.455548 kubelet[2346]: W0509 00:21:53.455503 2346 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.135:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused May 9 00:21:53.455601 kubelet[2346]: E0509 00:21:53.455558 2346 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.135:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused May 9 00:21:53.455690 kubelet[2346]: W0509 00:21:53.455650 2346 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused May 9 00:21:53.455778 kubelet[2346]: E0509 00:21:53.455748 2346 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused May 9 00:21:53.455852 kubelet[2346]: I0509 00:21:53.455837 2346 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 9 00:21:53.455967 kubelet[2346]: W0509 00:21:53.455939 2346 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
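[Annotation] Every reflector failure in this stretch is one symptom: each client-go list against https://10.0.0.135:6443 gets connection refused, because the kube-apiserver being dialed is itself one of the static pods this kubelet has not started yet. The errors are expected to repeat until that sandbox is up; two quick checks for the state (standard tools, for illustration):

ss -tlnp | grep 6443                       # nothing listening yet
curl -k https://10.0.0.135:6443/healthz    # refused until the apiserver starts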
May 9 00:21:53.456756 kubelet[2346]: I0509 00:21:53.456668 2346 server.go:1264] "Started kubelet" May 9 00:21:53.458406 kubelet[2346]: I0509 00:21:53.457783 2346 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 9 00:21:53.459820 kubelet[2346]: E0509 00:21:53.459651 2346 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.135:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.135:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183db3f937f4e83a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-09 00:21:53.456646202 +0000 UTC m=+1.061948334,LastTimestamp:2025-05-09 00:21:53.456646202 +0000 UTC m=+1.061948334,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 9 00:21:53.460690 kubelet[2346]: I0509 00:21:53.460657 2346 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 9 00:21:53.460921 kubelet[2346]: I0509 00:21:53.460819 2346 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 9 00:21:53.461017 kubelet[2346]: I0509 00:21:53.460999 2346 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 9 00:21:53.461148 kubelet[2346]: I0509 00:21:53.461131 2346 volume_manager.go:291] "Starting Kubelet Volume Manager" May 9 00:21:53.461211 kubelet[2346]: I0509 00:21:53.461199 2346 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 9 00:21:53.461651 kubelet[2346]: W0509 00:21:53.461603 2346 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused May 9 00:21:53.461651 kubelet[2346]: E0509 00:21:53.461652 2346 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused May 9 00:21:53.462060 kubelet[2346]: I0509 00:21:53.462032 2346 reconciler.go:26] "Reconciler: start to sync state" May 9 00:21:53.462123 kubelet[2346]: E0509 00:21:53.462082 2346 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.135:6443: connect: connection refused" interval="200ms" May 9 00:21:53.462250 kubelet[2346]: I0509 00:21:53.462232 2346 server.go:455] "Adding debug handlers to kubelet server" May 9 00:21:53.462881 kubelet[2346]: I0509 00:21:53.462850 2346 factory.go:221] Registration of the systemd container factory successfully May 9 00:21:53.462955 kubelet[2346]: I0509 00:21:53.462936 2346 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 9 00:21:53.463821 kubelet[2346]: I0509 00:21:53.463800 2346 factory.go:221] Registration of the containerd container factory 
successfully May 9 00:21:53.463892 kubelet[2346]: E0509 00:21:53.463860 2346 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 9 00:21:53.475659 kubelet[2346]: I0509 00:21:53.475628 2346 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 9 00:21:53.476720 kubelet[2346]: I0509 00:21:53.476689 2346 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 9 00:21:53.476856 kubelet[2346]: I0509 00:21:53.476843 2346 status_manager.go:217] "Starting to sync pod status with apiserver" May 9 00:21:53.476886 kubelet[2346]: I0509 00:21:53.476863 2346 kubelet.go:2337] "Starting kubelet main sync loop" May 9 00:21:53.476927 kubelet[2346]: E0509 00:21:53.476905 2346 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 9 00:21:53.480549 kubelet[2346]: W0509 00:21:53.480478 2346 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused May 9 00:21:53.480549 kubelet[2346]: E0509 00:21:53.480528 2346 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused May 9 00:21:53.482187 kubelet[2346]: I0509 00:21:53.482163 2346 cpu_manager.go:214] "Starting CPU manager" policy="none" May 9 00:21:53.482187 kubelet[2346]: I0509 00:21:53.482180 2346 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 9 00:21:53.482277 kubelet[2346]: I0509 00:21:53.482198 2346 state_mem.go:36] "Initialized new in-memory state store" May 9 00:21:53.484293 kubelet[2346]: I0509 00:21:53.484268 2346 policy_none.go:49] "None policy: Start" May 9 00:21:53.484873 kubelet[2346]: I0509 00:21:53.484857 2346 memory_manager.go:170] "Starting memorymanager" policy="None" May 9 00:21:53.484873 kubelet[2346]: I0509 00:21:53.484880 2346 state_mem.go:35] "Initializing new in-memory state store" May 9 00:21:53.489325 kubelet[2346]: I0509 00:21:53.488616 2346 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 9 00:21:53.489325 kubelet[2346]: I0509 00:21:53.488778 2346 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 9 00:21:53.489325 kubelet[2346]: I0509 00:21:53.488859 2346 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 9 00:21:53.490408 kubelet[2346]: E0509 00:21:53.490380 2346 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 9 00:21:53.562337 kubelet[2346]: I0509 00:21:53.562312 2346 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 9 00:21:53.562690 kubelet[2346]: E0509 00:21:53.562644 2346 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.135:6443/api/v1/nodes\": dial tcp 10.0.0.135:6443: connect: connection refused" node="localhost" May 9 00:21:53.577940 kubelet[2346]: I0509 00:21:53.577916 2346 topology_manager.go:215] "Topology Admit Handler" 
podUID="756f3758bb014402cac7963cd7a66536" podNamespace="kube-system" podName="kube-apiserver-localhost" May 9 00:21:53.579413 kubelet[2346]: I0509 00:21:53.578899 2346 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 9 00:21:53.580230 kubelet[2346]: I0509 00:21:53.580133 2346 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 9 00:21:53.662830 kubelet[2346]: E0509 00:21:53.662708 2346 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.135:6443: connect: connection refused" interval="400ms" May 9 00:21:53.763091 kubelet[2346]: I0509 00:21:53.763031 2346 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 9 00:21:53.763235 kubelet[2346]: I0509 00:21:53.763143 2346 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/756f3758bb014402cac7963cd7a66536-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"756f3758bb014402cac7963cd7a66536\") " pod="kube-system/kube-apiserver-localhost" May 9 00:21:53.763235 kubelet[2346]: I0509 00:21:53.763176 2346 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:21:53.763235 kubelet[2346]: I0509 00:21:53.763194 2346 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:21:53.763235 kubelet[2346]: I0509 00:21:53.763210 2346 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:21:53.763235 kubelet[2346]: I0509 00:21:53.763225 2346 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:21:53.763562 kubelet[2346]: I0509 00:21:53.763429 2346 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod 
\"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:21:53.763562 kubelet[2346]: I0509 00:21:53.763510 2346 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/756f3758bb014402cac7963cd7a66536-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"756f3758bb014402cac7963cd7a66536\") " pod="kube-system/kube-apiserver-localhost" May 9 00:21:53.763562 kubelet[2346]: I0509 00:21:53.763528 2346 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/756f3758bb014402cac7963cd7a66536-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"756f3758bb014402cac7963cd7a66536\") " pod="kube-system/kube-apiserver-localhost" May 9 00:21:53.764235 kubelet[2346]: I0509 00:21:53.763948 2346 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 9 00:21:53.764307 kubelet[2346]: E0509 00:21:53.764280 2346 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.135:6443/api/v1/nodes\": dial tcp 10.0.0.135:6443: connect: connection refused" node="localhost" May 9 00:21:53.882905 kubelet[2346]: E0509 00:21:53.882873 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:21:53.883552 containerd[1538]: time="2025-05-09T00:21:53.883506648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:756f3758bb014402cac7963cd7a66536,Namespace:kube-system,Attempt:0,}" May 9 00:21:53.884275 containerd[1538]: time="2025-05-09T00:21:53.883900622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 9 00:21:53.884321 kubelet[2346]: E0509 00:21:53.883542 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:21:53.885143 kubelet[2346]: E0509 00:21:53.885071 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:21:53.885401 containerd[1538]: time="2025-05-09T00:21:53.885365036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 9 00:21:54.064135 kubelet[2346]: E0509 00:21:54.064090 2346 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.135:6443: connect: connection refused" interval="800ms" May 9 00:21:54.165518 kubelet[2346]: I0509 00:21:54.165478 2346 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 9 00:21:54.165823 kubelet[2346]: E0509 00:21:54.165796 2346 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.135:6443/api/v1/nodes\": dial tcp 10.0.0.135:6443: connect: connection refused" node="localhost" May 9 00:21:54.337783 kubelet[2346]: W0509 00:21:54.337657 2346 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused May 9 00:21:54.337783 kubelet[2346]: E0509 00:21:54.337720 2346 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused May 9 00:21:54.428878 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3661509308.mount: Deactivated successfully. May 9 00:21:54.433654 containerd[1538]: time="2025-05-09T00:21:54.433608077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:21:54.435223 containerd[1538]: time="2025-05-09T00:21:54.435182909Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 9 00:21:54.435875 containerd[1538]: time="2025-05-09T00:21:54.435846238Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:21:54.437627 containerd[1538]: time="2025-05-09T00:21:54.437539520Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:21:54.438374 containerd[1538]: time="2025-05-09T00:21:54.438323418Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:21:54.439174 containerd[1538]: time="2025-05-09T00:21:54.439145613Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 9 00:21:54.439602 containerd[1538]: time="2025-05-09T00:21:54.439567605Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" May 9 00:21:54.441364 containerd[1538]: time="2025-05-09T00:21:54.441331765Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:21:54.445282 containerd[1538]: time="2025-05-09T00:21:54.445005680Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 559.571809ms" May 9 00:21:54.445788 containerd[1538]: time="2025-05-09T00:21:54.445742526Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 561.784101ms" May 9 00:21:54.447707 containerd[1538]: time="2025-05-09T00:21:54.447571648Z" 
level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 563.943841ms" May 9 00:21:54.550749 kubelet[2346]: W0509 00:21:54.545341 2346 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused May 9 00:21:54.550749 kubelet[2346]: E0509 00:21:54.545405 2346 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused May 9 00:21:54.597980 containerd[1538]: time="2025-05-09T00:21:54.597803994Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:21:54.598228 containerd[1538]: time="2025-05-09T00:21:54.597861360Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:21:54.598670 containerd[1538]: time="2025-05-09T00:21:54.598388490Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:21:54.598670 containerd[1538]: time="2025-05-09T00:21:54.598432783Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:21:54.598670 containerd[1538]: time="2025-05-09T00:21:54.598468962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:21:54.598814 containerd[1538]: time="2025-05-09T00:21:54.598295464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:21:54.599675 containerd[1538]: time="2025-05-09T00:21:54.599608970Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:21:54.599675 containerd[1538]: time="2025-05-09T00:21:54.599651545Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:21:54.599775 containerd[1538]: time="2025-05-09T00:21:54.599675611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:21:54.599800 containerd[1538]: time="2025-05-09T00:21:54.599769276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:21:54.600255 containerd[1538]: time="2025-05-09T00:21:54.600214214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:21:54.600402 containerd[1538]: time="2025-05-09T00:21:54.600369122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:21:54.646098 containerd[1538]: time="2025-05-09T00:21:54.645697850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"080be7d9d09b6b6784aa58341f08c0fa0fa1e8a0ad614b4e5bef8d95a63bcf4d\"" May 9 00:21:54.646496 containerd[1538]: time="2025-05-09T00:21:54.646416946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:756f3758bb014402cac7963cd7a66536,Namespace:kube-system,Attempt:0,} returns sandbox id \"d255425d27f899d709b8e377c701565f8588e1cb7fc8e615f9085b380e832a6a\"" May 9 00:21:54.647230 kubelet[2346]: E0509 00:21:54.647205 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:21:54.647230 kubelet[2346]: E0509 00:21:54.647219 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:21:54.647757 containerd[1538]: time="2025-05-09T00:21:54.647552876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"6691bc75232e33e98a903c7cb932a7721931ddd9516c066533d784719ff911a0\"" May 9 00:21:54.648025 kubelet[2346]: E0509 00:21:54.648008 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:21:54.650394 containerd[1538]: time="2025-05-09T00:21:54.650311331Z" level=info msg="CreateContainer within sandbox \"080be7d9d09b6b6784aa58341f08c0fa0fa1e8a0ad614b4e5bef8d95a63bcf4d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 9 00:21:54.650394 containerd[1538]: time="2025-05-09T00:21:54.650347509Z" level=info msg="CreateContainer within sandbox \"d255425d27f899d709b8e377c701565f8588e1cb7fc8e615f9085b380e832a6a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 9 00:21:54.650626 containerd[1538]: time="2025-05-09T00:21:54.650604438Z" level=info msg="CreateContainer within sandbox \"6691bc75232e33e98a903c7cb932a7721931ddd9516c066533d784719ff911a0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 9 00:21:54.666413 containerd[1538]: time="2025-05-09T00:21:54.666374345Z" level=info msg="CreateContainer within sandbox \"080be7d9d09b6b6784aa58341f08c0fa0fa1e8a0ad614b4e5bef8d95a63bcf4d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"237ad6382f0c60c6645851a39807a1c4cb24eb1126eea53a26dc705ee366cf65\"" May 9 00:21:54.667312 containerd[1538]: time="2025-05-09T00:21:54.666981827Z" level=info msg="StartContainer for \"237ad6382f0c60c6645851a39807a1c4cb24eb1126eea53a26dc705ee366cf65\"" May 9 00:21:54.668272 containerd[1538]: time="2025-05-09T00:21:54.668223935Z" level=info msg="CreateContainer within sandbox \"6691bc75232e33e98a903c7cb932a7721931ddd9516c066533d784719ff911a0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a712d3a9a3669bfd87c4bc8b839aefb42eb951fd02edc927047e8966e24fb536\"" May 9 00:21:54.668781 containerd[1538]: time="2025-05-09T00:21:54.668753183Z" level=info msg="StartContainer for 
\"a712d3a9a3669bfd87c4bc8b839aefb42eb951fd02edc927047e8966e24fb536\"" May 9 00:21:54.669246 containerd[1538]: time="2025-05-09T00:21:54.669209674Z" level=info msg="CreateContainer within sandbox \"d255425d27f899d709b8e377c701565f8588e1cb7fc8e615f9085b380e832a6a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"dd5be9bb9fd75b96ba26bc67f666b9fca7c3dbb31382024923a6c98749a199a0\"" May 9 00:21:54.669910 containerd[1538]: time="2025-05-09T00:21:54.669884676Z" level=info msg="StartContainer for \"dd5be9bb9fd75b96ba26bc67f666b9fca7c3dbb31382024923a6c98749a199a0\"" May 9 00:21:54.746449 containerd[1538]: time="2025-05-09T00:21:54.744718375Z" level=info msg="StartContainer for \"a712d3a9a3669bfd87c4bc8b839aefb42eb951fd02edc927047e8966e24fb536\" returns successfully" May 9 00:21:54.746449 containerd[1538]: time="2025-05-09T00:21:54.744878481Z" level=info msg="StartContainer for \"237ad6382f0c60c6645851a39807a1c4cb24eb1126eea53a26dc705ee366cf65\" returns successfully" May 9 00:21:54.746449 containerd[1538]: time="2025-05-09T00:21:54.744904666Z" level=info msg="StartContainer for \"dd5be9bb9fd75b96ba26bc67f666b9fca7c3dbb31382024923a6c98749a199a0\" returns successfully" May 9 00:21:54.806804 kubelet[2346]: W0509 00:21:54.806682 2346 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.135:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused May 9 00:21:54.806804 kubelet[2346]: E0509 00:21:54.806755 2346 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.135:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused May 9 00:21:54.864697 kubelet[2346]: E0509 00:21:54.864545 2346 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.135:6443: connect: connection refused" interval="1.6s" May 9 00:21:54.907215 kubelet[2346]: W0509 00:21:54.906985 2346 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused May 9 00:21:54.907215 kubelet[2346]: E0509 00:21:54.907055 2346 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused May 9 00:21:54.969509 kubelet[2346]: I0509 00:21:54.969468 2346 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 9 00:21:55.490803 kubelet[2346]: E0509 00:21:55.490727 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:21:55.504621 kubelet[2346]: E0509 00:21:55.492623 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:21:55.505858 kubelet[2346]: E0509 00:21:55.505742 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:21:56.503821 kubelet[2346]: E0509 00:21:56.503790 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:21:56.702287 kubelet[2346]: E0509 00:21:56.702249 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:21:56.872654 kubelet[2346]: E0509 00:21:56.868078 2346 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 9 00:21:56.938011 kubelet[2346]: I0509 00:21:56.937969 2346 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 9 00:21:56.945044 kubelet[2346]: E0509 00:21:56.945011 2346 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:21:56.986341 kubelet[2346]: E0509 00:21:56.986213 2346 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.183db3f937f4e83a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-09 00:21:53.456646202 +0000 UTC m=+1.061948334,LastTimestamp:2025-05-09 00:21:53.456646202 +0000 UTC m=+1.061948334,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 9 00:21:57.046599 kubelet[2346]: E0509 00:21:57.045244 2346 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:21:57.145477 kubelet[2346]: E0509 00:21:57.145368 2346 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:21:57.245882 kubelet[2346]: E0509 00:21:57.245834 2346 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:21:57.346323 kubelet[2346]: E0509 00:21:57.346289 2346 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:21:57.447054 kubelet[2346]: E0509 00:21:57.447018 2346 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:21:57.500499 kubelet[2346]: E0509 00:21:57.500456 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:21:57.547319 kubelet[2346]: E0509 00:21:57.547283 2346 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:21:57.647836 kubelet[2346]: E0509 00:21:57.647792 2346 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:21:58.457741 kubelet[2346]: I0509 00:21:58.457704 2346 apiserver.go:52] "Watching apiserver" May 9 00:21:58.462371 kubelet[2346]: I0509 00:21:58.462187 2346 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 9 00:21:58.849429 
systemd[1]: Reloading requested from client PID 2626 ('systemctl') (unit session-7.scope)... May 9 00:21:58.849448 systemd[1]: Reloading... May 9 00:21:58.909632 zram_generator::config[2668]: No configuration found. May 9 00:21:59.080257 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 00:21:59.139492 systemd[1]: Reloading finished in 289 ms. May 9 00:21:59.169179 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:21:59.178696 systemd[1]: kubelet.service: Deactivated successfully. May 9 00:21:59.179041 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:21:59.185912 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:21:59.271985 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:21:59.275736 (kubelet)[2717]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 9 00:21:59.312242 kubelet[2717]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 00:21:59.312242 kubelet[2717]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 9 00:21:59.312242 kubelet[2717]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 00:21:59.312618 kubelet[2717]: I0509 00:21:59.312278 2717 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 9 00:21:59.318263 kubelet[2717]: I0509 00:21:59.318212 2717 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 9 00:21:59.318263 kubelet[2717]: I0509 00:21:59.318239 2717 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 9 00:21:59.318409 kubelet[2717]: I0509 00:21:59.318384 2717 server.go:927] "Client rotation is on, will bootstrap in background" May 9 00:21:59.319661 kubelet[2717]: I0509 00:21:59.319636 2717 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 9 00:21:59.320832 kubelet[2717]: I0509 00:21:59.320806 2717 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 9 00:21:59.325529 kubelet[2717]: I0509 00:21:59.325510 2717 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 9 00:21:59.325938 kubelet[2717]: I0509 00:21:59.325914 2717 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 9 00:21:59.326082 kubelet[2717]: I0509 00:21:59.325942 2717 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 9 00:21:59.326161 kubelet[2717]: I0509 00:21:59.326089 2717 topology_manager.go:138] "Creating topology manager with none policy" May 9 00:21:59.326161 kubelet[2717]: I0509 00:21:59.326097 2717 container_manager_linux.go:301] "Creating device plugin manager" May 9 00:21:59.326161 kubelet[2717]: I0509 00:21:59.326126 2717 state_mem.go:36] "Initialized new in-memory state store" May 9 00:21:59.326243 kubelet[2717]: I0509 00:21:59.326230 2717 kubelet.go:400] "Attempting to sync node with API server" May 9 00:21:59.326268 kubelet[2717]: I0509 00:21:59.326247 2717 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 9 00:21:59.326637 kubelet[2717]: I0509 00:21:59.326621 2717 kubelet.go:312] "Adding apiserver pod source" May 9 00:21:59.326684 kubelet[2717]: I0509 00:21:59.326674 2717 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 9 00:21:59.327286 kubelet[2717]: I0509 00:21:59.327267 2717 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 9 00:21:59.327804 kubelet[2717]: I0509 00:21:59.327500 2717 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 9 00:21:59.328027 kubelet[2717]: I0509 00:21:59.328011 2717 server.go:1264] "Started kubelet" May 9 00:21:59.328205 kubelet[2717]: I0509 00:21:59.328164 2717 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 9 00:21:59.328440 kubelet[2717]: I0509 00:21:59.328394 2717 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 9 00:21:59.328949 kubelet[2717]: I0509 00:21:59.328928 2717 server.go:227] 
"Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 9 00:21:59.330952 kubelet[2717]: I0509 00:21:59.330921 2717 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 9 00:21:59.331118 kubelet[2717]: I0509 00:21:59.331091 2717 server.go:455] "Adding debug handlers to kubelet server" May 9 00:21:59.341551 kubelet[2717]: I0509 00:21:59.341462 2717 volume_manager.go:291] "Starting Kubelet Volume Manager" May 9 00:21:59.346599 kubelet[2717]: I0509 00:21:59.346569 2717 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 9 00:21:59.346947 kubelet[2717]: I0509 00:21:59.346924 2717 reconciler.go:26] "Reconciler: start to sync state" May 9 00:21:59.348943 kubelet[2717]: E0509 00:21:59.348925 2717 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 9 00:21:59.349725 kubelet[2717]: I0509 00:21:59.349668 2717 factory.go:221] Registration of the containerd container factory successfully May 9 00:21:59.349725 kubelet[2717]: I0509 00:21:59.349687 2717 factory.go:221] Registration of the systemd container factory successfully May 9 00:21:59.349905 kubelet[2717]: I0509 00:21:59.349769 2717 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 9 00:21:59.352234 kubelet[2717]: I0509 00:21:59.352190 2717 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 9 00:21:59.353338 kubelet[2717]: I0509 00:21:59.353315 2717 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 9 00:21:59.353405 kubelet[2717]: I0509 00:21:59.353350 2717 status_manager.go:217] "Starting to sync pod status with apiserver" May 9 00:21:59.353405 kubelet[2717]: I0509 00:21:59.353367 2717 kubelet.go:2337] "Starting kubelet main sync loop" May 9 00:21:59.353481 kubelet[2717]: E0509 00:21:59.353403 2717 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 9 00:21:59.385074 kubelet[2717]: I0509 00:21:59.384996 2717 cpu_manager.go:214] "Starting CPU manager" policy="none" May 9 00:21:59.385074 kubelet[2717]: I0509 00:21:59.385028 2717 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 9 00:21:59.385074 kubelet[2717]: I0509 00:21:59.385047 2717 state_mem.go:36] "Initialized new in-memory state store" May 9 00:21:59.385287 kubelet[2717]: I0509 00:21:59.385172 2717 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 9 00:21:59.385287 kubelet[2717]: I0509 00:21:59.385182 2717 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 9 00:21:59.385287 kubelet[2717]: I0509 00:21:59.385206 2717 policy_none.go:49] "None policy: Start" May 9 00:21:59.385740 kubelet[2717]: I0509 00:21:59.385719 2717 memory_manager.go:170] "Starting memorymanager" policy="None" May 9 00:21:59.386342 kubelet[2717]: I0509 00:21:59.385844 2717 state_mem.go:35] "Initializing new in-memory state store" May 9 00:21:59.386342 kubelet[2717]: I0509 00:21:59.386011 2717 state_mem.go:75] "Updated machine memory state" May 9 00:21:59.387094 kubelet[2717]: I0509 00:21:59.387058 2717 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 9 00:21:59.387257 kubelet[2717]: I0509 00:21:59.387217 2717 
container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 9 00:21:59.387332 kubelet[2717]: I0509 00:21:59.387321 2717 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 9 00:21:59.445185 kubelet[2717]: I0509 00:21:59.445164 2717 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 9 00:21:59.450228 kubelet[2717]: I0509 00:21:59.450206 2717 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 9 00:21:59.450405 kubelet[2717]: I0509 00:21:59.450270 2717 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 9 00:21:59.454144 kubelet[2717]: I0509 00:21:59.453520 2717 topology_manager.go:215] "Topology Admit Handler" podUID="756f3758bb014402cac7963cd7a66536" podNamespace="kube-system" podName="kube-apiserver-localhost" May 9 00:21:59.454609 kubelet[2717]: I0509 00:21:59.454570 2717 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 9 00:21:59.454659 kubelet[2717]: I0509 00:21:59.454643 2717 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 9 00:21:59.548680 kubelet[2717]: I0509 00:21:59.548605 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/756f3758bb014402cac7963cd7a66536-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"756f3758bb014402cac7963cd7a66536\") " pod="kube-system/kube-apiserver-localhost" May 9 00:21:59.548680 kubelet[2717]: I0509 00:21:59.548647 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/756f3758bb014402cac7963cd7a66536-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"756f3758bb014402cac7963cd7a66536\") " pod="kube-system/kube-apiserver-localhost" May 9 00:21:59.548680 kubelet[2717]: I0509 00:21:59.548668 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:21:59.548852 kubelet[2717]: I0509 00:21:59.548708 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:21:59.548852 kubelet[2717]: I0509 00:21:59.548762 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 9 00:21:59.548852 kubelet[2717]: I0509 00:21:59.548783 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/756f3758bb014402cac7963cd7a66536-k8s-certs\") pod 
\"kube-apiserver-localhost\" (UID: \"756f3758bb014402cac7963cd7a66536\") " pod="kube-system/kube-apiserver-localhost" May 9 00:21:59.548852 kubelet[2717]: I0509 00:21:59.548798 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:21:59.548852 kubelet[2717]: I0509 00:21:59.548812 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:21:59.548950 kubelet[2717]: I0509 00:21:59.548828 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:21:59.777465 kubelet[2717]: E0509 00:21:59.777317 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:21:59.777465 kubelet[2717]: E0509 00:21:59.777321 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:21:59.777465 kubelet[2717]: E0509 00:21:59.777328 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:22:00.327631 kubelet[2717]: I0509 00:22:00.327587 2717 apiserver.go:52] "Watching apiserver" May 9 00:22:00.346865 kubelet[2717]: I0509 00:22:00.346791 2717 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 9 00:22:00.365627 kubelet[2717]: E0509 00:22:00.365482 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:22:00.365627 kubelet[2717]: E0509 00:22:00.365516 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:22:00.365930 kubelet[2717]: E0509 00:22:00.365865 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:22:00.395598 kubelet[2717]: I0509 00:22:00.395366 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.395349412 podStartE2EDuration="1.395349412s" podCreationTimestamp="2025-05-09 00:21:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:22:00.382061885 +0000 UTC m=+1.103346566" watchObservedRunningTime="2025-05-09 
00:22:00.395349412 +0000 UTC m=+1.116634093" May 9 00:22:00.406787 kubelet[2717]: I0509 00:22:00.406724 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.406707249 podStartE2EDuration="1.406707249s" podCreationTimestamp="2025-05-09 00:21:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:22:00.395524766 +0000 UTC m=+1.116809447" watchObservedRunningTime="2025-05-09 00:22:00.406707249 +0000 UTC m=+1.127991930" May 9 00:22:00.426441 kubelet[2717]: I0509 00:22:00.426372 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.426356934 podStartE2EDuration="1.426356934s" podCreationTimestamp="2025-05-09 00:21:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:22:00.411540571 +0000 UTC m=+1.132825252" watchObservedRunningTime="2025-05-09 00:22:00.426356934 +0000 UTC m=+1.147641615" May 9 00:22:01.371001 kubelet[2717]: E0509 00:22:01.370925 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:22:03.966649 sudo[1749]: pam_unix(sudo:session): session closed for user root May 9 00:22:03.968641 sshd[1742]: pam_unix(sshd:session): session closed for user core May 9 00:22:03.972321 systemd[1]: sshd@6-10.0.0.135:22-10.0.0.1:45424.service: Deactivated successfully. May 9 00:22:03.975887 systemd[1]: session-7.scope: Deactivated successfully. May 9 00:22:03.977062 systemd-logind[1519]: Session 7 logged out. Waiting for processes to exit. May 9 00:22:03.978554 systemd-logind[1519]: Removed session 7. 
May 9 00:22:06.580947 kubelet[2717]: E0509 00:22:06.580908 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:22:06.755103 kubelet[2717]: E0509 00:22:06.754759 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:22:07.375973 kubelet[2717]: E0509 00:22:07.375896 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:22:07.375973 kubelet[2717]: E0509 00:22:07.375941 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:22:08.566367 kubelet[2717]: E0509 00:22:08.566255 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:22:09.379139 kubelet[2717]: E0509 00:22:09.379098 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:22:10.380825 kubelet[2717]: E0509 00:22:10.380793 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:22:12.762284 kubelet[2717]: I0509 00:22:12.762233 2717 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 9 00:22:12.773323 containerd[1538]: time="2025-05-09T00:22:12.773220837Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 9 00:22:12.773699 kubelet[2717]: I0509 00:22:12.773591 2717 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 9 00:22:13.617862 kubelet[2717]: I0509 00:22:13.617802 2717 topology_manager.go:215] "Topology Admit Handler" podUID="28c50c75-9461-405e-94c7-5ff1fb3c0857" podNamespace="kube-system" podName="kube-proxy-bzngp" May 9 00:22:13.650857 kubelet[2717]: I0509 00:22:13.650785 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq68q\" (UniqueName: \"kubernetes.io/projected/28c50c75-9461-405e-94c7-5ff1fb3c0857-kube-api-access-wq68q\") pod \"kube-proxy-bzngp\" (UID: \"28c50c75-9461-405e-94c7-5ff1fb3c0857\") " pod="kube-system/kube-proxy-bzngp" May 9 00:22:13.650857 kubelet[2717]: I0509 00:22:13.650826 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/28c50c75-9461-405e-94c7-5ff1fb3c0857-kube-proxy\") pod \"kube-proxy-bzngp\" (UID: \"28c50c75-9461-405e-94c7-5ff1fb3c0857\") " pod="kube-system/kube-proxy-bzngp" May 9 00:22:13.650857 kubelet[2717]: I0509 00:22:13.650846 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/28c50c75-9461-405e-94c7-5ff1fb3c0857-xtables-lock\") pod \"kube-proxy-bzngp\" (UID: \"28c50c75-9461-405e-94c7-5ff1fb3c0857\") " pod="kube-system/kube-proxy-bzngp" May 9 00:22:13.650857 kubelet[2717]: I0509 00:22:13.650862 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/28c50c75-9461-405e-94c7-5ff1fb3c0857-lib-modules\") pod \"kube-proxy-bzngp\" (UID: \"28c50c75-9461-405e-94c7-5ff1fb3c0857\") " pod="kube-system/kube-proxy-bzngp" May 9 00:22:13.878301 kubelet[2717]: I0509 00:22:13.876286 2717 topology_manager.go:215] "Topology Admit Handler" podUID="3d05a42f-3896-4722-87df-84e180750acf" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-lj52m" May 9 00:22:13.921393 kubelet[2717]: E0509 00:22:13.921359 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:22:13.925963 containerd[1538]: time="2025-05-09T00:22:13.925923268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bzngp,Uid:28c50c75-9461-405e-94c7-5ff1fb3c0857,Namespace:kube-system,Attempt:0,}" May 9 00:22:13.949487 containerd[1538]: time="2025-05-09T00:22:13.949400699Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:22:13.949487 containerd[1538]: time="2025-05-09T00:22:13.949451977Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:22:13.949487 containerd[1538]: time="2025-05-09T00:22:13.949471177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:22:13.949677 containerd[1538]: time="2025-05-09T00:22:13.949562134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:22:13.954034 kubelet[2717]: I0509 00:22:13.953965 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3d05a42f-3896-4722-87df-84e180750acf-var-lib-calico\") pod \"tigera-operator-797db67f8-lj52m\" (UID: \"3d05a42f-3896-4722-87df-84e180750acf\") " pod="tigera-operator/tigera-operator-797db67f8-lj52m" May 9 00:22:13.954034 kubelet[2717]: I0509 00:22:13.954007 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh7th\" (UniqueName: \"kubernetes.io/projected/3d05a42f-3896-4722-87df-84e180750acf-kube-api-access-kh7th\") pod \"tigera-operator-797db67f8-lj52m\" (UID: \"3d05a42f-3896-4722-87df-84e180750acf\") " pod="tigera-operator/tigera-operator-797db67f8-lj52m" May 9 00:22:13.977814 containerd[1538]: time="2025-05-09T00:22:13.977771250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bzngp,Uid:28c50c75-9461-405e-94c7-5ff1fb3c0857,Namespace:kube-system,Attempt:0,} returns sandbox id \"0039ef4bed5f1308eb0678faa9f3bb0287b55a9f840d1d97c802321a9ff888a4\"" May 9 00:22:13.982065 kubelet[2717]: E0509 00:22:13.982042 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:22:13.995978 containerd[1538]: time="2025-05-09T00:22:13.995928056Z" level=info msg="CreateContainer within sandbox \"0039ef4bed5f1308eb0678faa9f3bb0287b55a9f840d1d97c802321a9ff888a4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 9 00:22:14.006608 containerd[1538]: time="2025-05-09T00:22:14.006503238Z" level=info msg="CreateContainer within sandbox \"0039ef4bed5f1308eb0678faa9f3bb0287b55a9f840d1d97c802321a9ff888a4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"da105df4ca30a40c1881a7ef9a56c7097da1d93222aa87919e9f558ec5b971b0\"" May 9 00:22:14.009334 containerd[1538]: time="2025-05-09T00:22:14.009289711Z" level=info msg="StartContainer for \"da105df4ca30a40c1881a7ef9a56c7097da1d93222aa87919e9f558ec5b971b0\"" May 9 00:22:14.056151 containerd[1538]: time="2025-05-09T00:22:14.056111095Z" level=info msg="StartContainer for \"da105df4ca30a40c1881a7ef9a56c7097da1d93222aa87919e9f558ec5b971b0\" returns successfully" May 9 00:22:14.127681 update_engine[1521]: I20250509 00:22:14.127607 1521 update_attempter.cc:509] Updating boot flags... May 9 00:22:14.161823 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2896) May 9 00:22:14.179943 containerd[1538]: time="2025-05-09T00:22:14.179374741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-lj52m,Uid:3d05a42f-3896-4722-87df-84e180750acf,Namespace:tigera-operator,Attempt:0,}" May 9 00:22:14.190856 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2896) May 9 00:22:14.219799 containerd[1538]: time="2025-05-09T00:22:14.219689287Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:22:14.219799 containerd[1538]: time="2025-05-09T00:22:14.219755685Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:22:14.219799 containerd[1538]: time="2025-05-09T00:22:14.219766965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:22:14.220116 containerd[1538]: time="2025-05-09T00:22:14.219856802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:22:14.265843 containerd[1538]: time="2025-05-09T00:22:14.265662378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-lj52m,Uid:3d05a42f-3896-4722-87df-84e180750acf,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"38841b2ec88bf458774aeaeb11520c7488307c4e6a730e980d35596feb5aaca0\"" May 9 00:22:14.276135 containerd[1538]: time="2025-05-09T00:22:14.276042415Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 9 00:22:14.391928 kubelet[2717]: E0509 00:22:14.391811 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:22:15.595239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3209220490.mount: Deactivated successfully. May 9 00:22:15.840002 containerd[1538]: time="2025-05-09T00:22:15.839938617Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:22:15.840395 containerd[1538]: time="2025-05-09T00:22:15.840330766Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084" May 9 00:22:15.841182 containerd[1538]: time="2025-05-09T00:22:15.841146142Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:22:15.843338 containerd[1538]: time="2025-05-09T00:22:15.843308038Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:22:15.844220 containerd[1538]: time="2025-05-09T00:22:15.844189132Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 1.568107198s" May 9 00:22:15.844262 containerd[1538]: time="2025-05-09T00:22:15.844223451Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\"" May 9 00:22:15.848949 containerd[1538]: time="2025-05-09T00:22:15.848783396Z" level=info msg="CreateContainer within sandbox \"38841b2ec88bf458774aeaeb11520c7488307c4e6a730e980d35596feb5aaca0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 9 00:22:15.865033 containerd[1538]: time="2025-05-09T00:22:15.864987197Z" level=info msg="CreateContainer within sandbox \"38841b2ec88bf458774aeaeb11520c7488307c4e6a730e980d35596feb5aaca0\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1b167a8c8a846842e01fbb339de4532e0e4ccdb247714100f5d438f068b21854\"" May 9 00:22:15.865686 containerd[1538]: 
time="2025-05-09T00:22:15.865630498Z" level=info msg="StartContainer for \"1b167a8c8a846842e01fbb339de4532e0e4ccdb247714100f5d438f068b21854\"" May 9 00:22:15.911450 containerd[1538]: time="2025-05-09T00:22:15.911410144Z" level=info msg="StartContainer for \"1b167a8c8a846842e01fbb339de4532e0e4ccdb247714100f5d438f068b21854\" returns successfully" May 9 00:22:16.408281 kubelet[2717]: I0509 00:22:16.408214 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bzngp" podStartSLOduration=3.408158997 podStartE2EDuration="3.408158997s" podCreationTimestamp="2025-05-09 00:22:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:22:14.407661921 +0000 UTC m=+15.128946562" watchObservedRunningTime="2025-05-09 00:22:16.408158997 +0000 UTC m=+17.129443678" May 9 00:22:16.408866 kubelet[2717]: I0509 00:22:16.408381 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-lj52m" podStartSLOduration=1.8370824479999999 podStartE2EDuration="3.408375431s" podCreationTimestamp="2025-05-09 00:22:13 +0000 UTC" firstStartedPulling="2025-05-09 00:22:14.275615108 +0000 UTC m=+14.996899789" lastFinishedPulling="2025-05-09 00:22:15.846908091 +0000 UTC m=+16.568192772" observedRunningTime="2025-05-09 00:22:16.407593693 +0000 UTC m=+17.129465637" watchObservedRunningTime="2025-05-09 00:22:16.408375431 +0000 UTC m=+17.129660112" May 9 00:22:19.179341 kubelet[2717]: I0509 00:22:19.176924 2717 topology_manager.go:215] "Topology Admit Handler" podUID="7bf53bdf-c8a3-41e7-8650-326274418106" podNamespace="calico-system" podName="calico-typha-694774dcc6-jjfls" May 9 00:22:19.219009 kubelet[2717]: I0509 00:22:19.218956 2717 topology_manager.go:215] "Topology Admit Handler" podUID="a7704a0f-1bf2-4b38-bad9-ad8b4d0964df" podNamespace="calico-system" podName="calico-node-pt7mz" May 9 00:22:19.293575 kubelet[2717]: I0509 00:22:19.293514 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a7704a0f-1bf2-4b38-bad9-ad8b4d0964df-lib-modules\") pod \"calico-node-pt7mz\" (UID: \"a7704a0f-1bf2-4b38-bad9-ad8b4d0964df\") " pod="calico-system/calico-node-pt7mz" May 9 00:22:19.293899 kubelet[2717]: I0509 00:22:19.293604 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a7704a0f-1bf2-4b38-bad9-ad8b4d0964df-node-certs\") pod \"calico-node-pt7mz\" (UID: \"a7704a0f-1bf2-4b38-bad9-ad8b4d0964df\") " pod="calico-system/calico-node-pt7mz" May 9 00:22:19.293899 kubelet[2717]: I0509 00:22:19.293628 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a7704a0f-1bf2-4b38-bad9-ad8b4d0964df-cni-bin-dir\") pod \"calico-node-pt7mz\" (UID: \"a7704a0f-1bf2-4b38-bad9-ad8b4d0964df\") " pod="calico-system/calico-node-pt7mz" May 9 00:22:19.293899 kubelet[2717]: I0509 00:22:19.293694 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a7704a0f-1bf2-4b38-bad9-ad8b4d0964df-flexvol-driver-host\") pod \"calico-node-pt7mz\" (UID: \"a7704a0f-1bf2-4b38-bad9-ad8b4d0964df\") " pod="calico-system/calico-node-pt7mz" May 9 00:22:19.293899 kubelet[2717]: I0509 
00:22:19.293722 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5mbm\" (UniqueName: \"kubernetes.io/projected/7bf53bdf-c8a3-41e7-8650-326274418106-kube-api-access-c5mbm\") pod \"calico-typha-694774dcc6-jjfls\" (UID: \"7bf53bdf-c8a3-41e7-8650-326274418106\") " pod="calico-system/calico-typha-694774dcc6-jjfls" May 9 00:22:19.293899 kubelet[2717]: I0509 00:22:19.293738 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a7704a0f-1bf2-4b38-bad9-ad8b4d0964df-cni-net-dir\") pod \"calico-node-pt7mz\" (UID: \"a7704a0f-1bf2-4b38-bad9-ad8b4d0964df\") " pod="calico-system/calico-node-pt7mz" May 9 00:22:19.294029 kubelet[2717]: I0509 00:22:19.293772 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a7704a0f-1bf2-4b38-bad9-ad8b4d0964df-cni-log-dir\") pod \"calico-node-pt7mz\" (UID: \"a7704a0f-1bf2-4b38-bad9-ad8b4d0964df\") " pod="calico-system/calico-node-pt7mz" May 9 00:22:19.294029 kubelet[2717]: I0509 00:22:19.293791 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7bf53bdf-c8a3-41e7-8650-326274418106-tigera-ca-bundle\") pod \"calico-typha-694774dcc6-jjfls\" (UID: \"7bf53bdf-c8a3-41e7-8650-326274418106\") " pod="calico-system/calico-typha-694774dcc6-jjfls" May 9 00:22:19.294029 kubelet[2717]: I0509 00:22:19.293805 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a7704a0f-1bf2-4b38-bad9-ad8b4d0964df-xtables-lock\") pod \"calico-node-pt7mz\" (UID: \"a7704a0f-1bf2-4b38-bad9-ad8b4d0964df\") " pod="calico-system/calico-node-pt7mz" May 9 00:22:19.294029 kubelet[2717]: I0509 00:22:19.293820 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a7704a0f-1bf2-4b38-bad9-ad8b4d0964df-policysync\") pod \"calico-node-pt7mz\" (UID: \"a7704a0f-1bf2-4b38-bad9-ad8b4d0964df\") " pod="calico-system/calico-node-pt7mz" May 9 00:22:19.294029 kubelet[2717]: I0509 00:22:19.293856 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7704a0f-1bf2-4b38-bad9-ad8b4d0964df-tigera-ca-bundle\") pod \"calico-node-pt7mz\" (UID: \"a7704a0f-1bf2-4b38-bad9-ad8b4d0964df\") " pod="calico-system/calico-node-pt7mz" May 9 00:22:19.294136 kubelet[2717]: I0509 00:22:19.293885 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a7704a0f-1bf2-4b38-bad9-ad8b4d0964df-var-lib-calico\") pod \"calico-node-pt7mz\" (UID: \"a7704a0f-1bf2-4b38-bad9-ad8b4d0964df\") " pod="calico-system/calico-node-pt7mz" May 9 00:22:19.294136 kubelet[2717]: I0509 00:22:19.293903 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfn4n\" (UniqueName: \"kubernetes.io/projected/a7704a0f-1bf2-4b38-bad9-ad8b4d0964df-kube-api-access-xfn4n\") pod \"calico-node-pt7mz\" (UID: \"a7704a0f-1bf2-4b38-bad9-ad8b4d0964df\") " pod="calico-system/calico-node-pt7mz" May 9 00:22:19.294136 kubelet[2717]: I0509 00:22:19.293943 2717 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7bf53bdf-c8a3-41e7-8650-326274418106-typha-certs\") pod \"calico-typha-694774dcc6-jjfls\" (UID: \"7bf53bdf-c8a3-41e7-8650-326274418106\") " pod="calico-system/calico-typha-694774dcc6-jjfls" May 9 00:22:19.294136 kubelet[2717]: I0509 00:22:19.293960 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a7704a0f-1bf2-4b38-bad9-ad8b4d0964df-var-run-calico\") pod \"calico-node-pt7mz\" (UID: \"a7704a0f-1bf2-4b38-bad9-ad8b4d0964df\") " pod="calico-system/calico-node-pt7mz" May 9 00:22:19.335071 kubelet[2717]: I0509 00:22:19.334696 2717 topology_manager.go:215] "Topology Admit Handler" podUID="4c753734-0995-4fcd-9777-f094bc14fa3a" podNamespace="calico-system" podName="csi-node-driver-c5864" May 9 00:22:19.337466 kubelet[2717]: E0509 00:22:19.337312 2717 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c5864" podUID="4c753734-0995-4fcd-9777-f094bc14fa3a" May 9 00:22:19.396009 kubelet[2717]: E0509 00:22:19.395974 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.396009 kubelet[2717]: W0509 00:22:19.395995 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.396009 kubelet[2717]: E0509 00:22:19.396013 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.396228 kubelet[2717]: E0509 00:22:19.396203 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.396228 kubelet[2717]: W0509 00:22:19.396221 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.396283 kubelet[2717]: E0509 00:22:19.396232 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.396427 kubelet[2717]: E0509 00:22:19.396404 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.396427 kubelet[2717]: W0509 00:22:19.396414 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.396479 kubelet[2717]: E0509 00:22:19.396442 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 00:22:19.396682 kubelet[2717]: E0509 00:22:19.396660 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.396682 kubelet[2717]: W0509 00:22:19.396677 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.396751 kubelet[2717]: E0509 00:22:19.396695 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.398228 kubelet[2717]: E0509 00:22:19.398197 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.398228 kubelet[2717]: W0509 00:22:19.398214 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.398325 kubelet[2717]: E0509 00:22:19.398234 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.401214 kubelet[2717]: E0509 00:22:19.401178 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.401214 kubelet[2717]: W0509 00:22:19.401200 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.401413 kubelet[2717]: E0509 00:22:19.401396 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.401413 kubelet[2717]: W0509 00:22:19.401408 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.401621 kubelet[2717]: E0509 00:22:19.401603 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.401621 kubelet[2717]: W0509 00:22:19.401617 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.402274 kubelet[2717]: E0509 00:22:19.402212 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.402274 kubelet[2717]: E0509 00:22:19.402270 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 00:22:19.402354 kubelet[2717]: I0509 00:22:19.402293 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4c753734-0995-4fcd-9777-f094bc14fa3a-kubelet-dir\") pod \"csi-node-driver-c5864\" (UID: \"4c753734-0995-4fcd-9777-f094bc14fa3a\") " pod="calico-system/csi-node-driver-c5864" May 9 00:22:19.403270 kubelet[2717]: E0509 00:22:19.403249 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.403270 kubelet[2717]: W0509 00:22:19.403266 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.403270 kubelet[2717]: E0509 00:22:19.403281 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.403444 kubelet[2717]: E0509 00:22:19.403418 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.403444 kubelet[2717]: W0509 00:22:19.403441 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.403499 kubelet[2717]: E0509 00:22:19.403451 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.403596 kubelet[2717]: E0509 00:22:19.403572 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.403596 kubelet[2717]: W0509 00:22:19.403593 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.403658 kubelet[2717]: E0509 00:22:19.403603 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.403823 kubelet[2717]: E0509 00:22:19.403805 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.403823 kubelet[2717]: W0509 00:22:19.403819 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.403882 kubelet[2717]: E0509 00:22:19.403829 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.404758 kubelet[2717]: E0509 00:22:19.404722 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 00:22:19.405808 kubelet[2717]: E0509 00:22:19.404637 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.405808 kubelet[2717]: W0509 00:22:19.405807 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.405915 kubelet[2717]: E0509 00:22:19.405826 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.406052 kubelet[2717]: E0509 00:22:19.406032 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.406052 kubelet[2717]: W0509 00:22:19.406047 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.407614 kubelet[2717]: E0509 00:22:19.406132 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.407767 kubelet[2717]: E0509 00:22:19.407744 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.407767 kubelet[2717]: W0509 00:22:19.407760 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.407839 kubelet[2717]: E0509 00:22:19.407787 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.407839 kubelet[2717]: I0509 00:22:19.407823 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4c753734-0995-4fcd-9777-f094bc14fa3a-socket-dir\") pod \"csi-node-driver-c5864\" (UID: \"4c753734-0995-4fcd-9777-f094bc14fa3a\") " pod="calico-system/csi-node-driver-c5864" May 9 00:22:19.413602 kubelet[2717]: E0509 00:22:19.410707 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.413602 kubelet[2717]: W0509 00:22:19.410725 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.413602 kubelet[2717]: E0509 00:22:19.410906 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 00:22:19.413602 kubelet[2717]: E0509 00:22:19.410964 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.413602 kubelet[2717]: W0509 00:22:19.410971 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.413602 kubelet[2717]: E0509 00:22:19.411058 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.413602 kubelet[2717]: E0509 00:22:19.411376 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.413602 kubelet[2717]: W0509 00:22:19.411386 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.413602 kubelet[2717]: E0509 00:22:19.411537 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.414333 kubelet[2717]: E0509 00:22:19.414303 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.414333 kubelet[2717]: W0509 00:22:19.414320 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.414412 kubelet[2717]: E0509 00:22:19.414360 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.414526 kubelet[2717]: E0509 00:22:19.414510 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.414526 kubelet[2717]: W0509 00:22:19.414522 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.414595 kubelet[2717]: E0509 00:22:19.414566 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.414704 kubelet[2717]: E0509 00:22:19.414686 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.414704 kubelet[2717]: W0509 00:22:19.414699 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.414768 kubelet[2717]: E0509 00:22:19.414730 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 00:22:19.427622 kubelet[2717]: E0509 00:22:19.422644 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.427622 kubelet[2717]: W0509 00:22:19.422678 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.427622 kubelet[2717]: E0509 00:22:19.422700 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.427622 kubelet[2717]: E0509 00:22:19.422878 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.427622 kubelet[2717]: W0509 00:22:19.422886 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.427622 kubelet[2717]: E0509 00:22:19.422895 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.427622 kubelet[2717]: E0509 00:22:19.425751 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.427622 kubelet[2717]: W0509 00:22:19.425766 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.427622 kubelet[2717]: E0509 00:22:19.425906 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.427622 kubelet[2717]: W0509 00:22:19.425913 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.427912 kubelet[2717]: E0509 00:22:19.425924 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.427912 kubelet[2717]: E0509 00:22:19.426077 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.427912 kubelet[2717]: W0509 00:22:19.426084 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.427912 kubelet[2717]: E0509 00:22:19.426092 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 00:22:19.427912 kubelet[2717]: E0509 00:22:19.426215 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.427912 kubelet[2717]: W0509 00:22:19.426221 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.427912 kubelet[2717]: E0509 00:22:19.426228 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.427912 kubelet[2717]: E0509 00:22:19.426355 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.427912 kubelet[2717]: W0509 00:22:19.426362 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.427912 kubelet[2717]: E0509 00:22:19.426369 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.428106 kubelet[2717]: E0509 00:22:19.426382 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.428106 kubelet[2717]: E0509 00:22:19.426516 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.428106 kubelet[2717]: W0509 00:22:19.426530 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.428106 kubelet[2717]: E0509 00:22:19.426538 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.428106 kubelet[2717]: E0509 00:22:19.426690 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.428106 kubelet[2717]: W0509 00:22:19.426697 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.428106 kubelet[2717]: E0509 00:22:19.426705 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 00:22:19.431084 kubelet[2717]: E0509 00:22:19.429722 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.431084 kubelet[2717]: W0509 00:22:19.429741 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.431084 kubelet[2717]: E0509 00:22:19.429755 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.431084 kubelet[2717]: E0509 00:22:19.430008 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.431084 kubelet[2717]: W0509 00:22:19.430016 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.431084 kubelet[2717]: E0509 00:22:19.430039 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.431084 kubelet[2717]: E0509 00:22:19.430691 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.431084 kubelet[2717]: W0509 00:22:19.430701 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.431084 kubelet[2717]: E0509 00:22:19.430737 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.433839 kubelet[2717]: E0509 00:22:19.433670 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.433839 kubelet[2717]: W0509 00:22:19.433689 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.433839 kubelet[2717]: E0509 00:22:19.433758 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 00:22:19.434076 kubelet[2717]: E0509 00:22:19.434051 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.434076 kubelet[2717]: W0509 00:22:19.434073 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.434446 kubelet[2717]: E0509 00:22:19.434426 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.434484 kubelet[2717]: W0509 00:22:19.434446 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.434702 kubelet[2717]: E0509 00:22:19.434571 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.434806 kubelet[2717]: E0509 00:22:19.434780 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.436019 kubelet[2717]: E0509 00:22:19.434859 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.436019 kubelet[2717]: W0509 00:22:19.434873 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.436019 kubelet[2717]: E0509 00:22:19.434936 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.436019 kubelet[2717]: E0509 00:22:19.435417 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.436019 kubelet[2717]: W0509 00:22:19.435435 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.436019 kubelet[2717]: E0509 00:22:19.435562 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.436229 kubelet[2717]: E0509 00:22:19.436075 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.436229 kubelet[2717]: W0509 00:22:19.436086 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.436276 kubelet[2717]: E0509 00:22:19.436234 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 00:22:19.436504 kubelet[2717]: E0509 00:22:19.436461 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.436504 kubelet[2717]: W0509 00:22:19.436475 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.436601 kubelet[2717]: E0509 00:22:19.436567 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.436814 kubelet[2717]: E0509 00:22:19.436764 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.436814 kubelet[2717]: W0509 00:22:19.436796 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.436814 kubelet[2717]: E0509 00:22:19.436841 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.436996 kubelet[2717]: E0509 00:22:19.436987 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.436996 kubelet[2717]: W0509 00:22:19.436996 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.437199 kubelet[2717]: E0509 00:22:19.437178 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.437435 kubelet[2717]: E0509 00:22:19.437404 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.437435 kubelet[2717]: W0509 00:22:19.437426 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.437509 kubelet[2717]: E0509 00:22:19.437501 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 00:22:19.438283 kubelet[2717]: E0509 00:22:19.438020 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.438283 kubelet[2717]: W0509 00:22:19.438034 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.438874 kubelet[2717]: E0509 00:22:19.438305 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.438874 kubelet[2717]: W0509 00:22:19.438314 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.438874 kubelet[2717]: E0509 00:22:19.438727 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.438938 kubelet[2717]: E0509 00:22:19.438878 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.438938 kubelet[2717]: W0509 00:22:19.438890 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.439217 kubelet[2717]: E0509 00:22:19.439194 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.439217 kubelet[2717]: E0509 00:22:19.439195 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.439301 kubelet[2717]: I0509 00:22:19.439228 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4c753734-0995-4fcd-9777-f094bc14fa3a-registration-dir\") pod \"csi-node-driver-c5864\" (UID: \"4c753734-0995-4fcd-9777-f094bc14fa3a\") " pod="calico-system/csi-node-driver-c5864" May 9 00:22:19.439301 kubelet[2717]: E0509 00:22:19.439280 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.439301 kubelet[2717]: W0509 00:22:19.439287 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.439364 kubelet[2717]: E0509 00:22:19.439354 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 00:22:19.440746 kubelet[2717]: E0509 00:22:19.440111 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.440746 kubelet[2717]: W0509 00:22:19.440127 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.440746 kubelet[2717]: E0509 00:22:19.440263 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.440746 kubelet[2717]: E0509 00:22:19.440444 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.440746 kubelet[2717]: W0509 00:22:19.440637 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.440924 kubelet[2717]: E0509 00:22:19.440898 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.442757 kubelet[2717]: E0509 00:22:19.442631 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.442757 kubelet[2717]: W0509 00:22:19.442650 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.443004 kubelet[2717]: E0509 00:22:19.442984 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.443459 kubelet[2717]: E0509 00:22:19.443444 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.443516 kubelet[2717]: W0509 00:22:19.443457 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.443620 kubelet[2717]: E0509 00:22:19.443560 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.443745 kubelet[2717]: E0509 00:22:19.443727 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.443745 kubelet[2717]: W0509 00:22:19.443745 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.443939 kubelet[2717]: E0509 00:22:19.443835 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 00:22:19.444300 kubelet[2717]: E0509 00:22:19.444282 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.444300 kubelet[2717]: W0509 00:22:19.444296 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.444300 kubelet[2717]: E0509 00:22:19.444325 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.444479 kubelet[2717]: E0509 00:22:19.444465 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.444479 kubelet[2717]: W0509 00:22:19.444476 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.444543 kubelet[2717]: E0509 00:22:19.444522 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.444672 kubelet[2717]: E0509 00:22:19.444659 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.444672 kubelet[2717]: W0509 00:22:19.444672 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.444799 kubelet[2717]: E0509 00:22:19.444754 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.444831 kubelet[2717]: E0509 00:22:19.444825 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.444869 kubelet[2717]: W0509 00:22:19.444834 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.444869 kubelet[2717]: E0509 00:22:19.444852 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.444990 kubelet[2717]: E0509 00:22:19.444976 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.444990 kubelet[2717]: W0509 00:22:19.444989 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.445040 kubelet[2717]: E0509 00:22:19.445003 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 00:22:19.445164 kubelet[2717]: E0509 00:22:19.445152 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.445164 kubelet[2717]: W0509 00:22:19.445164 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.445257 kubelet[2717]: E0509 00:22:19.445239 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.445289 kubelet[2717]: E0509 00:22:19.445281 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.445331 kubelet[2717]: W0509 00:22:19.445289 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.445390 kubelet[2717]: E0509 00:22:19.445368 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.445774 kubelet[2717]: E0509 00:22:19.445723 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.445774 kubelet[2717]: W0509 00:22:19.445740 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.445774 kubelet[2717]: E0509 00:22:19.445752 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.447310 kubelet[2717]: E0509 00:22:19.445960 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.447310 kubelet[2717]: W0509 00:22:19.445969 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.447310 kubelet[2717]: E0509 00:22:19.445983 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.447310 kubelet[2717]: E0509 00:22:19.446236 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.447310 kubelet[2717]: W0509 00:22:19.446248 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.447310 kubelet[2717]: E0509 00:22:19.446260 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 00:22:19.447310 kubelet[2717]: E0509 00:22:19.446433 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.447310 kubelet[2717]: W0509 00:22:19.446440 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.447310 kubelet[2717]: E0509 00:22:19.446449 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.447310 kubelet[2717]: E0509 00:22:19.446650 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.448369 kubelet[2717]: W0509 00:22:19.446658 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.448369 kubelet[2717]: E0509 00:22:19.446668 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.448369 kubelet[2717]: E0509 00:22:19.446812 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.448369 kubelet[2717]: W0509 00:22:19.446820 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.448369 kubelet[2717]: E0509 00:22:19.446828 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.448369 kubelet[2717]: E0509 00:22:19.447038 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.448369 kubelet[2717]: W0509 00:22:19.447050 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.448369 kubelet[2717]: E0509 00:22:19.447062 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.448369 kubelet[2717]: E0509 00:22:19.447208 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.448369 kubelet[2717]: W0509 00:22:19.447216 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.450856 kubelet[2717]: E0509 00:22:19.447225 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 00:22:19.450856 kubelet[2717]: E0509 00:22:19.447381 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.450856 kubelet[2717]: W0509 00:22:19.447389 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.450856 kubelet[2717]: E0509 00:22:19.447399 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.450856 kubelet[2717]: E0509 00:22:19.447734 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.450856 kubelet[2717]: W0509 00:22:19.447745 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.450856 kubelet[2717]: E0509 00:22:19.447760 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.450856 kubelet[2717]: E0509 00:22:19.448696 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.450856 kubelet[2717]: W0509 00:22:19.448705 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.450856 kubelet[2717]: E0509 00:22:19.448714 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.451046 kubelet[2717]: E0509 00:22:19.448896 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.451046 kubelet[2717]: W0509 00:22:19.448910 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.451046 kubelet[2717]: E0509 00:22:19.448928 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.451046 kubelet[2717]: E0509 00:22:19.449082 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.451046 kubelet[2717]: W0509 00:22:19.449089 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.451046 kubelet[2717]: E0509 00:22:19.449107 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 00:22:19.451046 kubelet[2717]: E0509 00:22:19.449224 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.451046 kubelet[2717]: W0509 00:22:19.449231 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.451046 kubelet[2717]: E0509 00:22:19.449247 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.451046 kubelet[2717]: E0509 00:22:19.449357 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.451248 kubelet[2717]: W0509 00:22:19.449367 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.451248 kubelet[2717]: E0509 00:22:19.449380 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.451248 kubelet[2717]: I0509 00:22:19.449402 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4c753734-0995-4fcd-9777-f094bc14fa3a-varrun\") pod \"csi-node-driver-c5864\" (UID: \"4c753734-0995-4fcd-9777-f094bc14fa3a\") " pod="calico-system/csi-node-driver-c5864" May 9 00:22:19.451248 kubelet[2717]: E0509 00:22:19.449753 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.451248 kubelet[2717]: W0509 00:22:19.449765 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.451248 kubelet[2717]: E0509 00:22:19.449781 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.451248 kubelet[2717]: I0509 00:22:19.449799 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nv68\" (UniqueName: \"kubernetes.io/projected/4c753734-0995-4fcd-9777-f094bc14fa3a-kube-api-access-7nv68\") pod \"csi-node-driver-c5864\" (UID: \"4c753734-0995-4fcd-9777-f094bc14fa3a\") " pod="calico-system/csi-node-driver-c5864" May 9 00:22:19.451701 kubelet[2717]: E0509 00:22:19.451675 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.451701 kubelet[2717]: W0509 00:22:19.451695 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.451840 kubelet[2717]: E0509 00:22:19.451752 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 00:22:19.452023 kubelet[2717]: E0509 00:22:19.452011 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.452023 kubelet[2717]: W0509 00:22:19.452022 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.452098 kubelet[2717]: E0509 00:22:19.452050 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.452200 kubelet[2717]: E0509 00:22:19.452187 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.452200 kubelet[2717]: W0509 00:22:19.452197 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.452317 kubelet[2717]: E0509 00:22:19.452243 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.452356 kubelet[2717]: E0509 00:22:19.452333 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.452356 kubelet[2717]: W0509 00:22:19.452340 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.452433 kubelet[2717]: E0509 00:22:19.452414 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.452508 kubelet[2717]: E0509 00:22:19.452495 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.452508 kubelet[2717]: W0509 00:22:19.452505 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.452587 kubelet[2717]: E0509 00:22:19.452567 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.452728 kubelet[2717]: E0509 00:22:19.452716 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.452728 kubelet[2717]: W0509 00:22:19.452725 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.452804 kubelet[2717]: E0509 00:22:19.452791 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 00:22:19.452936 kubelet[2717]: E0509 00:22:19.452927 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.452936 kubelet[2717]: W0509 00:22:19.452935 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.452989 kubelet[2717]: E0509 00:22:19.452946 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.453154 kubelet[2717]: E0509 00:22:19.453141 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.453191 kubelet[2717]: W0509 00:22:19.453154 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.453191 kubelet[2717]: E0509 00:22:19.453173 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.453349 kubelet[2717]: E0509 00:22:19.453336 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.453349 kubelet[2717]: W0509 00:22:19.453349 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.453400 kubelet[2717]: E0509 00:22:19.453360 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.453564 kubelet[2717]: E0509 00:22:19.453550 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.453564 kubelet[2717]: W0509 00:22:19.453562 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.453644 kubelet[2717]: E0509 00:22:19.453575 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.453982 kubelet[2717]: E0509 00:22:19.453891 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.453982 kubelet[2717]: W0509 00:22:19.453905 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.453982 kubelet[2717]: E0509 00:22:19.453917 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 00:22:19.455345 kubelet[2717]: E0509 00:22:19.455258 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.455345 kubelet[2717]: W0509 00:22:19.455309 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.455788 kubelet[2717]: E0509 00:22:19.455367 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.456020 kubelet[2717]: E0509 00:22:19.455993 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.456210 kubelet[2717]: W0509 00:22:19.456182 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.456250 kubelet[2717]: E0509 00:22:19.456212 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.458375 kubelet[2717]: E0509 00:22:19.458342 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.458375 kubelet[2717]: W0509 00:22:19.458360 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.458375 kubelet[2717]: E0509 00:22:19.458378 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.459470 kubelet[2717]: E0509 00:22:19.459443 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.459470 kubelet[2717]: W0509 00:22:19.459462 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.459520 kubelet[2717]: E0509 00:22:19.459477 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.460220 kubelet[2717]: E0509 00:22:19.460198 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.460220 kubelet[2717]: W0509 00:22:19.460217 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.460271 kubelet[2717]: E0509 00:22:19.460230 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 00:22:19.493042 kubelet[2717]: E0509 00:22:19.492998 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:22:19.493705 containerd[1538]: time="2025-05-09T00:22:19.493659384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-694774dcc6-jjfls,Uid:7bf53bdf-c8a3-41e7-8650-326274418106,Namespace:calico-system,Attempt:0,}" May 9 00:22:19.513488 containerd[1538]: time="2025-05-09T00:22:19.513215068Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:22:19.513488 containerd[1538]: time="2025-05-09T00:22:19.513274506Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:22:19.513488 containerd[1538]: time="2025-05-09T00:22:19.513290266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:22:19.513488 containerd[1538]: time="2025-05-09T00:22:19.513375704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:22:19.522137 kubelet[2717]: E0509 00:22:19.522091 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:22:19.522924 containerd[1538]: time="2025-05-09T00:22:19.522778955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pt7mz,Uid:a7704a0f-1bf2-4b38-bad9-ad8b4d0964df,Namespace:calico-system,Attempt:0,}" May 9 00:22:19.552341 kubelet[2717]: E0509 00:22:19.551517 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.552341 kubelet[2717]: W0509 00:22:19.551543 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.552341 kubelet[2717]: E0509 00:22:19.551563 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.552341 kubelet[2717]: E0509 00:22:19.551825 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.552341 kubelet[2717]: W0509 00:22:19.551835 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.552341 kubelet[2717]: E0509 00:22:19.551847 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 00:22:19.552341 kubelet[2717]: E0509 00:22:19.552142 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.552341 kubelet[2717]: W0509 00:22:19.552190 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.552341 kubelet[2717]: E0509 00:22:19.552217 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.553206 kubelet[2717]: E0509 00:22:19.552590 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.553206 kubelet[2717]: W0509 00:22:19.552602 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.553206 kubelet[2717]: E0509 00:22:19.552613 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.554067 kubelet[2717]: E0509 00:22:19.553672 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.554067 kubelet[2717]: W0509 00:22:19.553686 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.554067 kubelet[2717]: E0509 00:22:19.553823 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.554352 kubelet[2717]: E0509 00:22:19.554256 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.554352 kubelet[2717]: W0509 00:22:19.554270 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.554352 kubelet[2717]: E0509 00:22:19.554314 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.554690 kubelet[2717]: E0509 00:22:19.554535 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.554690 kubelet[2717]: W0509 00:22:19.554548 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.554690 kubelet[2717]: E0509 00:22:19.554628 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 00:22:19.554948 kubelet[2717]: E0509 00:22:19.554929 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.555098 kubelet[2717]: W0509 00:22:19.555002 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.555098 kubelet[2717]: E0509 00:22:19.555041 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.555245 kubelet[2717]: E0509 00:22:19.555232 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.555303 kubelet[2717]: W0509 00:22:19.555292 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.555430 kubelet[2717]: E0509 00:22:19.555399 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.555785 kubelet[2717]: E0509 00:22:19.555660 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.555785 kubelet[2717]: W0509 00:22:19.555675 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.555785 kubelet[2717]: E0509 00:22:19.555707 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.556034 kubelet[2717]: E0509 00:22:19.556019 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.556197 kubelet[2717]: W0509 00:22:19.556099 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.556197 kubelet[2717]: E0509 00:22:19.556152 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 00:22:19.556278 containerd[1538]: time="2025-05-09T00:22:19.556239060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-694774dcc6-jjfls,Uid:7bf53bdf-c8a3-41e7-8650-326274418106,Namespace:calico-system,Attempt:0,} returns sandbox id \"0ff999c8bc2d20cb995574e7bc652071dd444f576a78e604bc91f123b55a4e6b\"" May 9 00:22:19.556551 kubelet[2717]: E0509 00:22:19.556461 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.556551 kubelet[2717]: W0509 00:22:19.556474 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.556551 kubelet[2717]: E0509 00:22:19.556531 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.556741 kubelet[2717]: E0509 00:22:19.556727 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.556982 kubelet[2717]: W0509 00:22:19.556964 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.557086 kubelet[2717]: E0509 00:22:19.557060 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:22:19.557247 kubelet[2717]: E0509 00:22:19.557064 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.558321 kubelet[2717]: E0509 00:22:19.558266 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.558321 kubelet[2717]: W0509 00:22:19.558280 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.558392 kubelet[2717]: E0509 00:22:19.558380 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.558517 containerd[1538]: time="2025-05-09T00:22:19.558478646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 9 00:22:19.558611 kubelet[2717]: E0509 00:22:19.558594 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.558644 kubelet[2717]: W0509 00:22:19.558612 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.558703 kubelet[2717]: E0509 00:22:19.558676 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 00:22:19.558852 kubelet[2717]: E0509 00:22:19.558832 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.558852 kubelet[2717]: W0509 00:22:19.558847 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.558924 kubelet[2717]: E0509 00:22:19.558895 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.559937 kubelet[2717]: E0509 00:22:19.559044 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.559937 kubelet[2717]: W0509 00:22:19.559056 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.559937 kubelet[2717]: E0509 00:22:19.559131 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.559937 kubelet[2717]: E0509 00:22:19.559233 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.559937 kubelet[2717]: W0509 00:22:19.559240 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.559937 kubelet[2717]: E0509 00:22:19.559397 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.560377 kubelet[2717]: E0509 00:22:19.560357 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.560377 kubelet[2717]: W0509 00:22:19.560376 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.560440 kubelet[2717]: E0509 00:22:19.560401 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.560797 kubelet[2717]: E0509 00:22:19.560679 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.560797 kubelet[2717]: W0509 00:22:19.560694 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.560797 kubelet[2717]: E0509 00:22:19.560730 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 00:22:19.560964 kubelet[2717]: E0509 00:22:19.560870 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.560964 kubelet[2717]: W0509 00:22:19.560879 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.560964 kubelet[2717]: E0509 00:22:19.560936 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.561210 kubelet[2717]: E0509 00:22:19.561195 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.561210 kubelet[2717]: W0509 00:22:19.561210 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.561288 kubelet[2717]: E0509 00:22:19.561275 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.561473 kubelet[2717]: E0509 00:22:19.561460 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.561510 kubelet[2717]: W0509 00:22:19.561473 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.561549 kubelet[2717]: E0509 00:22:19.561534 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.561703 kubelet[2717]: E0509 00:22:19.561691 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.561703 kubelet[2717]: W0509 00:22:19.561703 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.561798 kubelet[2717]: E0509 00:22:19.561785 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.562422 kubelet[2717]: E0509 00:22:19.562391 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.562422 kubelet[2717]: W0509 00:22:19.562410 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.562507 kubelet[2717]: E0509 00:22:19.562432 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 00:22:19.572444 kubelet[2717]: E0509 00:22:19.572354 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:19.572444 kubelet[2717]: W0509 00:22:19.572374 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:19.572444 kubelet[2717]: E0509 00:22:19.572391 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:19.582992 containerd[1538]: time="2025-05-09T00:22:19.582905331Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:22:19.582992 containerd[1538]: time="2025-05-09T00:22:19.582954890Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:22:19.582992 containerd[1538]: time="2025-05-09T00:22:19.582978969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:22:19.583186 containerd[1538]: time="2025-05-09T00:22:19.583069247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:22:19.616491 containerd[1538]: time="2025-05-09T00:22:19.616451594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pt7mz,Uid:a7704a0f-1bf2-4b38-bad9-ad8b4d0964df,Namespace:calico-system,Attempt:0,} returns sandbox id \"87ddf306114dd945da1d38b89ff3042a21c24483676a30653f8006a3542a36d3\"" May 9 00:22:19.617192 kubelet[2717]: E0509 00:22:19.617172 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:22:21.173432 containerd[1538]: time="2025-05-09T00:22:21.173091639Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:22:21.174000 containerd[1538]: time="2025-05-09T00:22:21.173607868Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571" May 9 00:22:21.174398 containerd[1538]: time="2025-05-09T00:22:21.174366771Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:22:21.177066 containerd[1538]: time="2025-05-09T00:22:21.177032392Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 1.618513907s" May 9 00:22:21.177066 containerd[1538]: time="2025-05-09T00:22:21.177066791Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" May 9 00:22:21.178315 containerd[1538]: time="2025-05-09T00:22:21.178289244Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 9 00:22:21.190031 containerd[1538]: time="2025-05-09T00:22:21.189904026Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:22:21.195837 containerd[1538]: time="2025-05-09T00:22:21.195803055Z" level=info msg="CreateContainer within sandbox \"0ff999c8bc2d20cb995574e7bc652071dd444f576a78e604bc91f123b55a4e6b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 9 00:22:21.210488 containerd[1538]: time="2025-05-09T00:22:21.210437570Z" level=info msg="CreateContainer within sandbox \"0ff999c8bc2d20cb995574e7bc652071dd444f576a78e604bc91f123b55a4e6b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f6b6385ca1afeec95290273cd07711adf383cadca8be2ef802255a287f41d0ed\"" May 9 00:22:21.211141 containerd[1538]: time="2025-05-09T00:22:21.211037436Z" level=info msg="StartContainer for \"f6b6385ca1afeec95290273cd07711adf383cadca8be2ef802255a287f41d0ed\"" May 9 00:22:21.261136 containerd[1538]: time="2025-05-09T00:22:21.261091285Z" level=info msg="StartContainer for \"f6b6385ca1afeec95290273cd07711adf383cadca8be2ef802255a287f41d0ed\" returns successfully" May 9 00:22:21.354990 kubelet[2717]: E0509 00:22:21.354779 2717 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c5864" podUID="4c753734-0995-4fcd-9777-f094bc14fa3a" May 9 00:22:21.417536 kubelet[2717]: E0509 00:22:21.417330 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:22:21.485653 kubelet[2717]: E0509 00:22:21.485527 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:21.485653 kubelet[2717]: W0509 00:22:21.485550 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:21.485653 kubelet[2717]: E0509 00:22:21.485570 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:21.486080 kubelet[2717]: E0509 00:22:21.486067 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:21.486080 kubelet[2717]: W0509 00:22:21.486080 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:21.486159 kubelet[2717]: E0509 00:22:21.486090 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 00:22:21.486279 kubelet[2717]: E0509 00:22:21.486253 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:21.486279 kubelet[2717]: W0509 00:22:21.486266 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:21.486279 kubelet[2717]: E0509 00:22:21.486275 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:21.486462 kubelet[2717]: E0509 00:22:21.486438 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:21.486462 kubelet[2717]: W0509 00:22:21.486450 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:21.486462 kubelet[2717]: E0509 00:22:21.486458 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:21.486641 kubelet[2717]: E0509 00:22:21.486629 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:21.486641 kubelet[2717]: W0509 00:22:21.486640 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:21.486713 kubelet[2717]: E0509 00:22:21.486650 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:21.486792 kubelet[2717]: E0509 00:22:21.486780 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:21.486792 kubelet[2717]: W0509 00:22:21.486790 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:21.486848 kubelet[2717]: E0509 00:22:21.486798 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:21.486974 kubelet[2717]: E0509 00:22:21.486964 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:21.486974 kubelet[2717]: W0509 00:22:21.486974 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:21.487031 kubelet[2717]: E0509 00:22:21.486981 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 00:22:21.487119 kubelet[2717]: E0509 00:22:21.487108 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:21.487119 kubelet[2717]: W0509 00:22:21.487118 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:21.487176 kubelet[2717]: E0509 00:22:21.487125 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:21.487267 kubelet[2717]: E0509 00:22:21.487255 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:21.487267 kubelet[2717]: W0509 00:22:21.487266 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:21.487326 kubelet[2717]: E0509 00:22:21.487273 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:21.487448 kubelet[2717]: E0509 00:22:21.487437 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:21.487448 kubelet[2717]: W0509 00:22:21.487448 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:21.487500 kubelet[2717]: E0509 00:22:21.487455 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:21.487635 kubelet[2717]: E0509 00:22:21.487623 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:21.487635 kubelet[2717]: W0509 00:22:21.487633 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:21.487691 kubelet[2717]: E0509 00:22:21.487644 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:21.487795 kubelet[2717]: E0509 00:22:21.487784 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:21.487795 kubelet[2717]: W0509 00:22:21.487794 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:21.487844 kubelet[2717]: E0509 00:22:21.487802 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 00:22:21.487999 kubelet[2717]: E0509 00:22:21.487974 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:21.487999 kubelet[2717]: W0509 00:22:21.487985 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:21.487999 kubelet[2717]: E0509 00:22:21.487993 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:21.488127 kubelet[2717]: E0509 00:22:21.488116 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:21.488127 kubelet[2717]: W0509 00:22:21.488126 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:21.488175 kubelet[2717]: E0509 00:22:21.488133 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:21.488265 kubelet[2717]: E0509 00:22:21.488254 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:21.488289 kubelet[2717]: W0509 00:22:21.488264 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:21.488289 kubelet[2717]: E0509 00:22:21.488272 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:21.566471 kubelet[2717]: E0509 00:22:21.566388 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:21.566471 kubelet[2717]: W0509 00:22:21.566411 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:21.566471 kubelet[2717]: E0509 00:22:21.566427 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:21.566672 kubelet[2717]: E0509 00:22:21.566656 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:21.566672 kubelet[2717]: W0509 00:22:21.566668 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:21.566750 kubelet[2717]: E0509 00:22:21.566681 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 00:22:21.566895 kubelet[2717]: E0509 00:22:21.566886 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:21.566895 kubelet[2717]: W0509 00:22:21.566895 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:21.566951 kubelet[2717]: E0509 00:22:21.566908 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:21.567082 kubelet[2717]: E0509 00:22:21.567072 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:21.567082 kubelet[2717]: W0509 00:22:21.567082 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:21.567158 kubelet[2717]: E0509 00:22:21.567095 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:21.567280 kubelet[2717]: E0509 00:22:21.567245 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:21.567280 kubelet[2717]: W0509 00:22:21.567253 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:21.567280 kubelet[2717]: E0509 00:22:21.567265 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:21.567405 kubelet[2717]: E0509 00:22:21.567395 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:21.567405 kubelet[2717]: W0509 00:22:21.567405 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:21.567453 kubelet[2717]: E0509 00:22:21.567417 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:21.567571 kubelet[2717]: E0509 00:22:21.567562 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:21.567571 kubelet[2717]: W0509 00:22:21.567571 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:21.567641 kubelet[2717]: E0509 00:22:21.567594 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 00:22:21.568010 kubelet[2717]: E0509 00:22:21.567907 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:21.568010 kubelet[2717]: W0509 00:22:21.567922 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:21.568010 kubelet[2717]: E0509 00:22:21.567967 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:21.568333 kubelet[2717]: E0509 00:22:21.568278 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:21.568333 kubelet[2717]: W0509 00:22:21.568289 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:21.568333 kubelet[2717]: E0509 00:22:21.568315 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:21.568659 kubelet[2717]: E0509 00:22:21.568602 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:21.568659 kubelet[2717]: W0509 00:22:21.568614 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:21.568659 kubelet[2717]: E0509 00:22:21.568636 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:21.569013 kubelet[2717]: E0509 00:22:21.568897 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:21.569013 kubelet[2717]: W0509 00:22:21.568908 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:21.569013 kubelet[2717]: E0509 00:22:21.568925 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:21.569262 kubelet[2717]: E0509 00:22:21.569181 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:21.569262 kubelet[2717]: W0509 00:22:21.569192 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:21.569262 kubelet[2717]: E0509 00:22:21.569208 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 00:22:21.569636 kubelet[2717]: E0509 00:22:21.569503 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:21.569636 kubelet[2717]: W0509 00:22:21.569515 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:21.569636 kubelet[2717]: E0509 00:22:21.569533 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:21.569764 kubelet[2717]: E0509 00:22:21.569749 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:21.569764 kubelet[2717]: W0509 00:22:21.569762 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:21.569836 kubelet[2717]: E0509 00:22:21.569773 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:21.569972 kubelet[2717]: E0509 00:22:21.569901 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:21.569972 kubelet[2717]: W0509 00:22:21.569911 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:21.569972 kubelet[2717]: E0509 00:22:21.569919 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:21.570076 kubelet[2717]: E0509 00:22:21.570061 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:21.570076 kubelet[2717]: W0509 00:22:21.570072 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:21.570148 kubelet[2717]: E0509 00:22:21.570081 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:21.570422 kubelet[2717]: E0509 00:22:21.570357 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:21.570422 kubelet[2717]: W0509 00:22:21.570379 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:21.570422 kubelet[2717]: E0509 00:22:21.570397 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 00:22:21.570560 kubelet[2717]: E0509 00:22:21.570548 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:22:21.570609 kubelet[2717]: W0509 00:22:21.570560 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:22:21.570609 kubelet[2717]: E0509 00:22:21.570569 2717 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:22:22.353933 containerd[1538]: time="2025-05-09T00:22:22.353886160Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:22:22.354956 containerd[1538]: time="2025-05-09T00:22:22.354742581Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" May 9 00:22:22.358208 containerd[1538]: time="2025-05-09T00:22:22.358160949Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 1.179840506s" May 9 00:22:22.358390 containerd[1538]: time="2025-05-09T00:22:22.358195828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 9 00:22:22.360180 containerd[1538]: time="2025-05-09T00:22:22.360152066Z" level=info msg="CreateContainer within sandbox \"87ddf306114dd945da1d38b89ff3042a21c24483676a30653f8006a3542a36d3\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 9 00:22:22.360922 containerd[1538]: time="2025-05-09T00:22:22.360884971Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:22:22.361574 containerd[1538]: time="2025-05-09T00:22:22.361546277Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:22:22.375037 containerd[1538]: time="2025-05-09T00:22:22.374997591Z" level=info msg="CreateContainer within sandbox \"87ddf306114dd945da1d38b89ff3042a21c24483676a30653f8006a3542a36d3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e4ae4c86caca160053a172f3d7e7ee37705cc754cce05f4711333ff27e642b3c\"" May 9 00:22:22.375445 containerd[1538]: time="2025-05-09T00:22:22.375420462Z" level=info msg="StartContainer for \"e4ae4c86caca160053a172f3d7e7ee37705cc754cce05f4711333ff27e642b3c\"" May 9 00:22:22.421968 kubelet[2717]: I0509 00:22:22.421939 2717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 9 00:22:22.423005 kubelet[2717]: E0509 00:22:22.422921 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
May 9 00:22:22.423005 kubelet[2717]: E0509 00:22:22.422921 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:22:22.426731 containerd[1538]: time="2025-05-09T00:22:22.426501178Z" level=info msg="StartContainer for \"e4ae4c86caca160053a172f3d7e7ee37705cc754cce05f4711333ff27e642b3c\" returns successfully"
May 9 00:22:22.480301 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e4ae4c86caca160053a172f3d7e7ee37705cc754cce05f4711333ff27e642b3c-rootfs.mount: Deactivated successfully.
May 9 00:22:22.505480 containerd[1538]: time="2025-05-09T00:22:22.499905779Z" level=info msg="shim disconnected" id=e4ae4c86caca160053a172f3d7e7ee37705cc754cce05f4711333ff27e642b3c namespace=k8s.io
May 9 00:22:22.505735 containerd[1538]: time="2025-05-09T00:22:22.505483100Z" level=warning msg="cleaning up after shim disconnected" id=e4ae4c86caca160053a172f3d7e7ee37705cc754cce05f4711333ff27e642b3c namespace=k8s.io
May 9 00:22:22.505735 containerd[1538]: time="2025-05-09T00:22:22.505496860Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 9 00:22:23.353962 kubelet[2717]: E0509 00:22:23.353906 2717 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c5864" podUID="4c753734-0995-4fcd-9777-f094bc14fa3a"
May 9 00:22:23.425218 kubelet[2717]: E0509 00:22:23.425177 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:22:23.426854 containerd[1538]: time="2025-05-09T00:22:23.426780646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\""
May 9 00:22:23.439606 kubelet[2717]: I0509 00:22:23.439534 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-694774dcc6-jjfls" podStartSLOduration=2.820023182 podStartE2EDuration="4.439516147s" podCreationTimestamp="2025-05-09 00:22:19 +0000 UTC" firstStartedPulling="2025-05-09 00:22:19.558240891 +0000 UTC m=+20.279525532" lastFinishedPulling="2025-05-09 00:22:21.177733856 +0000 UTC m=+21.899018497" observedRunningTime="2025-05-09 00:22:21.426606009 +0000 UTC m=+22.147890770" watchObservedRunningTime="2025-05-09 00:22:23.439516147 +0000 UTC m=+24.160800828"
May 9 00:22:25.354656 kubelet[2717]: E0509 00:22:25.354610 2717 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c5864" podUID="4c753734-0995-4fcd-9777-f094bc14fa3a"
May 9 00:22:26.070385 containerd[1538]: time="2025-05-09T00:22:26.070292323Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 00:22:26.071049 containerd[1538]: time="2025-05-09T00:22:26.071012870Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270"
May 9 00:22:26.071633 containerd[1538]: time="2025-05-09T00:22:26.071575060Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:22:26.075121 containerd[1538]: time="2025-05-09T00:22:26.074754683Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 2.647924958s" May 9 00:22:26.075121 containerd[1538]: time="2025-05-09T00:22:26.074780322Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" May 9 00:22:26.086145 containerd[1538]: time="2025-05-09T00:22:26.086110679Z" level=info msg="CreateContainer within sandbox \"87ddf306114dd945da1d38b89ff3042a21c24483676a30653f8006a3542a36d3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 9 00:22:26.096660 containerd[1538]: time="2025-05-09T00:22:26.096544772Z" level=info msg="CreateContainer within sandbox \"87ddf306114dd945da1d38b89ff3042a21c24483676a30653f8006a3542a36d3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3c5d34e6df4404e313630b14770a3894f29debd3666930af1cf675c9752ed105\"" May 9 00:22:26.097125 containerd[1538]: time="2025-05-09T00:22:26.097099402Z" level=info msg="StartContainer for \"3c5d34e6df4404e313630b14770a3894f29debd3666930af1cf675c9752ed105\"" May 9 00:22:26.157175 containerd[1538]: time="2025-05-09T00:22:26.155424078Z" level=info msg="StartContainer for \"3c5d34e6df4404e313630b14770a3894f29debd3666930af1cf675c9752ed105\" returns successfully" May 9 00:22:26.433469 kubelet[2717]: E0509 00:22:26.433075 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:22:26.802290 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c5d34e6df4404e313630b14770a3894f29debd3666930af1cf675c9752ed105-rootfs.mount: Deactivated successfully. 
May 9 00:22:26.825356 kubelet[2717]: I0509 00:22:26.825076 2717 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 9 00:22:26.938009 containerd[1538]: time="2025-05-09T00:22:26.937116315Z" level=info msg="shim disconnected" id=3c5d34e6df4404e313630b14770a3894f29debd3666930af1cf675c9752ed105 namespace=k8s.io May 9 00:22:26.938009 containerd[1538]: time="2025-05-09T00:22:26.937174114Z" level=warning msg="cleaning up after shim disconnected" id=3c5d34e6df4404e313630b14770a3894f29debd3666930af1cf675c9752ed105 namespace=k8s.io May 9 00:22:26.938009 containerd[1538]: time="2025-05-09T00:22:26.937185274Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:22:26.959275 kubelet[2717]: I0509 00:22:26.959058 2717 topology_manager.go:215] "Topology Admit Handler" podUID="38013c7d-6040-44b1-bf5b-b5ceecbabd97" podNamespace="calico-system" podName="calico-kube-controllers-85b589894b-bpbf6" May 9 00:22:27.001474 kubelet[2717]: I0509 00:22:27.000347 2717 topology_manager.go:215] "Topology Admit Handler" podUID="2b79fe7f-93ab-4e08-981f-3cf87817c0d8" podNamespace="calico-apiserver" podName="calico-apiserver-578848dcfb-6t55f" May 9 00:22:27.001474 kubelet[2717]: I0509 00:22:27.000507 2717 topology_manager.go:215] "Topology Admit Handler" podUID="35cd080a-2fa4-4c86-aa58-2bc34990adbf" podNamespace="calico-apiserver" podName="calico-apiserver-578848dcfb-67brr" May 9 00:22:27.001474 kubelet[2717]: I0509 00:22:27.000771 2717 topology_manager.go:215] "Topology Admit Handler" podUID="a2ed08d3-7f7d-48a2-9e9c-5f86ae30999a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-r585w" May 9 00:22:27.001474 kubelet[2717]: I0509 00:22:27.000995 2717 topology_manager.go:215] "Topology Admit Handler" podUID="1e86186f-4466-4f52-abab-bf5cb10cf167" podNamespace="kube-system" podName="coredns-7db6d8ff4d-l97jq" May 9 00:22:27.106849 kubelet[2717]: I0509 00:22:27.106729 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/35cd080a-2fa4-4c86-aa58-2bc34990adbf-calico-apiserver-certs\") pod \"calico-apiserver-578848dcfb-67brr\" (UID: \"35cd080a-2fa4-4c86-aa58-2bc34990adbf\") " pod="calico-apiserver/calico-apiserver-578848dcfb-67brr" May 9 00:22:27.106849 kubelet[2717]: I0509 00:22:27.106811 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9s4c\" (UniqueName: \"kubernetes.io/projected/2b79fe7f-93ab-4e08-981f-3cf87817c0d8-kube-api-access-p9s4c\") pod \"calico-apiserver-578848dcfb-6t55f\" (UID: \"2b79fe7f-93ab-4e08-981f-3cf87817c0d8\") " pod="calico-apiserver/calico-apiserver-578848dcfb-6t55f" May 9 00:22:27.106849 kubelet[2717]: I0509 00:22:27.106832 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1e86186f-4466-4f52-abab-bf5cb10cf167-config-volume\") pod \"coredns-7db6d8ff4d-l97jq\" (UID: \"1e86186f-4466-4f52-abab-bf5cb10cf167\") " pod="kube-system/coredns-7db6d8ff4d-l97jq" May 9 00:22:27.107018 kubelet[2717]: I0509 00:22:27.106859 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpwpv\" (UniqueName: \"kubernetes.io/projected/38013c7d-6040-44b1-bf5b-b5ceecbabd97-kube-api-access-xpwpv\") pod \"calico-kube-controllers-85b589894b-bpbf6\" (UID: \"38013c7d-6040-44b1-bf5b-b5ceecbabd97\") " 
pod="calico-system/calico-kube-controllers-85b589894b-bpbf6" May 9 00:22:27.107089 kubelet[2717]: I0509 00:22:27.107060 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38013c7d-6040-44b1-bf5b-b5ceecbabd97-tigera-ca-bundle\") pod \"calico-kube-controllers-85b589894b-bpbf6\" (UID: \"38013c7d-6040-44b1-bf5b-b5ceecbabd97\") " pod="calico-system/calico-kube-controllers-85b589894b-bpbf6" May 9 00:22:27.107131 kubelet[2717]: I0509 00:22:27.107105 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2b79fe7f-93ab-4e08-981f-3cf87817c0d8-calico-apiserver-certs\") pod \"calico-apiserver-578848dcfb-6t55f\" (UID: \"2b79fe7f-93ab-4e08-981f-3cf87817c0d8\") " pod="calico-apiserver/calico-apiserver-578848dcfb-6t55f" May 9 00:22:27.107131 kubelet[2717]: I0509 00:22:27.107126 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v45c\" (UniqueName: \"kubernetes.io/projected/35cd080a-2fa4-4c86-aa58-2bc34990adbf-kube-api-access-4v45c\") pod \"calico-apiserver-578848dcfb-67brr\" (UID: \"35cd080a-2fa4-4c86-aa58-2bc34990adbf\") " pod="calico-apiserver/calico-apiserver-578848dcfb-67brr" May 9 00:22:27.107186 kubelet[2717]: I0509 00:22:27.107145 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6sjx\" (UniqueName: \"kubernetes.io/projected/a2ed08d3-7f7d-48a2-9e9c-5f86ae30999a-kube-api-access-c6sjx\") pod \"coredns-7db6d8ff4d-r585w\" (UID: \"a2ed08d3-7f7d-48a2-9e9c-5f86ae30999a\") " pod="kube-system/coredns-7db6d8ff4d-r585w" May 9 00:22:27.107519 kubelet[2717]: I0509 00:22:27.107498 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvqqn\" (UniqueName: \"kubernetes.io/projected/1e86186f-4466-4f52-abab-bf5cb10cf167-kube-api-access-mvqqn\") pod \"coredns-7db6d8ff4d-l97jq\" (UID: \"1e86186f-4466-4f52-abab-bf5cb10cf167\") " pod="kube-system/coredns-7db6d8ff4d-l97jq" May 9 00:22:27.107599 kubelet[2717]: I0509 00:22:27.107549 2717 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2ed08d3-7f7d-48a2-9e9c-5f86ae30999a-config-volume\") pod \"coredns-7db6d8ff4d-r585w\" (UID: \"a2ed08d3-7f7d-48a2-9e9c-5f86ae30999a\") " pod="kube-system/coredns-7db6d8ff4d-r585w" May 9 00:22:27.262507 containerd[1538]: time="2025-05-09T00:22:27.262459311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85b589894b-bpbf6,Uid:38013c7d-6040-44b1-bf5b-b5ceecbabd97,Namespace:calico-system,Attempt:0,}" May 9 00:22:27.303960 kubelet[2717]: E0509 00:22:27.303914 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:22:27.304661 containerd[1538]: time="2025-05-09T00:22:27.304620106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-r585w,Uid:a2ed08d3-7f7d-48a2-9e9c-5f86ae30999a,Namespace:kube-system,Attempt:0,}" May 9 00:22:27.309389 containerd[1538]: time="2025-05-09T00:22:27.309044830Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-578848dcfb-67brr,Uid:35cd080a-2fa4-4c86-aa58-2bc34990adbf,Namespace:calico-apiserver,Attempt:0,}" May 9 00:22:27.310816 kubelet[2717]: E0509 00:22:27.310788 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:22:27.311551 containerd[1538]: time="2025-05-09T00:22:27.311351670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-l97jq,Uid:1e86186f-4466-4f52-abab-bf5cb10cf167,Namespace:kube-system,Attempt:0,}" May 9 00:22:27.314036 containerd[1538]: time="2025-05-09T00:22:27.313009602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-578848dcfb-6t55f,Uid:2b79fe7f-93ab-4e08-981f-3cf87817c0d8,Namespace:calico-apiserver,Attempt:0,}" May 9 00:22:27.356735 containerd[1538]: time="2025-05-09T00:22:27.356655610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c5864,Uid:4c753734-0995-4fcd-9777-f094bc14fa3a,Namespace:calico-system,Attempt:0,}" May 9 00:22:27.441560 kubelet[2717]: E0509 00:22:27.439897 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:22:27.448779 containerd[1538]: time="2025-05-09T00:22:27.448741186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 9 00:22:27.522627 containerd[1538]: time="2025-05-09T00:22:27.522560036Z" level=error msg="Failed to destroy network for sandbox \"0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:22:27.523142 containerd[1538]: time="2025-05-09T00:22:27.523112346Z" level=error msg="encountered an error cleaning up failed sandbox \"0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:22:27.523306 containerd[1538]: time="2025-05-09T00:22:27.523279463Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-578848dcfb-67brr,Uid:35cd080a-2fa4-4c86-aa58-2bc34990adbf,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:22:27.526956 kubelet[2717]: E0509 00:22:27.526881 2717 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:22:27.527059 kubelet[2717]: E0509 00:22:27.526965 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-578848dcfb-67brr" May 9 00:22:27.527059 kubelet[2717]: E0509 00:22:27.526990 2717 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-578848dcfb-67brr" May 9 00:22:27.527059 kubelet[2717]: E0509 00:22:27.527034 2717 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-578848dcfb-67brr_calico-apiserver(35cd080a-2fa4-4c86-aa58-2bc34990adbf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-578848dcfb-67brr_calico-apiserver(35cd080a-2fa4-4c86-aa58-2bc34990adbf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-578848dcfb-67brr" podUID="35cd080a-2fa4-4c86-aa58-2bc34990adbf" May 9 00:22:27.528338 containerd[1538]: time="2025-05-09T00:22:27.528300257Z" level=error msg="Failed to destroy network for sandbox \"5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:22:27.528679 containerd[1538]: time="2025-05-09T00:22:27.528650251Z" level=error msg="encountered an error cleaning up failed sandbox \"5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:22:27.528725 containerd[1538]: time="2025-05-09T00:22:27.528701250Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-l97jq,Uid:1e86186f-4466-4f52-abab-bf5cb10cf167,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:22:27.528927 kubelet[2717]: E0509 00:22:27.528898 2717 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:22:27.528979 kubelet[2717]: E0509 00:22:27.528947 2717 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-l97jq" May 9 00:22:27.528979 kubelet[2717]: E0509 00:22:27.528967 2717 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-l97jq" May 9 00:22:27.529034 kubelet[2717]: E0509 00:22:27.529001 2717 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-l97jq_kube-system(1e86186f-4466-4f52-abab-bf5cb10cf167)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-l97jq_kube-system(1e86186f-4466-4f52-abab-bf5cb10cf167)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-l97jq" podUID="1e86186f-4466-4f52-abab-bf5cb10cf167" May 9 00:22:27.537795 containerd[1538]: time="2025-05-09T00:22:27.537743414Z" level=error msg="Failed to destroy network for sandbox \"64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:22:27.538142 containerd[1538]: time="2025-05-09T00:22:27.538102888Z" level=error msg="encountered an error cleaning up failed sandbox \"64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:22:27.538184 containerd[1538]: time="2025-05-09T00:22:27.538167567Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-578848dcfb-6t55f,Uid:2b79fe7f-93ab-4e08-981f-3cf87817c0d8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:22:27.538512 kubelet[2717]: E0509 00:22:27.538400 2717 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
May 9 00:22:27.538512 kubelet[2717]: E0509 00:22:27.538485 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-578848dcfb-6t55f" May 9 00:22:27.538621 kubelet[2717]: E0509 00:22:27.538518 2717 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-578848dcfb-6t55f" May 9 00:22:27.538621 kubelet[2717]: E0509 00:22:27.538566 2717 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-578848dcfb-6t55f_calico-apiserver(2b79fe7f-93ab-4e08-981f-3cf87817c0d8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-578848dcfb-6t55f_calico-apiserver(2b79fe7f-93ab-4e08-981f-3cf87817c0d8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-578848dcfb-6t55f" podUID="2b79fe7f-93ab-4e08-981f-3cf87817c0d8" May 9 00:22:27.539097 containerd[1538]: time="2025-05-09T00:22:27.539054512Z" level=error msg="Failed to destroy network for sandbox \"97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:22:27.539504 containerd[1538]: time="2025-05-09T00:22:27.539474665Z" level=error msg="Failed to destroy network for sandbox \"730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:22:27.539840 containerd[1538]: time="2025-05-09T00:22:27.539763980Z" level=error msg="encountered an error cleaning up failed sandbox \"97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:22:27.539969 containerd[1538]: time="2025-05-09T00:22:27.539831378Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-r585w,Uid:a2ed08d3-7f7d-48a2-9e9c-5f86ae30999a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" May 9 00:22:27.540088 kubelet[2717]: E0509 00:22:27.540015 2717 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:22:27.540088 kubelet[2717]: E0509 00:22:27.540059 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-r585w" May 9 00:22:27.540088 kubelet[2717]: E0509 00:22:27.540075 2717 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-r585w" May 9 00:22:27.540196 kubelet[2717]: E0509 00:22:27.540102 2717 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-r585w_kube-system(a2ed08d3-7f7d-48a2-9e9c-5f86ae30999a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-r585w_kube-system(a2ed08d3-7f7d-48a2-9e9c-5f86ae30999a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-r585w" podUID="a2ed08d3-7f7d-48a2-9e9c-5f86ae30999a" May 9 00:22:27.540762 containerd[1538]: time="2025-05-09T00:22:27.540515647Z" level=error msg="encountered an error cleaning up failed sandbox \"730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:22:27.540762 containerd[1538]: time="2025-05-09T00:22:27.540664004Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85b589894b-bpbf6,Uid:38013c7d-6040-44b1-bf5b-b5ceecbabd97,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:22:27.540888 kubelet[2717]: E0509 00:22:27.540815 2717 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:22:27.540888 kubelet[2717]: E0509 00:22:27.540852 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85b589894b-bpbf6" May 9 00:22:27.540888 kubelet[2717]: E0509 00:22:27.540867 2717 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85b589894b-bpbf6" May 9 00:22:27.540964 kubelet[2717]: E0509 00:22:27.540896 2717 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-85b589894b-bpbf6_calico-system(38013c7d-6040-44b1-bf5b-b5ceecbabd97)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-85b589894b-bpbf6_calico-system(38013c7d-6040-44b1-bf5b-b5ceecbabd97)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85b589894b-bpbf6" podUID="38013c7d-6040-44b1-bf5b-b5ceecbabd97" May 9 00:22:27.545367 containerd[1538]: time="2025-05-09T00:22:27.545303124Z" level=error msg="Failed to destroy network for sandbox \"2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:22:27.545677 containerd[1538]: time="2025-05-09T00:22:27.545650598Z" level=error msg="encountered an error cleaning up failed sandbox \"2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:22:27.545727 containerd[1538]: time="2025-05-09T00:22:27.545703277Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c5864,Uid:4c753734-0995-4fcd-9777-f094bc14fa3a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:22:27.545948 kubelet[2717]: E0509 00:22:27.545918 2717 
remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:22:27.545996 kubelet[2717]: E0509 00:22:27.545962 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c5864" May 9 00:22:27.545996 kubelet[2717]: E0509 00:22:27.545978 2717 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c5864" May 9 00:22:27.546055 kubelet[2717]: E0509 00:22:27.546011 2717 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-c5864_calico-system(4c753734-0995-4fcd-9777-f094bc14fa3a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-c5864_calico-system(4c753734-0995-4fcd-9777-f094bc14fa3a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-c5864" podUID="4c753734-0995-4fcd-9777-f094bc14fa3a" May 9 00:22:28.441203 kubelet[2717]: I0509 00:22:28.440902 2717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4" May 9 00:22:28.441631 containerd[1538]: time="2025-05-09T00:22:28.441599592Z" level=info msg="StopPodSandbox for \"5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4\"" May 9 00:22:28.441891 containerd[1538]: time="2025-05-09T00:22:28.441761229Z" level=info msg="Ensure that sandbox 5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4 in task-service has been cleanup successfully" May 9 00:22:28.443056 kubelet[2717]: I0509 00:22:28.443023 2717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64" May 9 00:22:28.444075 containerd[1538]: time="2025-05-09T00:22:28.444047791Z" level=info msg="StopPodSandbox for \"2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64\"" May 9 00:22:28.444885 containerd[1538]: time="2025-05-09T00:22:28.444476344Z" level=info msg="Ensure that sandbox 2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64 in task-service has been cleanup successfully" May 9 00:22:28.447661 kubelet[2717]: I0509 00:22:28.447145 2717 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c" May 9 00:22:28.447770 containerd[1538]: time="2025-05-09T00:22:28.447687971Z" level=info msg="StopPodSandbox for \"97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c\"" May 9 00:22:28.449277 containerd[1538]: time="2025-05-09T00:22:28.447828489Z" level=info msg="Ensure that sandbox 97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c in task-service has been cleanup successfully" May 9 00:22:28.449347 kubelet[2717]: I0509 00:22:28.449020 2717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2" May 9 00:22:28.449955 containerd[1538]: time="2025-05-09T00:22:28.449549860Z" level=info msg="StopPodSandbox for \"730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2\"" May 9 00:22:28.449955 containerd[1538]: time="2025-05-09T00:22:28.449809216Z" level=info msg="Ensure that sandbox 730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2 in task-service has been cleanup successfully" May 9 00:22:28.452553 kubelet[2717]: I0509 00:22:28.452525 2717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e" May 9 00:22:28.454007 containerd[1538]: time="2025-05-09T00:22:28.453978867Z" level=info msg="StopPodSandbox for \"0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e\"" May 9 00:22:28.454529 containerd[1538]: time="2025-05-09T00:22:28.454492378Z" level=info msg="Ensure that sandbox 0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e in task-service has been cleanup successfully" May 9 00:22:28.457618 kubelet[2717]: I0509 00:22:28.457061 2717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce" May 9 00:22:28.457734 containerd[1538]: time="2025-05-09T00:22:28.457695165Z" level=info msg="StopPodSandbox for \"64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce\"" May 9 00:22:28.457879 containerd[1538]: time="2025-05-09T00:22:28.457853763Z" level=info msg="Ensure that sandbox 64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce in task-service has been cleanup successfully" May 9 00:22:28.498160 containerd[1538]: time="2025-05-09T00:22:28.498092497Z" level=error msg="StopPodSandbox for \"2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64\" failed" error="failed to destroy network for sandbox \"2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:22:28.500999 containerd[1538]: time="2025-05-09T00:22:28.500847011Z" level=error msg="StopPodSandbox for \"97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c\" failed" error="failed to destroy network for sandbox \"97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:22:28.501668 kubelet[2717]: E0509 00:22:28.501466 2717 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c" May 9 00:22:28.501668 kubelet[2717]: E0509 00:22:28.501526 2717 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c"} May 9 00:22:28.501668 kubelet[2717]: E0509 00:22:28.501612 2717 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a2ed08d3-7f7d-48a2-9e9c-5f86ae30999a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 9 00:22:28.501668 kubelet[2717]: E0509 00:22:28.501635 2717 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a2ed08d3-7f7d-48a2-9e9c-5f86ae30999a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-r585w" podUID="a2ed08d3-7f7d-48a2-9e9c-5f86ae30999a" May 9 00:22:28.502280 containerd[1538]: time="2025-05-09T00:22:28.502186189Z" level=error msg="StopPodSandbox for \"5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4\" failed" error="failed to destroy network for sandbox \"5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:22:28.502554 kubelet[2717]: E0509 00:22:28.502452 2717 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4" May 9 00:22:28.502554 kubelet[2717]: E0509 00:22:28.502490 2717 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4"} May 9 00:22:28.502554 kubelet[2717]: E0509 00:22:28.502516 2717 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1e86186f-4466-4f52-abab-bf5cb10cf167\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" May 9 00:22:28.502554 kubelet[2717]: E0509 00:22:28.502524 2717 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64" May 9 00:22:28.502756 kubelet[2717]: E0509 00:22:28.502599 2717 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64"} May 9 00:22:28.502756 kubelet[2717]: E0509 00:22:28.502631 2717 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4c753734-0995-4fcd-9777-f094bc14fa3a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 9 00:22:28.502756 kubelet[2717]: E0509 00:22:28.502650 2717 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4c753734-0995-4fcd-9777-f094bc14fa3a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-c5864" podUID="4c753734-0995-4fcd-9777-f094bc14fa3a" May 9 00:22:28.502756 kubelet[2717]: E0509 00:22:28.502536 2717 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1e86186f-4466-4f52-abab-bf5cb10cf167\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-l97jq" podUID="1e86186f-4466-4f52-abab-bf5cb10cf167" May 9 00:22:28.508528 containerd[1538]: time="2025-05-09T00:22:28.508481525Z" level=error msg="StopPodSandbox for \"0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e\" failed" error="failed to destroy network for sandbox \"0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:22:28.508726 kubelet[2717]: E0509 00:22:28.508671 2717 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" podSandboxID="0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e" May 9 00:22:28.508776 kubelet[2717]: E0509 00:22:28.508744 2717 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e"} May 9 00:22:28.508803 kubelet[2717]: E0509 00:22:28.508789 2717 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"35cd080a-2fa4-4c86-aa58-2bc34990adbf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 9 00:22:28.508851 kubelet[2717]: E0509 00:22:28.508808 2717 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"35cd080a-2fa4-4c86-aa58-2bc34990adbf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-578848dcfb-67brr" podUID="35cd080a-2fa4-4c86-aa58-2bc34990adbf" May 9 00:22:28.510188 containerd[1538]: time="2025-05-09T00:22:28.510150537Z" level=error msg="StopPodSandbox for \"64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce\" failed" error="failed to destroy network for sandbox \"64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:22:28.510456 kubelet[2717]: E0509 00:22:28.510318 2717 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce" May 9 00:22:28.510456 kubelet[2717]: E0509 00:22:28.510355 2717 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce"} May 9 00:22:28.510456 kubelet[2717]: E0509 00:22:28.510385 2717 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2b79fe7f-93ab-4e08-981f-3cf87817c0d8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 9 00:22:28.510456 kubelet[2717]: E0509 00:22:28.510406 2717 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2b79fe7f-93ab-4e08-981f-3cf87817c0d8\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-578848dcfb-6t55f" podUID="2b79fe7f-93ab-4e08-981f-3cf87817c0d8" May 9 00:22:28.512661 containerd[1538]: time="2025-05-09T00:22:28.512563578Z" level=error msg="StopPodSandbox for \"730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2\" failed" error="failed to destroy network for sandbox \"730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:22:28.512800 kubelet[2717]: E0509 00:22:28.512769 2717 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2" May 9 00:22:28.512853 kubelet[2717]: E0509 00:22:28.512810 2717 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2"} May 9 00:22:28.512881 kubelet[2717]: E0509 00:22:28.512836 2717 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"38013c7d-6040-44b1-bf5b-b5ceecbabd97\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 9 00:22:28.512881 kubelet[2717]: E0509 00:22:28.512869 2717 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"38013c7d-6040-44b1-bf5b-b5ceecbabd97\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85b589894b-bpbf6" podUID="38013c7d-6040-44b1-bf5b-b5ceecbabd97" May 9 00:22:28.898956 systemd[1]: Started sshd@7-10.0.0.135:22-10.0.0.1:34866.service - OpenSSH per-connection server daemon (10.0.0.1:34866). May 9 00:22:28.938151 sshd[3897]: Accepted publickey for core from 10.0.0.1 port 34866 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 9 00:22:28.939691 sshd[3897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:22:28.943995 systemd-logind[1519]: New session 8 of user core. May 9 00:22:28.951902 systemd[1]: Started session-8.scope - Session 8 of User core. 
May 9 00:22:29.089799 sshd[3897]: pam_unix(sshd:session): session closed for user core May 9 00:22:29.093401 systemd[1]: sshd@7-10.0.0.135:22-10.0.0.1:34866.service: Deactivated successfully. May 9 00:22:29.096438 systemd-logind[1519]: Session 8 logged out. Waiting for processes to exit. May 9 00:22:29.097183 systemd[1]: session-8.scope: Deactivated successfully. May 9 00:22:29.098772 systemd-logind[1519]: Removed session 8. May 9 00:22:31.050352 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3193389110.mount: Deactivated successfully. May 9 00:22:31.199415 containerd[1538]: time="2025-05-09T00:22:31.199342846Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:22:31.200084 containerd[1538]: time="2025-05-09T00:22:31.200039675Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" May 9 00:22:31.212017 containerd[1538]: time="2025-05-09T00:22:31.211985779Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:22:31.213832 containerd[1538]: time="2025-05-09T00:22:31.213801392Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:22:31.223309 containerd[1538]: time="2025-05-09T00:22:31.223230652Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 3.773423124s" May 9 00:22:31.223355 containerd[1538]: time="2025-05-09T00:22:31.223307491Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" May 9 00:22:31.230908 containerd[1538]: time="2025-05-09T00:22:31.230790180Z" level=info msg="CreateContainer within sandbox \"87ddf306114dd945da1d38b89ff3042a21c24483676a30653f8006a3542a36d3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 9 00:22:31.242962 containerd[1538]: time="2025-05-09T00:22:31.242918201Z" level=info msg="CreateContainer within sandbox \"87ddf306114dd945da1d38b89ff3042a21c24483676a30653f8006a3542a36d3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9ad33b23e2659f61d7d30fc5b62fc1902a2b40fa1de55e596b04100c2fca9ce8\"" May 9 00:22:31.243442 containerd[1538]: time="2025-05-09T00:22:31.243406673Z" level=info msg="StartContainer for \"9ad33b23e2659f61d7d30fc5b62fc1902a2b40fa1de55e596b04100c2fca9ce8\"" May 9 00:22:31.420038 containerd[1538]: time="2025-05-09T00:22:31.418994274Z" level=info msg="StartContainer for \"9ad33b23e2659f61d7d30fc5b62fc1902a2b40fa1de55e596b04100c2fca9ce8\" returns successfully" May 9 00:22:31.465149 kubelet[2717]: E0509 00:22:31.465110 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:22:31.495308 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. 
May 9 00:22:31.495410 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. May 9 00:22:32.466271 kubelet[2717]: I0509 00:22:32.466231 2717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 9 00:22:32.466986 kubelet[2717]: E0509 00:22:32.466955 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:22:34.103869 systemd[1]: Started sshd@8-10.0.0.135:22-10.0.0.1:55316.service - OpenSSH per-connection server daemon (10.0.0.1:55316). May 9 00:22:34.148678 sshd[4117]: Accepted publickey for core from 10.0.0.1 port 55316 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 9 00:22:34.152812 sshd[4117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:22:34.158841 systemd-logind[1519]: New session 9 of user core. May 9 00:22:34.170901 systemd[1]: Started session-9.scope - Session 9 of User core. May 9 00:22:34.310497 sshd[4117]: pam_unix(sshd:session): session closed for user core May 9 00:22:34.313736 systemd-logind[1519]: Session 9 logged out. Waiting for processes to exit. May 9 00:22:34.314115 systemd[1]: sshd@8-10.0.0.135:22-10.0.0.1:55316.service: Deactivated successfully. May 9 00:22:34.316057 systemd[1]: session-9.scope: Deactivated successfully. May 9 00:22:34.316693 systemd-logind[1519]: Removed session 9. May 9 00:22:38.640758 kubelet[2717]: I0509 00:22:38.639923 2717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 9 00:22:38.640758 kubelet[2717]: E0509 00:22:38.640655 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:22:39.323806 systemd[1]: Started sshd@9-10.0.0.135:22-10.0.0.1:55330.service - OpenSSH per-connection server daemon (10.0.0.1:55330). May 9 00:22:39.356685 sshd[4299]: Accepted publickey for core from 10.0.0.1 port 55330 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 9 00:22:39.358017 sshd[4299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:22:39.363380 systemd-logind[1519]: New session 10 of user core. May 9 00:22:39.370800 systemd[1]: Started session-10.scope - Session 10 of User core. May 9 00:22:39.487040 sshd[4299]: pam_unix(sshd:session): session closed for user core May 9 00:22:39.495834 systemd[1]: Started sshd@10-10.0.0.135:22-10.0.0.1:55344.service - OpenSSH per-connection server daemon (10.0.0.1:55344). May 9 00:22:39.496240 systemd[1]: sshd@9-10.0.0.135:22-10.0.0.1:55330.service: Deactivated successfully. May 9 00:22:39.499092 systemd[1]: session-10.scope: Deactivated successfully. May 9 00:22:39.499478 systemd-logind[1519]: Session 10 logged out. Waiting for processes to exit. May 9 00:22:39.501291 systemd-logind[1519]: Removed session 10. May 9 00:22:39.529340 sshd[4312]: Accepted publickey for core from 10.0.0.1 port 55344 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 9 00:22:39.531048 sshd[4312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:22:39.535332 systemd-logind[1519]: New session 11 of user core. May 9 00:22:39.542864 systemd[1]: Started session-11.scope - Session 11 of User core.
May 9 00:22:39.687208 sshd[4312]: pam_unix(sshd:session): session closed for user core
May 9 00:22:39.692905 systemd[1]: Started sshd@11-10.0.0.135:22-10.0.0.1:55346.service - OpenSSH per-connection server daemon (10.0.0.1:55346).
May 9 00:22:39.693346 systemd[1]: sshd@10-10.0.0.135:22-10.0.0.1:55344.service: Deactivated successfully.
May 9 00:22:39.699736 systemd[1]: session-11.scope: Deactivated successfully.
May 9 00:22:39.704336 systemd-logind[1519]: Session 11 logged out. Waiting for processes to exit.
May 9 00:22:39.709321 systemd-logind[1519]: Removed session 11.
May 9 00:22:39.733344 sshd[4326]: Accepted publickey for core from 10.0.0.1 port 55346 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA
May 9 00:22:39.734726 sshd[4326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:22:39.739065 systemd-logind[1519]: New session 12 of user core.
May 9 00:22:39.748848 systemd[1]: Started session-12.scope - Session 12 of User core.
May 9 00:22:39.860810 sshd[4326]: pam_unix(sshd:session): session closed for user core
May 9 00:22:39.864099 systemd[1]: sshd@11-10.0.0.135:22-10.0.0.1:55346.service: Deactivated successfully.
May 9 00:22:39.866160 systemd-logind[1519]: Session 12 logged out. Waiting for processes to exit.
May 9 00:22:39.866563 systemd[1]: session-12.scope: Deactivated successfully.
May 9 00:22:39.867732 systemd-logind[1519]: Removed session 12.
May 9 00:22:41.355472 containerd[1538]: time="2025-05-09T00:22:41.355200264Z" level=info msg="StopPodSandbox for \"97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c\""
May 9 00:22:41.451802 kubelet[2717]: I0509 00:22:41.451656 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-pt7mz" podStartSLOduration=10.846017358 podStartE2EDuration="22.451636171s" podCreationTimestamp="2025-05-09 00:22:19 +0000 UTC" firstStartedPulling="2025-05-09 00:22:19.618336588 +0000 UTC m=+20.339621269" lastFinishedPulling="2025-05-09 00:22:31.223955401 +0000 UTC m=+31.945240082" observedRunningTime="2025-05-09 00:22:31.479916533 +0000 UTC m=+32.201201214" watchObservedRunningTime="2025-05-09 00:22:41.451636171 +0000 UTC m=+42.172920852"
May 9 00:22:41.557704 containerd[1538]: 2025-05-09 00:22:41.451 [INFO][4407] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c"
May 9 00:22:41.557704 containerd[1538]: 2025-05-09 00:22:41.451 [INFO][4407] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c" iface="eth0" netns="/var/run/netns/cni-32b84f91-b8b7-2025-b964-378ffee69d09"
May 9 00:22:41.557704 containerd[1538]: 2025-05-09 00:22:41.452 [INFO][4407] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c" iface="eth0" netns="/var/run/netns/cni-32b84f91-b8b7-2025-b964-378ffee69d09"
May 9 00:22:41.557704 containerd[1538]: 2025-05-09 00:22:41.453 [INFO][4407] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c" iface="eth0" netns="/var/run/netns/cni-32b84f91-b8b7-2025-b964-378ffee69d09"
May 9 00:22:41.557704 containerd[1538]: 2025-05-09 00:22:41.453 [INFO][4407] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c"
May 9 00:22:41.557704 containerd[1538]: 2025-05-09 00:22:41.453 [INFO][4407] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c"
May 9 00:22:41.557704 containerd[1538]: 2025-05-09 00:22:41.543 [INFO][4415] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c" HandleID="k8s-pod-network.97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c" Workload="localhost-k8s-coredns--7db6d8ff4d--r585w-eth0"
May 9 00:22:41.557704 containerd[1538]: 2025-05-09 00:22:41.543 [INFO][4415] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 9 00:22:41.557704 containerd[1538]: 2025-05-09 00:22:41.544 [INFO][4415] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 9 00:22:41.557704 containerd[1538]: 2025-05-09 00:22:41.553 [WARNING][4415] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c" HandleID="k8s-pod-network.97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c" Workload="localhost-k8s-coredns--7db6d8ff4d--r585w-eth0"
May 9 00:22:41.557704 containerd[1538]: 2025-05-09 00:22:41.553 [INFO][4415] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c" HandleID="k8s-pod-network.97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c" Workload="localhost-k8s-coredns--7db6d8ff4d--r585w-eth0"
May 9 00:22:41.557704 containerd[1538]: 2025-05-09 00:22:41.554 [INFO][4415] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 9 00:22:41.557704 containerd[1538]: 2025-05-09 00:22:41.556 [INFO][4407] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c"
May 9 00:22:41.558427 containerd[1538]: time="2025-05-09T00:22:41.557840732Z" level=info msg="TearDown network for sandbox \"97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c\" successfully"
May 9 00:22:41.558427 containerd[1538]: time="2025-05-09T00:22:41.557868331Z" level=info msg="StopPodSandbox for \"97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c\" returns successfully"
May 9 00:22:41.558495 kubelet[2717]: E0509 00:22:41.558233 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:22:41.559866 containerd[1538]: time="2025-05-09T00:22:41.559566713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-r585w,Uid:a2ed08d3-7f7d-48a2-9e9c-5f86ae30999a,Namespace:kube-system,Attempt:1,}"
May 9 00:22:41.560067 systemd[1]: run-netns-cni\x2d32b84f91\x2db8b7\x2d2025\x2db964\x2d378ffee69d09.mount: Deactivated successfully.
May 9 00:22:41.678115 systemd-networkd[1224]: calic9cc19bc975: Link UP
May 9 00:22:41.678344 systemd-networkd[1224]: calic9cc19bc975: Gained carrier
May 9 00:22:41.712626 containerd[1538]: 2025-05-09 00:22:41.591 [INFO][4424] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
May 9 00:22:41.712626 containerd[1538]: 2025-05-09 00:22:41.603 [INFO][4424] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--r585w-eth0 coredns-7db6d8ff4d- kube-system a2ed08d3-7f7d-48a2-9e9c-5f86ae30999a 918 0 2025-05-09 00:22:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-r585w eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic9cc19bc975 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="1109d602a0a9f37fde13ad6401b58ddae28af15e52abe712e1bd821264b270df" Namespace="kube-system" Pod="coredns-7db6d8ff4d-r585w" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--r585w-"
May 9 00:22:41.712626 containerd[1538]: 2025-05-09 00:22:41.603 [INFO][4424] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1109d602a0a9f37fde13ad6401b58ddae28af15e52abe712e1bd821264b270df" Namespace="kube-system" Pod="coredns-7db6d8ff4d-r585w" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--r585w-eth0"
May 9 00:22:41.712626 containerd[1538]: 2025-05-09 00:22:41.630 [INFO][4438] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1109d602a0a9f37fde13ad6401b58ddae28af15e52abe712e1bd821264b270df" HandleID="k8s-pod-network.1109d602a0a9f37fde13ad6401b58ddae28af15e52abe712e1bd821264b270df" Workload="localhost-k8s-coredns--7db6d8ff4d--r585w-eth0"
May 9 00:22:41.712626 containerd[1538]: 2025-05-09 00:22:41.641 [INFO][4438] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1109d602a0a9f37fde13ad6401b58ddae28af15e52abe712e1bd821264b270df" HandleID="k8s-pod-network.1109d602a0a9f37fde13ad6401b58ddae28af15e52abe712e1bd821264b270df" Workload="localhost-k8s-coredns--7db6d8ff4d--r585w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f2e90), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-r585w", "timestamp":"2025-05-09 00:22:41.630674496 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
May 9 00:22:41.712626 containerd[1538]: 2025-05-09 00:22:41.641 [INFO][4438] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 9 00:22:41.712626 containerd[1538]: 2025-05-09 00:22:41.641 [INFO][4438] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 9 00:22:41.712626 containerd[1538]: 2025-05-09 00:22:41.641 [INFO][4438] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
May 9 00:22:41.712626 containerd[1538]: 2025-05-09 00:22:41.643 [INFO][4438] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1109d602a0a9f37fde13ad6401b58ddae28af15e52abe712e1bd821264b270df" host="localhost"
May 9 00:22:41.712626 containerd[1538]: 2025-05-09 00:22:41.647 [INFO][4438] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
May 9 00:22:41.712626 containerd[1538]: 2025-05-09 00:22:41.653 [INFO][4438] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
May 9 00:22:41.712626 containerd[1538]: 2025-05-09 00:22:41.654 [INFO][4438] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
May 9 00:22:41.712626 containerd[1538]: 2025-05-09 00:22:41.656 [INFO][4438] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
May 9 00:22:41.712626 containerd[1538]: 2025-05-09 00:22:41.656 [INFO][4438] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1109d602a0a9f37fde13ad6401b58ddae28af15e52abe712e1bd821264b270df" host="localhost"
May 9 00:22:41.712626 containerd[1538]: 2025-05-09 00:22:41.658 [INFO][4438] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1109d602a0a9f37fde13ad6401b58ddae28af15e52abe712e1bd821264b270df
May 9 00:22:41.712626 containerd[1538]: 2025-05-09 00:22:41.661 [INFO][4438] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1109d602a0a9f37fde13ad6401b58ddae28af15e52abe712e1bd821264b270df" host="localhost"
May 9 00:22:41.712626 containerd[1538]: 2025-05-09 00:22:41.665 [INFO][4438] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.1109d602a0a9f37fde13ad6401b58ddae28af15e52abe712e1bd821264b270df" host="localhost"
May 9 00:22:41.712626 containerd[1538]: 2025-05-09 00:22:41.665 [INFO][4438] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.1109d602a0a9f37fde13ad6401b58ddae28af15e52abe712e1bd821264b270df" host="localhost"
May 9 00:22:41.712626 containerd[1538]: 2025-05-09 00:22:41.665 [INFO][4438] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 9 00:22:41.712626 containerd[1538]: 2025-05-09 00:22:41.665 [INFO][4438] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="1109d602a0a9f37fde13ad6401b58ddae28af15e52abe712e1bd821264b270df" HandleID="k8s-pod-network.1109d602a0a9f37fde13ad6401b58ddae28af15e52abe712e1bd821264b270df" Workload="localhost-k8s-coredns--7db6d8ff4d--r585w-eth0"
May 9 00:22:41.717138 containerd[1538]: 2025-05-09 00:22:41.669 [INFO][4424] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1109d602a0a9f37fde13ad6401b58ddae28af15e52abe712e1bd821264b270df" Namespace="kube-system" Pod="coredns-7db6d8ff4d-r585w" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--r585w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--r585w-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"a2ed08d3-7f7d-48a2-9e9c-5f86ae30999a", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 22, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-r585w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic9cc19bc975", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
May 9 00:22:41.717138 containerd[1538]: 2025-05-09 00:22:41.669 [INFO][4424] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="1109d602a0a9f37fde13ad6401b58ddae28af15e52abe712e1bd821264b270df" Namespace="kube-system" Pod="coredns-7db6d8ff4d-r585w" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--r585w-eth0"
May 9 00:22:41.717138 containerd[1538]: 2025-05-09 00:22:41.669 [INFO][4424] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic9cc19bc975 ContainerID="1109d602a0a9f37fde13ad6401b58ddae28af15e52abe712e1bd821264b270df" Namespace="kube-system" Pod="coredns-7db6d8ff4d-r585w" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--r585w-eth0"
May 9 00:22:41.717138 containerd[1538]: 2025-05-09 00:22:41.678 [INFO][4424] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1109d602a0a9f37fde13ad6401b58ddae28af15e52abe712e1bd821264b270df" Namespace="kube-system" Pod="coredns-7db6d8ff4d-r585w" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--r585w-eth0"
May 9 00:22:41.717138 containerd[1538]: 2025-05-09 00:22:41.679 [INFO][4424] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1109d602a0a9f37fde13ad6401b58ddae28af15e52abe712e1bd821264b270df" Namespace="kube-system" Pod="coredns-7db6d8ff4d-r585w" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--r585w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--r585w-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"a2ed08d3-7f7d-48a2-9e9c-5f86ae30999a", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 22, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1109d602a0a9f37fde13ad6401b58ddae28af15e52abe712e1bd821264b270df", Pod:"coredns-7db6d8ff4d-r585w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic9cc19bc975", MAC:"be:1f:25:ae:fe:23", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
May 9 00:22:41.717138 containerd[1538]: 2025-05-09 00:22:41.695 [INFO][4424] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1109d602a0a9f37fde13ad6401b58ddae28af15e52abe712e1bd821264b270df" Namespace="kube-system" Pod="coredns-7db6d8ff4d-r585w" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--r585w-eth0"
May 9 00:22:41.734945 containerd[1538]: time="2025-05-09T00:22:41.734862279Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 9 00:22:41.734945 containerd[1538]: time="2025-05-09T00:22:41.734909878Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 9 00:22:41.734945 containerd[1538]: time="2025-05-09T00:22:41.734921678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 00:22:41.735279 containerd[1538]: time="2025-05-09T00:22:41.735243595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 00:22:41.757031 systemd-resolved[1432]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 9 00:22:41.772838 containerd[1538]: time="2025-05-09T00:22:41.772700786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-r585w,Uid:a2ed08d3-7f7d-48a2-9e9c-5f86ae30999a,Namespace:kube-system,Attempt:1,} returns sandbox id \"1109d602a0a9f37fde13ad6401b58ddae28af15e52abe712e1bd821264b270df\""
May 9 00:22:41.773490 kubelet[2717]: E0509 00:22:41.773466 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:22:41.775878 containerd[1538]: time="2025-05-09T00:22:41.775827511Z" level=info msg="CreateContainer within sandbox \"1109d602a0a9f37fde13ad6401b58ddae28af15e52abe712e1bd821264b270df\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 9 00:22:41.786861 containerd[1538]: time="2025-05-09T00:22:41.786793912Z" level=info msg="CreateContainer within sandbox \"1109d602a0a9f37fde13ad6401b58ddae28af15e52abe712e1bd821264b270df\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"97e91e086995f2ab25d39869e8dfdc4b9135e80f362d8c1ca3b1790e7fda6866\""
May 9 00:22:41.787816 containerd[1538]: time="2025-05-09T00:22:41.787208907Z" level=info msg="StartContainer for \"97e91e086995f2ab25d39869e8dfdc4b9135e80f362d8c1ca3b1790e7fda6866\""
May 9 00:22:41.831629 containerd[1538]: time="2025-05-09T00:22:41.831563583Z" level=info msg="StartContainer for \"97e91e086995f2ab25d39869e8dfdc4b9135e80f362d8c1ca3b1790e7fda6866\" returns successfully"
May 9 00:22:42.355871 containerd[1538]: time="2025-05-09T00:22:42.355490238Z" level=info msg="StopPodSandbox for \"2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64\""
May 9 00:22:42.355871 containerd[1538]: time="2025-05-09T00:22:42.355648236Z" level=info msg="StopPodSandbox for \"0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e\""
May 9 00:22:42.355871 containerd[1538]: time="2025-05-09T00:22:42.355679436Z" level=info msg="StopPodSandbox for \"64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce\""
May 9 00:22:42.468498 containerd[1538]: 2025-05-09 00:22:42.423 [INFO][4618] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64"
May 9 00:22:42.468498 containerd[1538]: 2025-05-09 00:22:42.423 [INFO][4618] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64" iface="eth0" netns="/var/run/netns/cni-1128ae96-5f73-dff7-e9e9-7c9a4a8b60aa"
May 9 00:22:42.468498 containerd[1538]: 2025-05-09 00:22:42.423 [INFO][4618] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64" iface="eth0" netns="/var/run/netns/cni-1128ae96-5f73-dff7-e9e9-7c9a4a8b60aa"
May 9 00:22:42.468498 containerd[1538]: 2025-05-09 00:22:42.423 [INFO][4618] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64" iface="eth0" netns="/var/run/netns/cni-1128ae96-5f73-dff7-e9e9-7c9a4a8b60aa"
May 9 00:22:42.468498 containerd[1538]: 2025-05-09 00:22:42.423 [INFO][4618] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64"
May 9 00:22:42.468498 containerd[1538]: 2025-05-09 00:22:42.423 [INFO][4618] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64"
May 9 00:22:42.468498 containerd[1538]: 2025-05-09 00:22:42.455 [INFO][4641] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64" HandleID="k8s-pod-network.2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64" Workload="localhost-k8s-csi--node--driver--c5864-eth0"
May 9 00:22:42.468498 containerd[1538]: 2025-05-09 00:22:42.455 [INFO][4641] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 9 00:22:42.468498 containerd[1538]: 2025-05-09 00:22:42.455 [INFO][4641] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 9 00:22:42.468498 containerd[1538]: 2025-05-09 00:22:42.463 [WARNING][4641] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64" HandleID="k8s-pod-network.2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64" Workload="localhost-k8s-csi--node--driver--c5864-eth0"
May 9 00:22:42.468498 containerd[1538]: 2025-05-09 00:22:42.463 [INFO][4641] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64" HandleID="k8s-pod-network.2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64" Workload="localhost-k8s-csi--node--driver--c5864-eth0"
May 9 00:22:42.468498 containerd[1538]: 2025-05-09 00:22:42.464 [INFO][4641] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 9 00:22:42.468498 containerd[1538]: 2025-05-09 00:22:42.467 [INFO][4618] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64"
May 9 00:22:42.469768 containerd[1538]: time="2025-05-09T00:22:42.468628793Z" level=info msg="TearDown network for sandbox \"2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64\" successfully"
May 9 00:22:42.469768 containerd[1538]: time="2025-05-09T00:22:42.468654552Z" level=info msg="StopPodSandbox for \"2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64\" returns successfully"
May 9 00:22:42.469768 containerd[1538]: time="2025-05-09T00:22:42.469266306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c5864,Uid:4c753734-0995-4fcd-9777-f094bc14fa3a,Namespace:calico-system,Attempt:1,}"
May 9 00:22:42.477003 containerd[1538]: 2025-05-09 00:22:42.418 [INFO][4613] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e"
May 9 00:22:42.477003 containerd[1538]: 2025-05-09 00:22:42.419 [INFO][4613] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e" iface="eth0" netns="/var/run/netns/cni-1fd7568f-8623-a70a-5ed5-945953f7a156"
May 9 00:22:42.477003 containerd[1538]: 2025-05-09 00:22:42.421 [INFO][4613] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e" iface="eth0" netns="/var/run/netns/cni-1fd7568f-8623-a70a-5ed5-945953f7a156"
May 9 00:22:42.477003 containerd[1538]: 2025-05-09 00:22:42.421 [INFO][4613] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e" iface="eth0" netns="/var/run/netns/cni-1fd7568f-8623-a70a-5ed5-945953f7a156"
May 9 00:22:42.477003 containerd[1538]: 2025-05-09 00:22:42.421 [INFO][4613] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e"
May 9 00:22:42.477003 containerd[1538]: 2025-05-09 00:22:42.421 [INFO][4613] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e"
May 9 00:22:42.477003 containerd[1538]: 2025-05-09 00:22:42.455 [INFO][4638] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e" HandleID="k8s-pod-network.0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e" Workload="localhost-k8s-calico--apiserver--578848dcfb--67brr-eth0"
May 9 00:22:42.477003 containerd[1538]: 2025-05-09 00:22:42.455 [INFO][4638] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 9 00:22:42.477003 containerd[1538]: 2025-05-09 00:22:42.464 [INFO][4638] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 9 00:22:42.477003 containerd[1538]: 2025-05-09 00:22:42.473 [WARNING][4638] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e" HandleID="k8s-pod-network.0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e" Workload="localhost-k8s-calico--apiserver--578848dcfb--67brr-eth0"
May 9 00:22:42.477003 containerd[1538]: 2025-05-09 00:22:42.473 [INFO][4638] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e" HandleID="k8s-pod-network.0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e" Workload="localhost-k8s-calico--apiserver--578848dcfb--67brr-eth0"
May 9 00:22:42.477003 containerd[1538]: 2025-05-09 00:22:42.474 [INFO][4638] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 9 00:22:42.477003 containerd[1538]: 2025-05-09 00:22:42.475 [INFO][4613] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e"
May 9 00:22:42.477616 containerd[1538]: time="2025-05-09T00:22:42.477502658Z" level=info msg="TearDown network for sandbox \"0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e\" successfully"
May 9 00:22:42.477616 containerd[1538]: time="2025-05-09T00:22:42.477527778Z" level=info msg="StopPodSandbox for \"0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e\" returns successfully"
May 9 00:22:42.478263 containerd[1538]: time="2025-05-09T00:22:42.478228690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-578848dcfb-67brr,Uid:35cd080a-2fa4-4c86-aa58-2bc34990adbf,Namespace:calico-apiserver,Attempt:1,}"
May 9 00:22:42.490059 kubelet[2717]: E0509 00:22:42.488601 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:22:42.493758 containerd[1538]: 2025-05-09 00:22:42.420 [INFO][4611] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce"
May 9 00:22:42.493758 containerd[1538]: 2025-05-09 00:22:42.420 [INFO][4611] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce" iface="eth0" netns="/var/run/netns/cni-3b70d0ae-e1ec-1794-e4fe-88fdb3abc789"
May 9 00:22:42.493758 containerd[1538]: 2025-05-09 00:22:42.421 [INFO][4611] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce" iface="eth0" netns="/var/run/netns/cni-3b70d0ae-e1ec-1794-e4fe-88fdb3abc789"
May 9 00:22:42.493758 containerd[1538]: 2025-05-09 00:22:42.422 [INFO][4611] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce" iface="eth0" netns="/var/run/netns/cni-3b70d0ae-e1ec-1794-e4fe-88fdb3abc789"
May 9 00:22:42.493758 containerd[1538]: 2025-05-09 00:22:42.422 [INFO][4611] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce"
May 9 00:22:42.493758 containerd[1538]: 2025-05-09 00:22:42.422 [INFO][4611] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce"
May 9 00:22:42.493758 containerd[1538]: 2025-05-09 00:22:42.455 [INFO][4639] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce" HandleID="k8s-pod-network.64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce" Workload="localhost-k8s-calico--apiserver--578848dcfb--6t55f-eth0"
May 9 00:22:42.493758 containerd[1538]: 2025-05-09 00:22:42.456 [INFO][4639] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 9 00:22:42.493758 containerd[1538]: 2025-05-09 00:22:42.474 [INFO][4639] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 9 00:22:42.493758 containerd[1538]: 2025-05-09 00:22:42.483 [WARNING][4639] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce" HandleID="k8s-pod-network.64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce" Workload="localhost-k8s-calico--apiserver--578848dcfb--6t55f-eth0"
May 9 00:22:42.493758 containerd[1538]: 2025-05-09 00:22:42.483 [INFO][4639] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce" HandleID="k8s-pod-network.64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce" Workload="localhost-k8s-calico--apiserver--578848dcfb--6t55f-eth0"
May 9 00:22:42.493758 containerd[1538]: 2025-05-09 00:22:42.485 [INFO][4639] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 9 00:22:42.493758 containerd[1538]: 2025-05-09 00:22:42.490 [INFO][4611] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce"
May 9 00:22:42.493758 containerd[1538]: time="2025-05-09T00:22:42.493239370Z" level=info msg="TearDown network for sandbox \"64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce\" successfully"
May 9 00:22:42.493758 containerd[1538]: time="2025-05-09T00:22:42.493265170Z" level=info msg="StopPodSandbox for \"64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce\" returns successfully"
May 9 00:22:42.494223 containerd[1538]: time="2025-05-09T00:22:42.494003082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-578848dcfb-6t55f,Uid:2b79fe7f-93ab-4e08-981f-3cf87817c0d8,Namespace:calico-apiserver,Attempt:1,}"
May 9 00:22:42.536854 kubelet[2717]: I0509 00:22:42.535960 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-r585w" podStartSLOduration=29.535941916 podStartE2EDuration="29.535941916s" podCreationTimestamp="2025-05-09 00:22:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:22:42.50271207 +0000 UTC m=+43.223996871" watchObservedRunningTime="2025-05-09 00:22:42.535941916 +0000 UTC m=+43.257226597"
May 9 00:22:42.567344 systemd[1]: run-netns-cni\x2d1128ae96\x2d5f73\x2ddff7\x2de9e9\x2d7c9a4a8b60aa.mount: Deactivated successfully.
May 9 00:22:42.567492 systemd[1]: run-netns-cni\x2d3b70d0ae\x2de1ec\x2d1794\x2de4fe\x2d88fdb3abc789.mount: Deactivated successfully.
May 9 00:22:42.567866 systemd[1]: run-netns-cni\x2d1fd7568f\x2d8623\x2da70a\x2d5ed5\x2d945953f7a156.mount: Deactivated successfully.
May 9 00:22:42.627979 systemd-networkd[1224]: cali1acc8e87414: Link UP
May 9 00:22:42.628169 systemd-networkd[1224]: cali1acc8e87414: Gained carrier
May 9 00:22:42.643705 containerd[1538]: 2025-05-09 00:22:42.514 [INFO][4662] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
May 9 00:22:42.643705 containerd[1538]: 2025-05-09 00:22:42.534 [INFO][4662] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--c5864-eth0 csi-node-driver- calico-system 4c753734-0995-4fcd-9777-f094bc14fa3a 936 0 2025-05-09 00:22:19 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-c5864 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1acc8e87414 [] []}} ContainerID="d65776edb9fd89b308e3e3663b3ea75fb8cfe72807b397aa253b26d77b336120" Namespace="calico-system" Pod="csi-node-driver-c5864" WorkloadEndpoint="localhost-k8s-csi--node--driver--c5864-"
May 9 00:22:42.643705 containerd[1538]: 2025-05-09 00:22:42.534 [INFO][4662] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d65776edb9fd89b308e3e3663b3ea75fb8cfe72807b397aa253b26d77b336120" Namespace="calico-system" Pod="csi-node-driver-c5864" WorkloadEndpoint="localhost-k8s-csi--node--driver--c5864-eth0"
May 9 00:22:42.643705 containerd[1538]: 2025-05-09 00:22:42.581 [INFO][4709] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d65776edb9fd89b308e3e3663b3ea75fb8cfe72807b397aa253b26d77b336120" HandleID="k8s-pod-network.d65776edb9fd89b308e3e3663b3ea75fb8cfe72807b397aa253b26d77b336120" Workload="localhost-k8s-csi--node--driver--c5864-eth0"
May 9 00:22:42.643705 containerd[1538]: 2025-05-09 00:22:42.596 [INFO][4709] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d65776edb9fd89b308e3e3663b3ea75fb8cfe72807b397aa253b26d77b336120" HandleID="k8s-pod-network.d65776edb9fd89b308e3e3663b3ea75fb8cfe72807b397aa253b26d77b336120" Workload="localhost-k8s-csi--node--driver--c5864-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000365b60), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-c5864", "timestamp":"2025-05-09 00:22:42.581648149 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
May 9 00:22:42.643705 containerd[1538]: 2025-05-09 00:22:42.596 [INFO][4709] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 9 00:22:42.643705 containerd[1538]: 2025-05-09 00:22:42.596 [INFO][4709] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 9 00:22:42.643705 containerd[1538]: 2025-05-09 00:22:42.596 [INFO][4709] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
May 9 00:22:42.643705 containerd[1538]: 2025-05-09 00:22:42.598 [INFO][4709] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d65776edb9fd89b308e3e3663b3ea75fb8cfe72807b397aa253b26d77b336120" host="localhost"
May 9 00:22:42.643705 containerd[1538]: 2025-05-09 00:22:42.603 [INFO][4709] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
May 9 00:22:42.643705 containerd[1538]: 2025-05-09 00:22:42.607 [INFO][4709] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
May 9 00:22:42.643705 containerd[1538]: 2025-05-09 00:22:42.609 [INFO][4709] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
May 9 00:22:42.643705 containerd[1538]: 2025-05-09 00:22:42.611 [INFO][4709] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
May 9 00:22:42.643705 containerd[1538]: 2025-05-09 00:22:42.611 [INFO][4709] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d65776edb9fd89b308e3e3663b3ea75fb8cfe72807b397aa253b26d77b336120" host="localhost"
May 9 00:22:42.643705 containerd[1538]: 2025-05-09 00:22:42.614 [INFO][4709] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d65776edb9fd89b308e3e3663b3ea75fb8cfe72807b397aa253b26d77b336120
May 9 00:22:42.643705 containerd[1538]: 2025-05-09 00:22:42.618 [INFO][4709] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d65776edb9fd89b308e3e3663b3ea75fb8cfe72807b397aa253b26d77b336120" host="localhost"
May 9 00:22:42.643705 containerd[1538]: 2025-05-09 00:22:42.623 [INFO][4709] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.d65776edb9fd89b308e3e3663b3ea75fb8cfe72807b397aa253b26d77b336120" host="localhost"
May 9 00:22:42.643705 containerd[1538]: 2025-05-09 00:22:42.623 [INFO][4709] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.d65776edb9fd89b308e3e3663b3ea75fb8cfe72807b397aa253b26d77b336120" host="localhost"
May 9 00:22:42.643705 containerd[1538]: 2025-05-09 00:22:42.623 [INFO][4709] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 9 00:22:42.643705 containerd[1538]: 2025-05-09 00:22:42.623 [INFO][4709] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="d65776edb9fd89b308e3e3663b3ea75fb8cfe72807b397aa253b26d77b336120" HandleID="k8s-pod-network.d65776edb9fd89b308e3e3663b3ea75fb8cfe72807b397aa253b26d77b336120" Workload="localhost-k8s-csi--node--driver--c5864-eth0"
May 9 00:22:42.644263 containerd[1538]: 2025-05-09 00:22:42.625 [INFO][4662] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d65776edb9fd89b308e3e3663b3ea75fb8cfe72807b397aa253b26d77b336120" Namespace="calico-system" Pod="csi-node-driver-c5864" WorkloadEndpoint="localhost-k8s-csi--node--driver--c5864-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--c5864-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4c753734-0995-4fcd-9777-f094bc14fa3a", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 22, 19, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-c5864", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1acc8e87414", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 9 00:22:42.644263 containerd[1538]: 2025-05-09 00:22:42.626 [INFO][4662] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="d65776edb9fd89b308e3e3663b3ea75fb8cfe72807b397aa253b26d77b336120" Namespace="calico-system" Pod="csi-node-driver-c5864" WorkloadEndpoint="localhost-k8s-csi--node--driver--c5864-eth0"
May 9 00:22:42.644263 containerd[1538]: 2025-05-09 00:22:42.626 [INFO][4662] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1acc8e87414 ContainerID="d65776edb9fd89b308e3e3663b3ea75fb8cfe72807b397aa253b26d77b336120" Namespace="calico-system" Pod="csi-node-driver-c5864" WorkloadEndpoint="localhost-k8s-csi--node--driver--c5864-eth0"
May 9 00:22:42.644263 containerd[1538]: 2025-05-09 00:22:42.628 [INFO][4662] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d65776edb9fd89b308e3e3663b3ea75fb8cfe72807b397aa253b26d77b336120" Namespace="calico-system" Pod="csi-node-driver-c5864" WorkloadEndpoint="localhost-k8s-csi--node--driver--c5864-eth0"
May 9 00:22:42.644263 containerd[1538]: 2025-05-09 00:22:42.628 [INFO][4662] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d65776edb9fd89b308e3e3663b3ea75fb8cfe72807b397aa253b26d77b336120" Namespace="calico-system" Pod="csi-node-driver-c5864" WorkloadEndpoint="localhost-k8s-csi--node--driver--c5864-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--c5864-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4c753734-0995-4fcd-9777-f094bc14fa3a", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 22, 19, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d65776edb9fd89b308e3e3663b3ea75fb8cfe72807b397aa253b26d77b336120", Pod:"csi-node-driver-c5864", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1acc8e87414", MAC:"0e:74:64:df:6e:04", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 9 00:22:42.644263 containerd[1538]: 2025-05-09 00:22:42.640 [INFO][4662] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d65776edb9fd89b308e3e3663b3ea75fb8cfe72807b397aa253b26d77b336120" Namespace="calico-system" Pod="csi-node-driver-c5864" WorkloadEndpoint="localhost-k8s-csi--node--driver--c5864-eth0"
May 9 00:22:42.671166 systemd-networkd[1224]: calie1ea9b6fbb8: Link UP
May 9 00:22:42.671891 systemd-networkd[1224]: calie1ea9b6fbb8: Gained carrier
May 9 00:22:42.676544 containerd[1538]: time="2025-05-09T00:22:42.676462659Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 9 00:22:42.676739 containerd[1538]: time="2025-05-09T00:22:42.676519218Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 9 00:22:42.676739 containerd[1538]: time="2025-05-09T00:22:42.676556698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 00:22:42.676739 containerd[1538]: time="2025-05-09T00:22:42.676663457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 00:22:42.688031 containerd[1538]: 2025-05-09 00:22:42.537 [INFO][4693] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
May 9 00:22:42.688031 containerd[1538]: 2025-05-09 00:22:42.557 [INFO][4693] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--578848dcfb--6t55f-eth0 calico-apiserver-578848dcfb- calico-apiserver 2b79fe7f-93ab-4e08-981f-3cf87817c0d8 935 0 2025-05-09 00:22:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:578848dcfb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-578848dcfb-6t55f eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie1ea9b6fbb8 [] []}} ContainerID="46e8fa59202747cd9a0c26d2581f6e6989c53c199e0168229ea760d45e9020c5" Namespace="calico-apiserver" Pod="calico-apiserver-578848dcfb-6t55f" WorkloadEndpoint="localhost-k8s-calico--apiserver--578848dcfb--6t55f-"
May 9 00:22:42.688031 containerd[1538]: 2025-05-09 00:22:42.557 [INFO][4693] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="46e8fa59202747cd9a0c26d2581f6e6989c53c199e0168229ea760d45e9020c5" Namespace="calico-apiserver" Pod="calico-apiserver-578848dcfb-6t55f" WorkloadEndpoint="localhost-k8s-calico--apiserver--578848dcfb--6t55f-eth0"
May 9 00:22:42.688031 containerd[1538]: 2025-05-09 00:22:42.594 [INFO][4717] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="46e8fa59202747cd9a0c26d2581f6e6989c53c199e0168229ea760d45e9020c5" HandleID="k8s-pod-network.46e8fa59202747cd9a0c26d2581f6e6989c53c199e0168229ea760d45e9020c5" Workload="localhost-k8s-calico--apiserver--578848dcfb--6t55f-eth0"
May 9 00:22:42.688031 containerd[1538]: 2025-05-09 00:22:42.606 [INFO][4717] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="46e8fa59202747cd9a0c26d2581f6e6989c53c199e0168229ea760d45e9020c5" HandleID="k8s-pod-network.46e8fa59202747cd9a0c26d2581f6e6989c53c199e0168229ea760d45e9020c5" Workload="localhost-k8s-calico--apiserver--578848dcfb--6t55f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002e5170), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-578848dcfb-6t55f", "timestamp":"2025-05-09 00:22:42.594311574 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
May 9 00:22:42.688031 containerd[1538]: 2025-05-09 00:22:42.606 [INFO][4717] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 9 00:22:42.688031 containerd[1538]: 2025-05-09 00:22:42.623 [INFO][4717] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 9 00:22:42.688031 containerd[1538]: 2025-05-09 00:22:42.623 [INFO][4717] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
May 9 00:22:42.688031 containerd[1538]: 2025-05-09 00:22:42.626 [INFO][4717] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.46e8fa59202747cd9a0c26d2581f6e6989c53c199e0168229ea760d45e9020c5" host="localhost"
May 9 00:22:42.688031 containerd[1538]: 2025-05-09 00:22:42.633 [INFO][4717] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
May 9 00:22:42.688031 containerd[1538]: 2025-05-09 00:22:42.643 [INFO][4717] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
May 9 00:22:42.688031 containerd[1538]: 2025-05-09 00:22:42.648 [INFO][4717] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
May 9 00:22:42.688031 containerd[1538]: 2025-05-09 00:22:42.653 [INFO][4717] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
May 9 00:22:42.688031 containerd[1538]: 2025-05-09 00:22:42.653 [INFO][4717] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.46e8fa59202747cd9a0c26d2581f6e6989c53c199e0168229ea760d45e9020c5" host="localhost"
May 9 00:22:42.688031 containerd[1538]: 2025-05-09 00:22:42.654 [INFO][4717] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.46e8fa59202747cd9a0c26d2581f6e6989c53c199e0168229ea760d45e9020c5
May 9 00:22:42.688031 containerd[1538]: 2025-05-09 00:22:42.659 [INFO][4717] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.46e8fa59202747cd9a0c26d2581f6e6989c53c199e0168229ea760d45e9020c5" host="localhost"
May 9 00:22:42.688031 containerd[1538]: 2025-05-09 00:22:42.664 [INFO][4717] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.46e8fa59202747cd9a0c26d2581f6e6989c53c199e0168229ea760d45e9020c5" host="localhost"
May 9 00:22:42.688031 containerd[1538]: 2025-05-09 00:22:42.665 [INFO][4717] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.46e8fa59202747cd9a0c26d2581f6e6989c53c199e0168229ea760d45e9020c5" host="localhost"
May 9 00:22:42.688031 containerd[1538]: 2025-05-09 00:22:42.665 [INFO][4717] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 9 00:22:42.688031 containerd[1538]: 2025-05-09 00:22:42.665 [INFO][4717] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="46e8fa59202747cd9a0c26d2581f6e6989c53c199e0168229ea760d45e9020c5" HandleID="k8s-pod-network.46e8fa59202747cd9a0c26d2581f6e6989c53c199e0168229ea760d45e9020c5" Workload="localhost-k8s-calico--apiserver--578848dcfb--6t55f-eth0" May 9 00:22:42.688788 containerd[1538]: 2025-05-09 00:22:42.667 [INFO][4693] cni-plugin/k8s.go 386: Populated endpoint ContainerID="46e8fa59202747cd9a0c26d2581f6e6989c53c199e0168229ea760d45e9020c5" Namespace="calico-apiserver" Pod="calico-apiserver-578848dcfb-6t55f" WorkloadEndpoint="localhost-k8s-calico--apiserver--578848dcfb--6t55f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--578848dcfb--6t55f-eth0", GenerateName:"calico-apiserver-578848dcfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"2b79fe7f-93ab-4e08-981f-3cf87817c0d8", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 22, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"578848dcfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-578848dcfb-6t55f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie1ea9b6fbb8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:22:42.688788 containerd[1538]: 2025-05-09 00:22:42.667 [INFO][4693] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="46e8fa59202747cd9a0c26d2581f6e6989c53c199e0168229ea760d45e9020c5" Namespace="calico-apiserver" Pod="calico-apiserver-578848dcfb-6t55f" WorkloadEndpoint="localhost-k8s-calico--apiserver--578848dcfb--6t55f-eth0" May 9 00:22:42.688788 containerd[1538]: 2025-05-09 00:22:42.667 [INFO][4693] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie1ea9b6fbb8 ContainerID="46e8fa59202747cd9a0c26d2581f6e6989c53c199e0168229ea760d45e9020c5" Namespace="calico-apiserver" Pod="calico-apiserver-578848dcfb-6t55f" WorkloadEndpoint="localhost-k8s-calico--apiserver--578848dcfb--6t55f-eth0" May 9 00:22:42.688788 containerd[1538]: 2025-05-09 00:22:42.671 [INFO][4693] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="46e8fa59202747cd9a0c26d2581f6e6989c53c199e0168229ea760d45e9020c5" Namespace="calico-apiserver" Pod="calico-apiserver-578848dcfb-6t55f" WorkloadEndpoint="localhost-k8s-calico--apiserver--578848dcfb--6t55f-eth0" May 9 00:22:42.688788 containerd[1538]: 2025-05-09 00:22:42.672 [INFO][4693] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="46e8fa59202747cd9a0c26d2581f6e6989c53c199e0168229ea760d45e9020c5" 
Namespace="calico-apiserver" Pod="calico-apiserver-578848dcfb-6t55f" WorkloadEndpoint="localhost-k8s-calico--apiserver--578848dcfb--6t55f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--578848dcfb--6t55f-eth0", GenerateName:"calico-apiserver-578848dcfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"2b79fe7f-93ab-4e08-981f-3cf87817c0d8", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 22, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"578848dcfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"46e8fa59202747cd9a0c26d2581f6e6989c53c199e0168229ea760d45e9020c5", Pod:"calico-apiserver-578848dcfb-6t55f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie1ea9b6fbb8", MAC:"a6:1d:86:37:84:0c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:22:42.688788 containerd[1538]: 2025-05-09 00:22:42.685 [INFO][4693] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="46e8fa59202747cd9a0c26d2581f6e6989c53c199e0168229ea760d45e9020c5" Namespace="calico-apiserver" Pod="calico-apiserver-578848dcfb-6t55f" WorkloadEndpoint="localhost-k8s-calico--apiserver--578848dcfb--6t55f-eth0" May 9 00:22:42.709174 systemd-networkd[1224]: cali6e5946a20da: Link UP May 9 00:22:42.709373 systemd-networkd[1224]: cali6e5946a20da: Gained carrier May 9 00:22:42.711059 containerd[1538]: time="2025-05-09T00:22:42.710536336Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:22:42.711059 containerd[1538]: time="2025-05-09T00:22:42.710977091Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:22:42.711264 containerd[1538]: time="2025-05-09T00:22:42.711088050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:22:42.711343 containerd[1538]: time="2025-05-09T00:22:42.711302208Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:22:42.712703 systemd-resolved[1432]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 9 00:22:42.722387 containerd[1538]: 2025-05-09 00:22:42.545 [INFO][4674] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 9 00:22:42.722387 containerd[1538]: 2025-05-09 00:22:42.563 [INFO][4674] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--578848dcfb--67brr-eth0 calico-apiserver-578848dcfb- calico-apiserver 35cd080a-2fa4-4c86-aa58-2bc34990adbf 934 0 2025-05-09 00:22:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:578848dcfb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-578848dcfb-67brr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6e5946a20da [] []}} ContainerID="bc6ca1230837f7be45ff3c2e44b844c61e88814bd3600b94f95e9d63a1661ea4" Namespace="calico-apiserver" Pod="calico-apiserver-578848dcfb-67brr" WorkloadEndpoint="localhost-k8s-calico--apiserver--578848dcfb--67brr-" May 9 00:22:42.722387 containerd[1538]: 2025-05-09 00:22:42.563 [INFO][4674] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bc6ca1230837f7be45ff3c2e44b844c61e88814bd3600b94f95e9d63a1661ea4" Namespace="calico-apiserver" Pod="calico-apiserver-578848dcfb-67brr" WorkloadEndpoint="localhost-k8s-calico--apiserver--578848dcfb--67brr-eth0" May 9 00:22:42.722387 containerd[1538]: 2025-05-09 00:22:42.607 [INFO][4726] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bc6ca1230837f7be45ff3c2e44b844c61e88814bd3600b94f95e9d63a1661ea4" HandleID="k8s-pod-network.bc6ca1230837f7be45ff3c2e44b844c61e88814bd3600b94f95e9d63a1661ea4" Workload="localhost-k8s-calico--apiserver--578848dcfb--67brr-eth0" May 9 00:22:42.722387 containerd[1538]: 2025-05-09 00:22:42.618 [INFO][4726] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bc6ca1230837f7be45ff3c2e44b844c61e88814bd3600b94f95e9d63a1661ea4" HandleID="k8s-pod-network.bc6ca1230837f7be45ff3c2e44b844c61e88814bd3600b94f95e9d63a1661ea4" Workload="localhost-k8s-calico--apiserver--578848dcfb--67brr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005a1730), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-578848dcfb-67brr", "timestamp":"2025-05-09 00:22:42.607202517 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 9 00:22:42.722387 containerd[1538]: 2025-05-09 00:22:42.618 [INFO][4726] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:22:42.722387 containerd[1538]: 2025-05-09 00:22:42.665 [INFO][4726] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
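[The IPAM request above is dumped verbatim as a Go struct literal. As a readability aid, here is a trimmed, illustrative mirror of the fields that appear in that dump — this is not the real projectcalico `ipam.AutoAssignArgs` type (which, as the log shows, uses a `*string` HandleID, among other differences); it only restates what the log line prints:]

```go
package main

import "fmt"

// AutoAssignArgs is a trimmed, illustrative mirror of the request Calico's
// IPAM plugin logs above. It is NOT the real projectcalico type; it exists
// only so the request shape is readable outside a single log line.
type AutoAssignArgs struct {
	Num4, Num6  int               // how many IPv4/IPv6 addresses to assign
	HandleID    string            // "k8s-pod-network.<containerID>"
	Attrs       map[string]string // namespace, node, pod, timestamp
	Hostname    string
	IntendedUse string
}

func main() {
	// Values copied from the request logged for calico-apiserver-578848dcfb-67brr.
	req := AutoAssignArgs{
		Num4:     1,
		Num6:     0,
		HandleID: "k8s-pod-network.bc6ca1230837f7be45ff3c2e44b844c61e88814bd3600b94f95e9d63a1661ea4",
		Attrs: map[string]string{
			"namespace": "calico-apiserver",
			"node":      "localhost",
			"pod":       "calico-apiserver-578848dcfb-67brr",
			"timestamp": "2025-05-09 00:22:42.607202517 +0000 UTC",
		},
		Hostname:    "localhost",
		IntendedUse: "Workload",
	}
	fmt.Printf("%+v\n", req)
}
```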
May 9 00:22:42.722387 containerd[1538]: 2025-05-09 00:22:42.665 [INFO][4726] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 9 00:22:42.722387 containerd[1538]: 2025-05-09 00:22:42.667 [INFO][4726] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bc6ca1230837f7be45ff3c2e44b844c61e88814bd3600b94f95e9d63a1661ea4" host="localhost" May 9 00:22:42.722387 containerd[1538]: 2025-05-09 00:22:42.677 [INFO][4726] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 9 00:22:42.722387 containerd[1538]: 2025-05-09 00:22:42.683 [INFO][4726] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 9 00:22:42.722387 containerd[1538]: 2025-05-09 00:22:42.685 [INFO][4726] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 9 00:22:42.722387 containerd[1538]: 2025-05-09 00:22:42.689 [INFO][4726] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 9 00:22:42.722387 containerd[1538]: 2025-05-09 00:22:42.689 [INFO][4726] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bc6ca1230837f7be45ff3c2e44b844c61e88814bd3600b94f95e9d63a1661ea4" host="localhost" May 9 00:22:42.722387 containerd[1538]: 2025-05-09 00:22:42.691 [INFO][4726] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bc6ca1230837f7be45ff3c2e44b844c61e88814bd3600b94f95e9d63a1661ea4 May 9 00:22:42.722387 containerd[1538]: 2025-05-09 00:22:42.697 [INFO][4726] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bc6ca1230837f7be45ff3c2e44b844c61e88814bd3600b94f95e9d63a1661ea4" host="localhost" May 9 00:22:42.722387 containerd[1538]: 2025-05-09 00:22:42.704 [INFO][4726] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.bc6ca1230837f7be45ff3c2e44b844c61e88814bd3600b94f95e9d63a1661ea4" host="localhost" May 9 00:22:42.722387 containerd[1538]: 2025-05-09 00:22:42.704 [INFO][4726] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.bc6ca1230837f7be45ff3c2e44b844c61e88814bd3600b94f95e9d63a1661ea4" host="localhost" May 9 00:22:42.722387 containerd[1538]: 2025-05-09 00:22:42.704 [INFO][4726] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
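[A note on the arithmetic in the block above: 192.168.88.128/26 spans .128 through .191, i.e. 2^(32-26) = 64 addresses, and the pods on this node claim them in ascending order (.131 earlier in the log, .132 here). A quick standard-library check, nothing Calico-specific:]

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")

	// 2^(32-26) = 64 addresses in the block.
	fmt.Println("block size:", 1<<(32-block.Bits())) // 64

	// The addresses claimed in this log, all inside the block.
	for _, s := range []string{"192.168.88.131", "192.168.88.132", "192.168.88.133", "192.168.88.134"} {
		addr := netip.MustParseAddr(s)
		fmt.Println(addr, "in block:", block.Contains(addr)) // true
	}
}
```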
May 9 00:22:42.722387 containerd[1538]: 2025-05-09 00:22:42.704 [INFO][4726] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="bc6ca1230837f7be45ff3c2e44b844c61e88814bd3600b94f95e9d63a1661ea4" HandleID="k8s-pod-network.bc6ca1230837f7be45ff3c2e44b844c61e88814bd3600b94f95e9d63a1661ea4" Workload="localhost-k8s-calico--apiserver--578848dcfb--67brr-eth0" May 9 00:22:42.723134 containerd[1538]: 2025-05-09 00:22:42.706 [INFO][4674] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bc6ca1230837f7be45ff3c2e44b844c61e88814bd3600b94f95e9d63a1661ea4" Namespace="calico-apiserver" Pod="calico-apiserver-578848dcfb-67brr" WorkloadEndpoint="localhost-k8s-calico--apiserver--578848dcfb--67brr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--578848dcfb--67brr-eth0", GenerateName:"calico-apiserver-578848dcfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"35cd080a-2fa4-4c86-aa58-2bc34990adbf", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 22, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"578848dcfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-578848dcfb-67brr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6e5946a20da", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:22:42.723134 containerd[1538]: 2025-05-09 00:22:42.707 [INFO][4674] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="bc6ca1230837f7be45ff3c2e44b844c61e88814bd3600b94f95e9d63a1661ea4" Namespace="calico-apiserver" Pod="calico-apiserver-578848dcfb-67brr" WorkloadEndpoint="localhost-k8s-calico--apiserver--578848dcfb--67brr-eth0" May 9 00:22:42.723134 containerd[1538]: 2025-05-09 00:22:42.707 [INFO][4674] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6e5946a20da ContainerID="bc6ca1230837f7be45ff3c2e44b844c61e88814bd3600b94f95e9d63a1661ea4" Namespace="calico-apiserver" Pod="calico-apiserver-578848dcfb-67brr" WorkloadEndpoint="localhost-k8s-calico--apiserver--578848dcfb--67brr-eth0" May 9 00:22:42.723134 containerd[1538]: 2025-05-09 00:22:42.709 [INFO][4674] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bc6ca1230837f7be45ff3c2e44b844c61e88814bd3600b94f95e9d63a1661ea4" Namespace="calico-apiserver" Pod="calico-apiserver-578848dcfb-67brr" WorkloadEndpoint="localhost-k8s-calico--apiserver--578848dcfb--67brr-eth0" May 9 00:22:42.723134 containerd[1538]: 2025-05-09 00:22:42.709 [INFO][4674] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="bc6ca1230837f7be45ff3c2e44b844c61e88814bd3600b94f95e9d63a1661ea4" 
Namespace="calico-apiserver" Pod="calico-apiserver-578848dcfb-67brr" WorkloadEndpoint="localhost-k8s-calico--apiserver--578848dcfb--67brr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--578848dcfb--67brr-eth0", GenerateName:"calico-apiserver-578848dcfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"35cd080a-2fa4-4c86-aa58-2bc34990adbf", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 22, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"578848dcfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bc6ca1230837f7be45ff3c2e44b844c61e88814bd3600b94f95e9d63a1661ea4", Pod:"calico-apiserver-578848dcfb-67brr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6e5946a20da", MAC:"92:5a:69:97:fe:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:22:42.723134 containerd[1538]: 2025-05-09 00:22:42.720 [INFO][4674] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bc6ca1230837f7be45ff3c2e44b844c61e88814bd3600b94f95e9d63a1661ea4" Namespace="calico-apiserver" Pod="calico-apiserver-578848dcfb-67brr" WorkloadEndpoint="localhost-k8s-calico--apiserver--578848dcfb--67brr-eth0" May 9 00:22:42.740187 systemd-resolved[1432]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 9 00:22:42.743682 containerd[1538]: time="2025-05-09T00:22:42.742769073Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:22:42.743682 containerd[1538]: time="2025-05-09T00:22:42.742838032Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:22:42.743682 containerd[1538]: time="2025-05-09T00:22:42.742848792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:22:42.743682 containerd[1538]: time="2025-05-09T00:22:42.742925711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:22:42.756888 containerd[1538]: time="2025-05-09T00:22:42.756839323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c5864,Uid:4c753734-0995-4fcd-9777-f094bc14fa3a,Namespace:calico-system,Attempt:1,} returns sandbox id \"d65776edb9fd89b308e3e3663b3ea75fb8cfe72807b397aa253b26d77b336120\"" May 9 00:22:42.759552 containerd[1538]: time="2025-05-09T00:22:42.759497975Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 9 00:22:42.763218 containerd[1538]: time="2025-05-09T00:22:42.763183815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-578848dcfb-6t55f,Uid:2b79fe7f-93ab-4e08-981f-3cf87817c0d8,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"46e8fa59202747cd9a0c26d2581f6e6989c53c199e0168229ea760d45e9020c5\"" May 9 00:22:42.768266 systemd-resolved[1432]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 9 00:22:42.784096 containerd[1538]: time="2025-05-09T00:22:42.784049913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-578848dcfb-67brr,Uid:35cd080a-2fa4-4c86-aa58-2bc34990adbf,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"bc6ca1230837f7be45ff3c2e44b844c61e88814bd3600b94f95e9d63a1661ea4\"" May 9 00:22:42.864410 kubelet[2717]: I0509 00:22:42.864225 2717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 9 00:22:42.864917 kubelet[2717]: E0509 00:22:42.864898 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:22:43.050606 kernel: bpftool[4915]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 9 00:22:43.157823 systemd-networkd[1224]: calic9cc19bc975: Gained IPv6LL May 9 00:22:43.199466 systemd-networkd[1224]: vxlan.calico: Link UP May 9 00:22:43.199472 systemd-networkd[1224]: vxlan.calico: Gained carrier May 9 00:22:43.357860 containerd[1538]: time="2025-05-09T00:22:43.356092270Z" level=info msg="StopPodSandbox for \"730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2\"" May 9 00:22:43.357860 containerd[1538]: time="2025-05-09T00:22:43.356137749Z" level=info msg="StopPodSandbox for \"5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4\"" May 9 00:22:43.496490 kubelet[2717]: E0509 00:22:43.496458 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:22:43.498143 kubelet[2717]: E0509 00:22:43.498023 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:22:43.514399 containerd[1538]: 2025-05-09 00:22:43.451 [INFO][5027] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4" May 9 00:22:43.514399 containerd[1538]: 2025-05-09 00:22:43.453 [INFO][5027] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4" iface="eth0" netns="/var/run/netns/cni-39a3eb83-0419-acca-5934-c690dd206ea0" May 9 00:22:43.514399 containerd[1538]: 2025-05-09 00:22:43.453 [INFO][5027] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4" iface="eth0" netns="/var/run/netns/cni-39a3eb83-0419-acca-5934-c690dd206ea0" May 9 00:22:43.514399 containerd[1538]: 2025-05-09 00:22:43.454 [INFO][5027] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4" iface="eth0" netns="/var/run/netns/cni-39a3eb83-0419-acca-5934-c690dd206ea0" May 9 00:22:43.514399 containerd[1538]: 2025-05-09 00:22:43.454 [INFO][5027] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4" May 9 00:22:43.514399 containerd[1538]: 2025-05-09 00:22:43.454 [INFO][5027] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4" May 9 00:22:43.514399 containerd[1538]: 2025-05-09 00:22:43.496 [INFO][5068] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4" HandleID="k8s-pod-network.5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4" Workload="localhost-k8s-coredns--7db6d8ff4d--l97jq-eth0" May 9 00:22:43.514399 containerd[1538]: 2025-05-09 00:22:43.496 [INFO][5068] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:22:43.514399 containerd[1538]: 2025-05-09 00:22:43.496 [INFO][5068] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:22:43.514399 containerd[1538]: 2025-05-09 00:22:43.507 [WARNING][5068] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4" HandleID="k8s-pod-network.5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4" Workload="localhost-k8s-coredns--7db6d8ff4d--l97jq-eth0" May 9 00:22:43.514399 containerd[1538]: 2025-05-09 00:22:43.507 [INFO][5068] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4" HandleID="k8s-pod-network.5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4" Workload="localhost-k8s-coredns--7db6d8ff4d--l97jq-eth0" May 9 00:22:43.514399 containerd[1538]: 2025-05-09 00:22:43.509 [INFO][5068] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:22:43.514399 containerd[1538]: 2025-05-09 00:22:43.512 [INFO][5027] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4" May 9 00:22:43.514874 containerd[1538]: time="2025-05-09T00:22:43.514546822Z" level=info msg="TearDown network for sandbox \"5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4\" successfully" May 9 00:22:43.514874 containerd[1538]: time="2025-05-09T00:22:43.514572262Z" level=info msg="StopPodSandbox for \"5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4\" returns successfully" May 9 00:22:43.514927 kubelet[2717]: E0509 00:22:43.514861 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:22:43.516157 containerd[1538]: time="2025-05-09T00:22:43.516114286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-l97jq,Uid:1e86186f-4466-4f52-abab-bf5cb10cf167,Namespace:kube-system,Attempt:1,}" May 9 00:22:43.526568 containerd[1538]: 2025-05-09 00:22:43.462 [INFO][5005] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2" May 9 00:22:43.526568 containerd[1538]: 2025-05-09 00:22:43.462 [INFO][5005] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2" iface="eth0" netns="/var/run/netns/cni-5717d704-d6a7-1062-9ef8-19c719561f72" May 9 00:22:43.526568 containerd[1538]: 2025-05-09 00:22:43.463 [INFO][5005] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2" iface="eth0" netns="/var/run/netns/cni-5717d704-d6a7-1062-9ef8-19c719561f72" May 9 00:22:43.526568 containerd[1538]: 2025-05-09 00:22:43.463 [INFO][5005] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2" iface="eth0" netns="/var/run/netns/cni-5717d704-d6a7-1062-9ef8-19c719561f72" May 9 00:22:43.526568 containerd[1538]: 2025-05-09 00:22:43.463 [INFO][5005] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2" May 9 00:22:43.526568 containerd[1538]: 2025-05-09 00:22:43.463 [INFO][5005] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2" May 9 00:22:43.526568 containerd[1538]: 2025-05-09 00:22:43.505 [INFO][5079] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2" HandleID="k8s-pod-network.730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2" Workload="localhost-k8s-calico--kube--controllers--85b589894b--bpbf6-eth0" May 9 00:22:43.526568 containerd[1538]: 2025-05-09 00:22:43.505 [INFO][5079] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:22:43.526568 containerd[1538]: 2025-05-09 00:22:43.509 [INFO][5079] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:22:43.526568 containerd[1538]: 2025-05-09 00:22:43.521 [WARNING][5079] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2" HandleID="k8s-pod-network.730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2" Workload="localhost-k8s-calico--kube--controllers--85b589894b--bpbf6-eth0" May 9 00:22:43.526568 containerd[1538]: 2025-05-09 00:22:43.521 [INFO][5079] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2" HandleID="k8s-pod-network.730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2" Workload="localhost-k8s-calico--kube--controllers--85b589894b--bpbf6-eth0" May 9 00:22:43.526568 containerd[1538]: 2025-05-09 00:22:43.522 [INFO][5079] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:22:43.526568 containerd[1538]: 2025-05-09 00:22:43.524 [INFO][5005] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2" May 9 00:22:43.527041 containerd[1538]: time="2025-05-09T00:22:43.526709455Z" level=info msg="TearDown network for sandbox \"730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2\" successfully" May 9 00:22:43.527041 containerd[1538]: time="2025-05-09T00:22:43.526733695Z" level=info msg="StopPodSandbox for \"730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2\" returns successfully" May 9 00:22:43.527866 containerd[1538]: time="2025-05-09T00:22:43.527731685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85b589894b-bpbf6,Uid:38013c7d-6040-44b1-bf5b-b5ceecbabd97,Namespace:calico-system,Attempt:1,}" May 9 00:22:43.563884 systemd[1]: run-netns-cni\x2d39a3eb83\x2d0419\x2dacca\x2d5934\x2dc690dd206ea0.mount: Deactivated successfully. May 9 00:22:43.564015 systemd[1]: run-netns-cni\x2d5717d704\x2dd6a7\x2d1062\x2d9ef8\x2d19c719561f72.mount: Deactivated successfully. 
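[The teardown above is deliberately idempotent: the DEL path releases the allocation by handle ID, falls back to the workload ID, and the WARNING ("Asked to release address but it doesn't exist. Ignoring") shows that a missing allocation is tolerated so the teardown still completes. A toy sketch of that behaviour, assuming a plain map as a stand-in datastore — not Calico's code:]

```go
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("allocation not found")

// releaseByHandle stands in for the IPAM release keyed by
// "k8s-pod-network.<containerID>"; the map is a toy datastore.
func releaseByHandle(store map[string][]string, handle string) error {
	if _, ok := store[handle]; !ok {
		return errNotFound
	}
	delete(store, handle)
	return nil
}

func main() {
	store := map[string][]string{} // empty: the address was never written, or was cleaned up earlier

	handle := "k8s-pod-network.5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4"
	if err := releaseByHandle(store, handle); errors.Is(err, errNotFound) {
		// Mirror the log: warn and carry on, so the DEL still succeeds.
		fmt.Println("WARNING: asked to release address but it doesn't exist; ignoring")
	}
	fmt.Println("teardown processing complete")
}
```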
May 9 00:22:43.640501 systemd-networkd[1224]: calib2dc8549adf: Link UP May 9 00:22:43.640917 systemd-networkd[1224]: calib2dc8549adf: Gained carrier May 9 00:22:43.655339 containerd[1538]: 2025-05-09 00:22:43.568 [INFO][5100] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--l97jq-eth0 coredns-7db6d8ff4d- kube-system 1e86186f-4466-4f52-abab-bf5cb10cf167 969 0 2025-05-09 00:22:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-l97jq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib2dc8549adf [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="fe9c171cdfabf344bf4d2fad8ff7c87c568e2106f2176a333085e9db6ba93d66" Namespace="kube-system" Pod="coredns-7db6d8ff4d-l97jq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--l97jq-" May 9 00:22:43.655339 containerd[1538]: 2025-05-09 00:22:43.568 [INFO][5100] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fe9c171cdfabf344bf4d2fad8ff7c87c568e2106f2176a333085e9db6ba93d66" Namespace="kube-system" Pod="coredns-7db6d8ff4d-l97jq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--l97jq-eth0" May 9 00:22:43.655339 containerd[1538]: 2025-05-09 00:22:43.598 [INFO][5128] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fe9c171cdfabf344bf4d2fad8ff7c87c568e2106f2176a333085e9db6ba93d66" HandleID="k8s-pod-network.fe9c171cdfabf344bf4d2fad8ff7c87c568e2106f2176a333085e9db6ba93d66" Workload="localhost-k8s-coredns--7db6d8ff4d--l97jq-eth0" May 9 00:22:43.655339 containerd[1538]: 2025-05-09 00:22:43.610 [INFO][5128] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fe9c171cdfabf344bf4d2fad8ff7c87c568e2106f2176a333085e9db6ba93d66" HandleID="k8s-pod-network.fe9c171cdfabf344bf4d2fad8ff7c87c568e2106f2176a333085e9db6ba93d66" Workload="localhost-k8s-coredns--7db6d8ff4d--l97jq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000482ab0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-l97jq", "timestamp":"2025-05-09 00:22:43.598698787 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 9 00:22:43.655339 containerd[1538]: 2025-05-09 00:22:43.610 [INFO][5128] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:22:43.655339 containerd[1538]: 2025-05-09 00:22:43.610 [INFO][5128] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
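[The coredns endpoint above carries three ports — dns 53/UDP, dns-tcp 53/TCP, and metrics 9153/TCP, the conventional CoreDNS Prometheus port. When the same endpoint struct is dumped further down, Go prints the port numbers in hex, so 0x35 and 0x23c1 there are these same two values:]

```go
package main

import "fmt"

func main() {
	fmt.Println(0x35)   // 53   -> the "dns" and "dns-tcp" ports
	fmt.Println(0x23c1) // 9153 -> the "metrics" port CoreDNS serves Prometheus on
}
```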
May 9 00:22:43.655339 containerd[1538]: 2025-05-09 00:22:43.610 [INFO][5128] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 9 00:22:43.655339 containerd[1538]: 2025-05-09 00:22:43.612 [INFO][5128] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fe9c171cdfabf344bf4d2fad8ff7c87c568e2106f2176a333085e9db6ba93d66" host="localhost" May 9 00:22:43.655339 containerd[1538]: 2025-05-09 00:22:43.618 [INFO][5128] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 9 00:22:43.655339 containerd[1538]: 2025-05-09 00:22:43.621 [INFO][5128] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 9 00:22:43.655339 containerd[1538]: 2025-05-09 00:22:43.623 [INFO][5128] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 9 00:22:43.655339 containerd[1538]: 2025-05-09 00:22:43.625 [INFO][5128] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 9 00:22:43.655339 containerd[1538]: 2025-05-09 00:22:43.625 [INFO][5128] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fe9c171cdfabf344bf4d2fad8ff7c87c568e2106f2176a333085e9db6ba93d66" host="localhost" May 9 00:22:43.655339 containerd[1538]: 2025-05-09 00:22:43.626 [INFO][5128] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fe9c171cdfabf344bf4d2fad8ff7c87c568e2106f2176a333085e9db6ba93d66 May 9 00:22:43.655339 containerd[1538]: 2025-05-09 00:22:43.630 [INFO][5128] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fe9c171cdfabf344bf4d2fad8ff7c87c568e2106f2176a333085e9db6ba93d66" host="localhost" May 9 00:22:43.655339 containerd[1538]: 2025-05-09 00:22:43.635 [INFO][5128] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.fe9c171cdfabf344bf4d2fad8ff7c87c568e2106f2176a333085e9db6ba93d66" host="localhost" May 9 00:22:43.655339 containerd[1538]: 2025-05-09 00:22:43.635 [INFO][5128] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.fe9c171cdfabf344bf4d2fad8ff7c87c568e2106f2176a333085e9db6ba93d66" host="localhost" May 9 00:22:43.655339 containerd[1538]: 2025-05-09 00:22:43.635 [INFO][5128] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
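[Two naming conventions are visible in this span. The IPAM handle is simply the literal prefix "k8s-pod-network." plus the container ID, and the host-side interfaces ("calib2dc8549adf" and friends) are "cali" plus 11 hex characters — 15 characters, the longest name Linux accepts (IFNAMSIZ is 16 including the terminating NUL). The sketch below reproduces the handle exactly; deriving the 11 characters from a hash of the endpoint ID is an assumption, since the log alone does not show their origin:]

```go
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// handleID reproduces the naming visible in the log: the IPAM handle is the
// literal prefix "k8s-pod-network." plus the container ID.
func handleID(containerID string) string {
	return "k8s-pod-network." + containerID
}

// vethName sketches one plausible derivation of names like "calib2dc8549adf":
// "cali" + 11 hex chars = 15 chars, the IFNAMSIZ limit. Hashing the endpoint
// ID is an assumption here, not something the log confirms.
func vethName(endpointID string) string {
	sum := sha1.Sum([]byte(endpointID))
	return "cali" + hex.EncodeToString(sum[:])[:11]
}

func main() {
	id := "fe9c171cdfabf344bf4d2fad8ff7c87c568e2106f2176a333085e9db6ba93d66"
	fmt.Println(handleID(id))
	fmt.Println(vethName("localhost-k8s-coredns--7db6d8ff4d--l97jq-eth0"), "(illustrative only)")
}
```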
May 9 00:22:43.655339 containerd[1538]: 2025-05-09 00:22:43.635 [INFO][5128] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="fe9c171cdfabf344bf4d2fad8ff7c87c568e2106f2176a333085e9db6ba93d66" HandleID="k8s-pod-network.fe9c171cdfabf344bf4d2fad8ff7c87c568e2106f2176a333085e9db6ba93d66" Workload="localhost-k8s-coredns--7db6d8ff4d--l97jq-eth0" May 9 00:22:43.656297 containerd[1538]: 2025-05-09 00:22:43.638 [INFO][5100] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fe9c171cdfabf344bf4d2fad8ff7c87c568e2106f2176a333085e9db6ba93d66" Namespace="kube-system" Pod="coredns-7db6d8ff4d-l97jq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--l97jq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--l97jq-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"1e86186f-4466-4f52-abab-bf5cb10cf167", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 22, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-l97jq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib2dc8549adf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:22:43.656297 containerd[1538]: 2025-05-09 00:22:43.638 [INFO][5100] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="fe9c171cdfabf344bf4d2fad8ff7c87c568e2106f2176a333085e9db6ba93d66" Namespace="kube-system" Pod="coredns-7db6d8ff4d-l97jq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--l97jq-eth0" May 9 00:22:43.656297 containerd[1538]: 2025-05-09 00:22:43.638 [INFO][5100] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib2dc8549adf ContainerID="fe9c171cdfabf344bf4d2fad8ff7c87c568e2106f2176a333085e9db6ba93d66" Namespace="kube-system" Pod="coredns-7db6d8ff4d-l97jq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--l97jq-eth0" May 9 00:22:43.656297 containerd[1538]: 2025-05-09 00:22:43.640 [INFO][5100] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fe9c171cdfabf344bf4d2fad8ff7c87c568e2106f2176a333085e9db6ba93d66" Namespace="kube-system" Pod="coredns-7db6d8ff4d-l97jq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--l97jq-eth0" May 9 00:22:43.656297 containerd[1538]: 2025-05-09 00:22:43.642 [INFO][5100] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fe9c171cdfabf344bf4d2fad8ff7c87c568e2106f2176a333085e9db6ba93d66" Namespace="kube-system" Pod="coredns-7db6d8ff4d-l97jq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--l97jq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--l97jq-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"1e86186f-4466-4f52-abab-bf5cb10cf167", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 22, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fe9c171cdfabf344bf4d2fad8ff7c87c568e2106f2176a333085e9db6ba93d66", Pod:"coredns-7db6d8ff4d-l97jq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib2dc8549adf", MAC:"aa:c4:38:62:2a:32", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:22:43.656297 containerd[1538]: 2025-05-09 00:22:43.652 [INFO][5100] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="fe9c171cdfabf344bf4d2fad8ff7c87c568e2106f2176a333085e9db6ba93d66" Namespace="kube-system" Pod="coredns-7db6d8ff4d-l97jq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--l97jq-eth0" May 9 00:22:43.677396 systemd-networkd[1224]: caliad323ada641: Link UP May 9 00:22:43.677656 systemd-networkd[1224]: caliad323ada641: Gained carrier May 9 00:22:43.678959 containerd[1538]: time="2025-05-09T00:22:43.678852273Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:22:43.679059 containerd[1538]: time="2025-05-09T00:22:43.678964392Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:22:43.679059 containerd[1538]: time="2025-05-09T00:22:43.679012272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:22:43.680504 containerd[1538]: time="2025-05-09T00:22:43.679193510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:22:43.692482 containerd[1538]: 2025-05-09 00:22:43.570 [INFO][5111] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--85b589894b--bpbf6-eth0 calico-kube-controllers-85b589894b- calico-system 38013c7d-6040-44b1-bf5b-b5ceecbabd97 970 0 2025-05-09 00:22:19 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:85b589894b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-85b589894b-bpbf6 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliad323ada641 [] []}} ContainerID="cca64ab07ea94bda179c35015352d0a2cce4aaaf59b25c0fb6f19ee7fef87050" Namespace="calico-system" Pod="calico-kube-controllers-85b589894b-bpbf6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85b589894b--bpbf6-" May 9 00:22:43.692482 containerd[1538]: 2025-05-09 00:22:43.570 [INFO][5111] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cca64ab07ea94bda179c35015352d0a2cce4aaaf59b25c0fb6f19ee7fef87050" Namespace="calico-system" Pod="calico-kube-controllers-85b589894b-bpbf6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85b589894b--bpbf6-eth0" May 9 00:22:43.692482 containerd[1538]: 2025-05-09 00:22:43.600 [INFO][5134] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cca64ab07ea94bda179c35015352d0a2cce4aaaf59b25c0fb6f19ee7fef87050" HandleID="k8s-pod-network.cca64ab07ea94bda179c35015352d0a2cce4aaaf59b25c0fb6f19ee7fef87050" Workload="localhost-k8s-calico--kube--controllers--85b589894b--bpbf6-eth0" May 9 00:22:43.692482 containerd[1538]: 2025-05-09 00:22:43.617 [INFO][5134] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cca64ab07ea94bda179c35015352d0a2cce4aaaf59b25c0fb6f19ee7fef87050" HandleID="k8s-pod-network.cca64ab07ea94bda179c35015352d0a2cce4aaaf59b25c0fb6f19ee7fef87050" Workload="localhost-k8s-calico--kube--controllers--85b589894b--bpbf6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000362c80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-85b589894b-bpbf6", "timestamp":"2025-05-09 00:22:43.600658446 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 9 00:22:43.692482 containerd[1538]: 2025-05-09 00:22:43.617 [INFO][5134] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:22:43.692482 containerd[1538]: 2025-05-09 00:22:43.635 [INFO][5134] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
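[Note the interleaving above: request [5134] logged "About to acquire host-wide IPAM lock" at 43.617 but only acquired it at 43.635, immediately after [5128] released it. The lock serializes concurrent CNI ADDs so a first-free scan over the block cannot hand the same address to two pods. A toy allocator making the same point, with a mutex standing in for the host-wide lock:]

```go
package main

import (
	"fmt"
	"sync"
)

// A toy first-free allocator over the 64 host offsets of 192.168.88.128/26.
// The mutex plays the role of the "host-wide IPAM lock" in the log: it
// serializes the two concurrent CNI ADDs ([5128] and [5134]) so they cannot
// claim the same address.
type allocator struct {
	mu   sync.Mutex
	used [64]bool
}

func (a *allocator) assign() int {
	a.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer a.mu.Unlock() // "Released host-wide IPAM lock."
	for off := range a.used {
		if !a.used[off] {
			a.used[off] = true
			return 128 + off // last octet of the assigned address
		}
	}
	return -1 // block exhausted
}

func main() {
	var a allocator
	for off := 0; off < 5; off++ { // .128-.132 already in use at this point in the log
		a.used[off] = true
	}

	var wg sync.WaitGroup
	for _, pod := range []string{"coredns-7db6d8ff4d-l97jq", "calico-kube-controllers-85b589894b-bpbf6"} {
		wg.Add(1)
		go func(pod string) {
			defer wg.Done()
			fmt.Printf("%s -> 192.168.88.%d\n", pod, a.assign())
		}(pod)
	}
	wg.Wait() // prints .133 and .134, in some order
}
```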
May 9 00:22:43.692482 containerd[1538]: 2025-05-09 00:22:43.636 [INFO][5134] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 9 00:22:43.692482 containerd[1538]: 2025-05-09 00:22:43.638 [INFO][5134] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cca64ab07ea94bda179c35015352d0a2cce4aaaf59b25c0fb6f19ee7fef87050" host="localhost" May 9 00:22:43.692482 containerd[1538]: 2025-05-09 00:22:43.645 [INFO][5134] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 9 00:22:43.692482 containerd[1538]: 2025-05-09 00:22:43.652 [INFO][5134] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 9 00:22:43.692482 containerd[1538]: 2025-05-09 00:22:43.655 [INFO][5134] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 9 00:22:43.692482 containerd[1538]: 2025-05-09 00:22:43.658 [INFO][5134] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 9 00:22:43.692482 containerd[1538]: 2025-05-09 00:22:43.658 [INFO][5134] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cca64ab07ea94bda179c35015352d0a2cce4aaaf59b25c0fb6f19ee7fef87050" host="localhost" May 9 00:22:43.692482 containerd[1538]: 2025-05-09 00:22:43.659 [INFO][5134] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.cca64ab07ea94bda179c35015352d0a2cce4aaaf59b25c0fb6f19ee7fef87050 May 9 00:22:43.692482 containerd[1538]: 2025-05-09 00:22:43.663 [INFO][5134] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cca64ab07ea94bda179c35015352d0a2cce4aaaf59b25c0fb6f19ee7fef87050" host="localhost" May 9 00:22:43.692482 containerd[1538]: 2025-05-09 00:22:43.670 [INFO][5134] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.cca64ab07ea94bda179c35015352d0a2cce4aaaf59b25c0fb6f19ee7fef87050" host="localhost" May 9 00:22:43.692482 containerd[1538]: 2025-05-09 00:22:43.671 [INFO][5134] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.cca64ab07ea94bda179c35015352d0a2cce4aaaf59b25c0fb6f19ee7fef87050" host="localhost" May 9 00:22:43.692482 containerd[1538]: 2025-05-09 00:22:43.671 [INFO][5134] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
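[One detail worth flagging in the result that follows: IPAM claims "192.168.88.134/26" (the address together with its block's mask), while the workload endpoint records it as "192.168.88.134/32", a single-host prefix — consistent with Calico routing each workload individually rather than bridging pods onto a shared subnet. The conversion is trivial:]

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// IPAM hands back the address together with its block's mask...
	assigned := netip.MustParsePrefix("192.168.88.134/26")

	// ...but the workload endpoint stores it as a host route (/32),
	// matching the IPNetworks field in the log.
	host := netip.PrefixFrom(assigned.Addr(), 32)
	fmt.Println(host) // 192.168.88.134/32
}
```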
May 9 00:22:43.692482 containerd[1538]: 2025-05-09 00:22:43.671 [INFO][5134] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="cca64ab07ea94bda179c35015352d0a2cce4aaaf59b25c0fb6f19ee7fef87050" HandleID="k8s-pod-network.cca64ab07ea94bda179c35015352d0a2cce4aaaf59b25c0fb6f19ee7fef87050" Workload="localhost-k8s-calico--kube--controllers--85b589894b--bpbf6-eth0" May 9 00:22:43.693085 containerd[1538]: 2025-05-09 00:22:43.673 [INFO][5111] cni-plugin/k8s.go 386: Populated endpoint ContainerID="cca64ab07ea94bda179c35015352d0a2cce4aaaf59b25c0fb6f19ee7fef87050" Namespace="calico-system" Pod="calico-kube-controllers-85b589894b-bpbf6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85b589894b--bpbf6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--85b589894b--bpbf6-eth0", GenerateName:"calico-kube-controllers-85b589894b-", Namespace:"calico-system", SelfLink:"", UID:"38013c7d-6040-44b1-bf5b-b5ceecbabd97", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 22, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85b589894b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-85b589894b-bpbf6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliad323ada641", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:22:43.693085 containerd[1538]: 2025-05-09 00:22:43.673 [INFO][5111] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="cca64ab07ea94bda179c35015352d0a2cce4aaaf59b25c0fb6f19ee7fef87050" Namespace="calico-system" Pod="calico-kube-controllers-85b589894b-bpbf6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85b589894b--bpbf6-eth0" May 9 00:22:43.693085 containerd[1538]: 2025-05-09 00:22:43.673 [INFO][5111] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliad323ada641 ContainerID="cca64ab07ea94bda179c35015352d0a2cce4aaaf59b25c0fb6f19ee7fef87050" Namespace="calico-system" Pod="calico-kube-controllers-85b589894b-bpbf6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85b589894b--bpbf6-eth0" May 9 00:22:43.693085 containerd[1538]: 2025-05-09 00:22:43.676 [INFO][5111] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cca64ab07ea94bda179c35015352d0a2cce4aaaf59b25c0fb6f19ee7fef87050" Namespace="calico-system" Pod="calico-kube-controllers-85b589894b-bpbf6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85b589894b--bpbf6-eth0" May 9 00:22:43.693085 containerd[1538]: 2025-05-09 00:22:43.676 [INFO][5111] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="cca64ab07ea94bda179c35015352d0a2cce4aaaf59b25c0fb6f19ee7fef87050" Namespace="calico-system" Pod="calico-kube-controllers-85b589894b-bpbf6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85b589894b--bpbf6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--85b589894b--bpbf6-eth0", GenerateName:"calico-kube-controllers-85b589894b-", Namespace:"calico-system", SelfLink:"", UID:"38013c7d-6040-44b1-bf5b-b5ceecbabd97", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 22, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85b589894b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cca64ab07ea94bda179c35015352d0a2cce4aaaf59b25c0fb6f19ee7fef87050", Pod:"calico-kube-controllers-85b589894b-bpbf6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliad323ada641", MAC:"42:e8:7f:be:e7:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:22:43.693085 containerd[1538]: 2025-05-09 00:22:43.689 [INFO][5111] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="cca64ab07ea94bda179c35015352d0a2cce4aaaf59b25c0fb6f19ee7fef87050" Namespace="calico-system" Pod="calico-kube-controllers-85b589894b-bpbf6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85b589894b--bpbf6-eth0" May 9 00:22:43.707513 systemd-resolved[1432]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 9 00:22:43.715174 containerd[1538]: time="2025-05-09T00:22:43.714539422Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:22:43.715174 containerd[1538]: time="2025-05-09T00:22:43.714973618Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:22:43.715174 containerd[1538]: time="2025-05-09T00:22:43.714987697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:22:43.715307 containerd[1538]: time="2025-05-09T00:22:43.715115416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:22:43.737976 systemd-resolved[1432]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 9 00:22:43.745798 containerd[1538]: time="2025-05-09T00:22:43.745759257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-l97jq,Uid:1e86186f-4466-4f52-abab-bf5cb10cf167,Namespace:kube-system,Attempt:1,} returns sandbox id \"fe9c171cdfabf344bf4d2fad8ff7c87c568e2106f2176a333085e9db6ba93d66\"" May 9 00:22:43.747187 kubelet[2717]: E0509 00:22:43.747158 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:22:43.757594 containerd[1538]: time="2025-05-09T00:22:43.757544775Z" level=info msg="CreateContainer within sandbox \"fe9c171cdfabf344bf4d2fad8ff7c87c568e2106f2176a333085e9db6ba93d66\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 9 00:22:43.770968 containerd[1538]: time="2025-05-09T00:22:43.770834677Z" level=info msg="CreateContainer within sandbox \"fe9c171cdfabf344bf4d2fad8ff7c87c568e2106f2176a333085e9db6ba93d66\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ee4af96335a30a9f1b735cd2a12b350a7b433ff7002549409cd75f6c5b934fd1\"" May 9 00:22:43.771682 containerd[1538]: time="2025-05-09T00:22:43.771543949Z" level=info msg="StartContainer for \"ee4af96335a30a9f1b735cd2a12b350a7b433ff7002549409cd75f6c5b934fd1\"" May 9 00:22:43.777836 containerd[1538]: time="2025-05-09T00:22:43.777807764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85b589894b-bpbf6,Uid:38013c7d-6040-44b1-bf5b-b5ceecbabd97,Namespace:calico-system,Attempt:1,} returns sandbox id \"cca64ab07ea94bda179c35015352d0a2cce4aaaf59b25c0fb6f19ee7fef87050\"" May 9 00:22:43.821047 containerd[1538]: time="2025-05-09T00:22:43.820998995Z" level=info msg="StartContainer for \"ee4af96335a30a9f1b735cd2a12b350a7b433ff7002549409cd75f6c5b934fd1\" returns successfully" May 9 00:22:43.892145 containerd[1538]: time="2025-05-09T00:22:43.892032256Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:22:43.892881 containerd[1538]: time="2025-05-09T00:22:43.892842608Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" May 9 00:22:43.893756 containerd[1538]: time="2025-05-09T00:22:43.893722879Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:22:43.895664 containerd[1538]: time="2025-05-09T00:22:43.895635019Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:22:43.896605 containerd[1538]: time="2025-05-09T00:22:43.896514730Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 1.136976436s" May 9 00:22:43.896605 containerd[1538]: time="2025-05-09T00:22:43.896547089Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" May 9 00:22:43.898609 containerd[1538]: time="2025-05-09T00:22:43.897901555Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 9 00:22:43.899713 containerd[1538]: time="2025-05-09T00:22:43.899683817Z" level=info msg="CreateContainer within sandbox \"d65776edb9fd89b308e3e3663b3ea75fb8cfe72807b397aa253b26d77b336120\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 9 00:22:43.924738 containerd[1538]: time="2025-05-09T00:22:43.924689637Z" level=info msg="CreateContainer within sandbox \"d65776edb9fd89b308e3e3663b3ea75fb8cfe72807b397aa253b26d77b336120\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"199c87bd62d032e76ca7e6e2232f6886ce2d3b3e193f5f0923c00873d2b857ac\"" May 9 00:22:43.925397 containerd[1538]: time="2025-05-09T00:22:43.925350470Z" level=info msg="StartContainer for \"199c87bd62d032e76ca7e6e2232f6886ce2d3b3e193f5f0923c00873d2b857ac\"" May 9 00:22:43.993339 containerd[1538]: time="2025-05-09T00:22:43.993297523Z" level=info msg="StartContainer for \"199c87bd62d032e76ca7e6e2232f6886ce2d3b3e193f5f0923c00873d2b857ac\" returns successfully" May 9 00:22:44.181721 systemd-networkd[1224]: cali1acc8e87414: Gained IPv6LL May 9 00:22:44.373727 systemd-networkd[1224]: vxlan.calico: Gained IPv6LL May 9 00:22:44.437738 systemd-networkd[1224]: cali6e5946a20da: Gained IPv6LL May 9 00:22:44.499982 kubelet[2717]: E0509 00:22:44.499815 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:22:44.506756 kubelet[2717]: E0509 00:22:44.506724 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:22:44.511257 kubelet[2717]: I0509 00:22:44.510922 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-l97jq" podStartSLOduration=31.51090626 podStartE2EDuration="31.51090626s" podCreationTimestamp="2025-05-09 00:22:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:22:44.510508064 +0000 UTC m=+45.231792745" watchObservedRunningTime="2025-05-09 00:22:44.51090626 +0000 UTC m=+45.232190941" May 9 00:22:44.629775 systemd-networkd[1224]: calie1ea9b6fbb8: Gained IPv6LL May 9 00:22:44.876692 systemd[1]: Started sshd@12-10.0.0.135:22-10.0.0.1:57632.service - OpenSSH per-connection server daemon (10.0.0.1:57632). May 9 00:22:44.916988 sshd[5340]: Accepted publickey for core from 10.0.0.1 port 57632 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 9 00:22:44.918532 sshd[5340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:22:44.922534 systemd-logind[1519]: New session 13 of user core. May 9 00:22:44.936818 systemd[1]: Started session-13.scope - Session 13 of User core. May 9 00:22:45.144481 sshd[5340]: pam_unix(sshd:session): session closed for user core May 9 00:22:45.148201 systemd[1]: sshd@12-10.0.0.135:22-10.0.0.1:57632.service: Deactivated successfully. May 9 00:22:45.150440 systemd-logind[1519]: Session 13 logged out. Waiting for processes to exit. May 9 00:22:45.150657 systemd[1]: session-13.scope: Deactivated successfully. 
May 9 00:22:45.152464 systemd-logind[1519]: Removed session 13. May 9 00:22:45.461842 systemd-networkd[1224]: calib2dc8549adf: Gained IPv6LL May 9 00:22:45.508467 kubelet[2717]: E0509 00:22:45.508377 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:22:45.521701 containerd[1538]: time="2025-05-09T00:22:45.521650462Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:22:45.522962 containerd[1538]: time="2025-05-09T00:22:45.522746531Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" May 9 00:22:45.523624 containerd[1538]: time="2025-05-09T00:22:45.523574643Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:22:45.525858 containerd[1538]: time="2025-05-09T00:22:45.525818500Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:22:45.527198 containerd[1538]: time="2025-05-09T00:22:45.527154727Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 1.628711337s" May 9 00:22:45.527198 containerd[1538]: time="2025-05-09T00:22:45.527193807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 9 00:22:45.528989 containerd[1538]: time="2025-05-09T00:22:45.528810710Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 9 00:22:45.529907 containerd[1538]: time="2025-05-09T00:22:45.529861820Z" level=info msg="CreateContainer within sandbox \"46e8fa59202747cd9a0c26d2581f6e6989c53c199e0168229ea760d45e9020c5\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 9 00:22:45.542103 containerd[1538]: time="2025-05-09T00:22:45.542061579Z" level=info msg="CreateContainer within sandbox \"46e8fa59202747cd9a0c26d2581f6e6989c53c199e0168229ea760d45e9020c5\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b9570c7a8aeae1c8b876696dfb7d9c864d833b225af45b1f1c066da01dd9e7b8\"" May 9 00:22:45.542561 containerd[1538]: time="2025-05-09T00:22:45.542535134Z" level=info msg="StartContainer for \"b9570c7a8aeae1c8b876696dfb7d9c864d833b225af45b1f1c066da01dd9e7b8\"" May 9 00:22:45.626084 containerd[1538]: time="2025-05-09T00:22:45.626035184Z" level=info msg="StartContainer for \"b9570c7a8aeae1c8b876696dfb7d9c864d833b225af45b1f1c066da01dd9e7b8\" returns successfully" May 9 00:22:45.717715 systemd-networkd[1224]: caliad323ada641: Gained IPv6LL May 9 00:22:45.777602 containerd[1538]: time="2025-05-09T00:22:45.777539157Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:22:45.778179 containerd[1538]: 
time="2025-05-09T00:22:45.777991033Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 9 00:22:45.780698 containerd[1538]: time="2025-05-09T00:22:45.780664046Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 251.809136ms" May 9 00:22:45.780764 containerd[1538]: time="2025-05-09T00:22:45.780698966Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 9 00:22:45.781545 containerd[1538]: time="2025-05-09T00:22:45.781513758Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 9 00:22:45.783244 containerd[1538]: time="2025-05-09T00:22:45.783209661Z" level=info msg="CreateContainer within sandbox \"bc6ca1230837f7be45ff3c2e44b844c61e88814bd3600b94f95e9d63a1661ea4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 9 00:22:45.792734 containerd[1538]: time="2025-05-09T00:22:45.792683287Z" level=info msg="CreateContainer within sandbox \"bc6ca1230837f7be45ff3c2e44b844c61e88814bd3600b94f95e9d63a1661ea4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f111e82266e61d7616011051ecc0691b5d32209e3e1bea08066380fa82f7ec90\"" May 9 00:22:45.793173 containerd[1538]: time="2025-05-09T00:22:45.793149722Z" level=info msg="StartContainer for \"f111e82266e61d7616011051ecc0691b5d32209e3e1bea08066380fa82f7ec90\"" May 9 00:22:45.848071 containerd[1538]: time="2025-05-09T00:22:45.847942177Z" level=info msg="StartContainer for \"f111e82266e61d7616011051ecc0691b5d32209e3e1bea08066380fa82f7ec90\" returns successfully" May 9 00:22:46.526955 kubelet[2717]: E0509 00:22:46.526928 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:22:46.532403 kubelet[2717]: I0509 00:22:46.532357 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-578848dcfb-67brr" podStartSLOduration=24.535990266 podStartE2EDuration="27.532340362s" podCreationTimestamp="2025-05-09 00:22:19 +0000 UTC" firstStartedPulling="2025-05-09 00:22:42.785012983 +0000 UTC m=+43.506297624" lastFinishedPulling="2025-05-09 00:22:45.781363079 +0000 UTC m=+46.502647720" observedRunningTime="2025-05-09 00:22:46.530320741 +0000 UTC m=+47.251605422" watchObservedRunningTime="2025-05-09 00:22:46.532340362 +0000 UTC m=+47.253625043" May 9 00:22:47.354281 containerd[1538]: time="2025-05-09T00:22:47.354234468Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:22:47.355009 containerd[1538]: time="2025-05-09T00:22:47.354974660Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116" May 9 00:22:47.356216 containerd[1538]: time="2025-05-09T00:22:47.356174729Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 
9 00:22:47.358597 containerd[1538]: time="2025-05-09T00:22:47.358552346Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:22:47.359498 containerd[1538]: time="2025-05-09T00:22:47.359452978Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 1.577906901s" May 9 00:22:47.359544 containerd[1538]: time="2025-05-09T00:22:47.359491657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" May 9 00:22:47.360632 containerd[1538]: time="2025-05-09T00:22:47.360514968Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 9 00:22:47.366032 containerd[1538]: time="2025-05-09T00:22:47.365980675Z" level=info msg="CreateContainer within sandbox \"cca64ab07ea94bda179c35015352d0a2cce4aaaf59b25c0fb6f19ee7fef87050\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 9 00:22:47.380375 containerd[1538]: time="2025-05-09T00:22:47.380343858Z" level=info msg="CreateContainer within sandbox \"cca64ab07ea94bda179c35015352d0a2cce4aaaf59b25c0fb6f19ee7fef87050\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1562fafb90f175953ce63049cde6a8b003ab2ba281dda52cbfde512a1d387bfa\"" May 9 00:22:47.381019 containerd[1538]: time="2025-05-09T00:22:47.380802694Z" level=info msg="StartContainer for \"1562fafb90f175953ce63049cde6a8b003ab2ba281dda52cbfde512a1d387bfa\"" May 9 00:22:47.432055 containerd[1538]: time="2025-05-09T00:22:47.432019765Z" level=info msg="StartContainer for \"1562fafb90f175953ce63049cde6a8b003ab2ba281dda52cbfde512a1d387bfa\" returns successfully" May 9 00:22:47.531809 kubelet[2717]: I0509 00:22:47.531592 2717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 9 00:22:47.532979 kubelet[2717]: I0509 00:22:47.532558 2717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 9 00:22:47.549651 kubelet[2717]: I0509 00:22:47.549595 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-578848dcfb-6t55f" podStartSLOduration=25.78571633 podStartE2EDuration="28.549569803s" podCreationTimestamp="2025-05-09 00:22:19 +0000 UTC" firstStartedPulling="2025-05-09 00:22:42.764192885 +0000 UTC m=+43.485477566" lastFinishedPulling="2025-05-09 00:22:45.528046358 +0000 UTC m=+46.249331039" observedRunningTime="2025-05-09 00:22:46.540511202 +0000 UTC m=+47.261795843" watchObservedRunningTime="2025-05-09 00:22:47.549569803 +0000 UTC m=+48.270854484" May 9 00:22:47.550000 kubelet[2717]: I0509 00:22:47.549958 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-85b589894b-bpbf6" podStartSLOduration=24.969670531 podStartE2EDuration="28.54995304s" podCreationTimestamp="2025-05-09 00:22:19 +0000 UTC" firstStartedPulling="2025-05-09 00:22:43.779893102 +0000 UTC m=+44.501177743" lastFinishedPulling="2025-05-09 00:22:47.360175571 +0000 UTC 
m=+48.081460252" observedRunningTime="2025-05-09 00:22:47.549449965 +0000 UTC m=+48.270734646" watchObservedRunningTime="2025-05-09 00:22:47.54995304 +0000 UTC m=+48.271237721" May 9 00:22:48.440794 containerd[1538]: time="2025-05-09T00:22:48.440742139Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:22:48.441605 containerd[1538]: time="2025-05-09T00:22:48.441243494Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" May 9 00:22:48.442115 containerd[1538]: time="2025-05-09T00:22:48.442090046Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:22:48.444675 containerd[1538]: time="2025-05-09T00:22:48.444632982Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:22:48.445485 containerd[1538]: time="2025-05-09T00:22:48.445450375Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 1.084901688s" May 9 00:22:48.445610 containerd[1538]: time="2025-05-09T00:22:48.445562934Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" May 9 00:22:48.454266 containerd[1538]: time="2025-05-09T00:22:48.454208613Z" level=info msg="CreateContainer within sandbox \"d65776edb9fd89b308e3e3663b3ea75fb8cfe72807b397aa253b26d77b336120\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 9 00:22:48.466744 containerd[1538]: time="2025-05-09T00:22:48.466707336Z" level=info msg="CreateContainer within sandbox \"d65776edb9fd89b308e3e3663b3ea75fb8cfe72807b397aa253b26d77b336120\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"cb268fa2ebbf5a651fb48128ec46bd53b64e13253a2c8536361a8eafb454c881\"" May 9 00:22:48.467395 containerd[1538]: time="2025-05-09T00:22:48.467334130Z" level=info msg="StartContainer for \"cb268fa2ebbf5a651fb48128ec46bd53b64e13253a2c8536361a8eafb454c881\"" May 9 00:22:48.508750 containerd[1538]: time="2025-05-09T00:22:48.508693543Z" level=info msg="StartContainer for \"cb268fa2ebbf5a651fb48128ec46bd53b64e13253a2c8536361a8eafb454c881\" returns successfully" May 9 00:22:48.545341 kubelet[2717]: I0509 00:22:48.545282 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-c5864" podStartSLOduration=23.857089862 podStartE2EDuration="29.54526636s" podCreationTimestamp="2025-05-09 00:22:19 +0000 UTC" firstStartedPulling="2025-05-09 00:22:42.758219828 +0000 UTC m=+43.479504509" lastFinishedPulling="2025-05-09 00:22:48.446396326 +0000 UTC m=+49.167681007" observedRunningTime="2025-05-09 00:22:48.544526647 +0000 UTC m=+49.265811328" watchObservedRunningTime="2025-05-09 00:22:48.54526636 +0000 UTC m=+49.266551041" May 9 
00:22:49.447180 kubelet[2717]: I0509 00:22:49.447142 2717 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 9 00:22:49.450667 kubelet[2717]: I0509 00:22:49.450643 2717 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 9 00:22:50.154970 systemd[1]: Started sshd@13-10.0.0.135:22-10.0.0.1:57644.service - OpenSSH per-connection server daemon (10.0.0.1:57644). May 9 00:22:50.205074 sshd[5581]: Accepted publickey for core from 10.0.0.1 port 57644 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 9 00:22:50.206666 sshd[5581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:22:50.211751 systemd-logind[1519]: New session 14 of user core. May 9 00:22:50.218807 systemd[1]: Started session-14.scope - Session 14 of User core. May 9 00:22:50.381704 sshd[5581]: pam_unix(sshd:session): session closed for user core May 9 00:22:50.384909 systemd[1]: sshd@13-10.0.0.135:22-10.0.0.1:57644.service: Deactivated successfully. May 9 00:22:50.387785 systemd[1]: session-14.scope: Deactivated successfully. May 9 00:22:50.389107 systemd-logind[1519]: Session 14 logged out. Waiting for processes to exit. May 9 00:22:50.390249 systemd-logind[1519]: Removed session 14. May 9 00:22:51.029123 kubelet[2717]: I0509 00:22:51.028968 2717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 9 00:22:55.392826 systemd[1]: Started sshd@14-10.0.0.135:22-10.0.0.1:60302.service - OpenSSH per-connection server daemon (10.0.0.1:60302). May 9 00:22:55.432269 sshd[5608]: Accepted publickey for core from 10.0.0.1 port 60302 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 9 00:22:55.433518 sshd[5608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:22:55.437636 systemd-logind[1519]: New session 15 of user core. May 9 00:22:55.449892 systemd[1]: Started session-15.scope - Session 15 of User core. May 9 00:22:55.604650 sshd[5608]: pam_unix(sshd:session): session closed for user core May 9 00:22:55.607132 systemd[1]: sshd@14-10.0.0.135:22-10.0.0.1:60302.service: Deactivated successfully. May 9 00:22:55.609831 systemd[1]: session-15.scope: Deactivated successfully. May 9 00:22:55.610703 systemd-logind[1519]: Session 15 logged out. Waiting for processes to exit. May 9 00:22:55.611805 systemd-logind[1519]: Removed session 15. May 9 00:22:59.352316 containerd[1538]: time="2025-05-09T00:22:59.352010099Z" level=info msg="StopPodSandbox for \"97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c\"" May 9 00:22:59.451118 containerd[1538]: 2025-05-09 00:22:59.416 [WARNING][5641] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--r585w-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"a2ed08d3-7f7d-48a2-9e9c-5f86ae30999a", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 22, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1109d602a0a9f37fde13ad6401b58ddae28af15e52abe712e1bd821264b270df", Pod:"coredns-7db6d8ff4d-r585w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic9cc19bc975", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:22:59.451118 containerd[1538]: 2025-05-09 00:22:59.416 [INFO][5641] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c" May 9 00:22:59.451118 containerd[1538]: 2025-05-09 00:22:59.416 [INFO][5641] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c" iface="eth0" netns="" May 9 00:22:59.451118 containerd[1538]: 2025-05-09 00:22:59.416 [INFO][5641] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c" May 9 00:22:59.451118 containerd[1538]: 2025-05-09 00:22:59.416 [INFO][5641] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c" May 9 00:22:59.451118 containerd[1538]: 2025-05-09 00:22:59.437 [INFO][5650] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c" HandleID="k8s-pod-network.97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c" Workload="localhost-k8s-coredns--7db6d8ff4d--r585w-eth0" May 9 00:22:59.451118 containerd[1538]: 2025-05-09 00:22:59.437 [INFO][5650] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:22:59.451118 containerd[1538]: 2025-05-09 00:22:59.437 [INFO][5650] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:22:59.451118 containerd[1538]: 2025-05-09 00:22:59.446 [WARNING][5650] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c" HandleID="k8s-pod-network.97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c" Workload="localhost-k8s-coredns--7db6d8ff4d--r585w-eth0" May 9 00:22:59.451118 containerd[1538]: 2025-05-09 00:22:59.446 [INFO][5650] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c" HandleID="k8s-pod-network.97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c" Workload="localhost-k8s-coredns--7db6d8ff4d--r585w-eth0" May 9 00:22:59.451118 containerd[1538]: 2025-05-09 00:22:59.447 [INFO][5650] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:22:59.451118 containerd[1538]: 2025-05-09 00:22:59.449 [INFO][5641] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c" May 9 00:22:59.453951 containerd[1538]: time="2025-05-09T00:22:59.451157828Z" level=info msg="TearDown network for sandbox \"97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c\" successfully" May 9 00:22:59.453951 containerd[1538]: time="2025-05-09T00:22:59.451185028Z" level=info msg="StopPodSandbox for \"97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c\" returns successfully" May 9 00:22:59.453951 containerd[1538]: time="2025-05-09T00:22:59.453499970Z" level=info msg="RemovePodSandbox for \"97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c\"" May 9 00:22:59.455769 containerd[1538]: time="2025-05-09T00:22:59.455737672Z" level=info msg="Forcibly stopping sandbox \"97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c\"" May 9 00:22:59.524235 containerd[1538]: 2025-05-09 00:22:59.490 [WARNING][5673] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--r585w-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"a2ed08d3-7f7d-48a2-9e9c-5f86ae30999a", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 22, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1109d602a0a9f37fde13ad6401b58ddae28af15e52abe712e1bd821264b270df", Pod:"coredns-7db6d8ff4d-r585w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic9cc19bc975", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:22:59.524235 containerd[1538]: 2025-05-09 00:22:59.490 [INFO][5673] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c" May 9 00:22:59.524235 containerd[1538]: 2025-05-09 00:22:59.490 [INFO][5673] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c" iface="eth0" netns="" May 9 00:22:59.524235 containerd[1538]: 2025-05-09 00:22:59.490 [INFO][5673] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c" May 9 00:22:59.524235 containerd[1538]: 2025-05-09 00:22:59.490 [INFO][5673] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c" May 9 00:22:59.524235 containerd[1538]: 2025-05-09 00:22:59.511 [INFO][5681] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c" HandleID="k8s-pod-network.97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c" Workload="localhost-k8s-coredns--7db6d8ff4d--r585w-eth0" May 9 00:22:59.524235 containerd[1538]: 2025-05-09 00:22:59.511 [INFO][5681] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:22:59.524235 containerd[1538]: 2025-05-09 00:22:59.511 [INFO][5681] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:22:59.524235 containerd[1538]: 2025-05-09 00:22:59.519 [WARNING][5681] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c" HandleID="k8s-pod-network.97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c" Workload="localhost-k8s-coredns--7db6d8ff4d--r585w-eth0" May 9 00:22:59.524235 containerd[1538]: 2025-05-09 00:22:59.519 [INFO][5681] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c" HandleID="k8s-pod-network.97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c" Workload="localhost-k8s-coredns--7db6d8ff4d--r585w-eth0" May 9 00:22:59.524235 containerd[1538]: 2025-05-09 00:22:59.521 [INFO][5681] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:22:59.524235 containerd[1538]: 2025-05-09 00:22:59.522 [INFO][5673] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c" May 9 00:22:59.524719 containerd[1538]: time="2025-05-09T00:22:59.524271645Z" level=info msg="TearDown network for sandbox \"97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c\" successfully" May 9 00:22:59.527105 containerd[1538]: time="2025-05-09T00:22:59.527076303Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 9 00:22:59.527169 containerd[1538]: time="2025-05-09T00:22:59.527133423Z" level=info msg="RemovePodSandbox \"97c6b803ecbea30caf5a816527439195753560cf7561414e19cbbf40847e812c\" returns successfully" May 9 00:22:59.527772 containerd[1538]: time="2025-05-09T00:22:59.527744738Z" level=info msg="StopPodSandbox for \"5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4\"" May 9 00:22:59.601570 containerd[1538]: 2025-05-09 00:22:59.566 [WARNING][5705] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--l97jq-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"1e86186f-4466-4f52-abab-bf5cb10cf167", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 22, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fe9c171cdfabf344bf4d2fad8ff7c87c568e2106f2176a333085e9db6ba93d66", Pod:"coredns-7db6d8ff4d-l97jq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib2dc8549adf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:22:59.601570 containerd[1538]: 2025-05-09 00:22:59.567 [INFO][5705] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4" May 9 00:22:59.601570 containerd[1538]: 2025-05-09 00:22:59.567 [INFO][5705] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4" iface="eth0" netns="" May 9 00:22:59.601570 containerd[1538]: 2025-05-09 00:22:59.567 [INFO][5705] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4" May 9 00:22:59.601570 containerd[1538]: 2025-05-09 00:22:59.568 [INFO][5705] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4" May 9 00:22:59.601570 containerd[1538]: 2025-05-09 00:22:59.588 [INFO][5714] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4" HandleID="k8s-pod-network.5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4" Workload="localhost-k8s-coredns--7db6d8ff4d--l97jq-eth0" May 9 00:22:59.601570 containerd[1538]: 2025-05-09 00:22:59.589 [INFO][5714] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:22:59.601570 containerd[1538]: 2025-05-09 00:22:59.589 [INFO][5714] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:22:59.601570 containerd[1538]: 2025-05-09 00:22:59.596 [WARNING][5714] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4" HandleID="k8s-pod-network.5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4" Workload="localhost-k8s-coredns--7db6d8ff4d--l97jq-eth0" May 9 00:22:59.601570 containerd[1538]: 2025-05-09 00:22:59.596 [INFO][5714] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4" HandleID="k8s-pod-network.5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4" Workload="localhost-k8s-coredns--7db6d8ff4d--l97jq-eth0" May 9 00:22:59.601570 containerd[1538]: 2025-05-09 00:22:59.598 [INFO][5714] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:22:59.601570 containerd[1538]: 2025-05-09 00:22:59.599 [INFO][5705] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4" May 9 00:22:59.601570 containerd[1538]: time="2025-05-09T00:22:59.601455310Z" level=info msg="TearDown network for sandbox \"5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4\" successfully" May 9 00:22:59.601570 containerd[1538]: time="2025-05-09T00:22:59.601479070Z" level=info msg="StopPodSandbox for \"5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4\" returns successfully" May 9 00:22:59.602109 containerd[1538]: time="2025-05-09T00:22:59.602066825Z" level=info msg="RemovePodSandbox for \"5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4\"" May 9 00:22:59.602109 containerd[1538]: time="2025-05-09T00:22:59.602099145Z" level=info msg="Forcibly stopping sandbox \"5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4\"" May 9 00:22:59.666743 containerd[1538]: 2025-05-09 00:22:59.634 [WARNING][5736] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--l97jq-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"1e86186f-4466-4f52-abab-bf5cb10cf167", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 22, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fe9c171cdfabf344bf4d2fad8ff7c87c568e2106f2176a333085e9db6ba93d66", Pod:"coredns-7db6d8ff4d-l97jq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib2dc8549adf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:22:59.666743 containerd[1538]: 2025-05-09 00:22:59.634 [INFO][5736] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4" May 9 00:22:59.666743 containerd[1538]: 2025-05-09 00:22:59.634 [INFO][5736] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4" iface="eth0" netns="" May 9 00:22:59.666743 containerd[1538]: 2025-05-09 00:22:59.634 [INFO][5736] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4" May 9 00:22:59.666743 containerd[1538]: 2025-05-09 00:22:59.634 [INFO][5736] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4" May 9 00:22:59.666743 containerd[1538]: 2025-05-09 00:22:59.654 [INFO][5745] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4" HandleID="k8s-pod-network.5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4" Workload="localhost-k8s-coredns--7db6d8ff4d--l97jq-eth0" May 9 00:22:59.666743 containerd[1538]: 2025-05-09 00:22:59.654 [INFO][5745] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:22:59.666743 containerd[1538]: 2025-05-09 00:22:59.654 [INFO][5745] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:22:59.666743 containerd[1538]: 2025-05-09 00:22:59.662 [WARNING][5745] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4" HandleID="k8s-pod-network.5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4" Workload="localhost-k8s-coredns--7db6d8ff4d--l97jq-eth0" May 9 00:22:59.666743 containerd[1538]: 2025-05-09 00:22:59.662 [INFO][5745] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4" HandleID="k8s-pod-network.5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4" Workload="localhost-k8s-coredns--7db6d8ff4d--l97jq-eth0" May 9 00:22:59.666743 containerd[1538]: 2025-05-09 00:22:59.663 [INFO][5745] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:22:59.666743 containerd[1538]: 2025-05-09 00:22:59.665 [INFO][5736] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4" May 9 00:22:59.666743 containerd[1538]: time="2025-05-09T00:22:59.666723949Z" level=info msg="TearDown network for sandbox \"5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4\" successfully" May 9 00:22:59.673935 containerd[1538]: time="2025-05-09T00:22:59.673899332Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 9 00:22:59.674009 containerd[1538]: time="2025-05-09T00:22:59.673964532Z" level=info msg="RemovePodSandbox \"5f14cfb19cd1d9f56cbc4185c33fb5e26b18847d022dc9764b8396952d4230a4\" returns successfully" May 9 00:22:59.674647 containerd[1538]: time="2025-05-09T00:22:59.674383008Z" level=info msg="StopPodSandbox for \"730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2\"" May 9 00:22:59.738153 containerd[1538]: 2025-05-09 00:22:59.708 [WARNING][5768] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--85b589894b--bpbf6-eth0", GenerateName:"calico-kube-controllers-85b589894b-", Namespace:"calico-system", SelfLink:"", UID:"38013c7d-6040-44b1-bf5b-b5ceecbabd97", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 22, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85b589894b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cca64ab07ea94bda179c35015352d0a2cce4aaaf59b25c0fb6f19ee7fef87050", Pod:"calico-kube-controllers-85b589894b-bpbf6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliad323ada641", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:22:59.738153 containerd[1538]: 2025-05-09 00:22:59.708 [INFO][5768] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2" May 9 00:22:59.738153 containerd[1538]: 2025-05-09 00:22:59.708 [INFO][5768] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2" iface="eth0" netns="" May 9 00:22:59.738153 containerd[1538]: 2025-05-09 00:22:59.708 [INFO][5768] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2" May 9 00:22:59.738153 containerd[1538]: 2025-05-09 00:22:59.708 [INFO][5768] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2" May 9 00:22:59.738153 containerd[1538]: 2025-05-09 00:22:59.725 [INFO][5776] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2" HandleID="k8s-pod-network.730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2" Workload="localhost-k8s-calico--kube--controllers--85b589894b--bpbf6-eth0" May 9 00:22:59.738153 containerd[1538]: 2025-05-09 00:22:59.725 [INFO][5776] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:22:59.738153 containerd[1538]: 2025-05-09 00:22:59.725 [INFO][5776] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:22:59.738153 containerd[1538]: 2025-05-09 00:22:59.733 [WARNING][5776] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2" HandleID="k8s-pod-network.730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2" Workload="localhost-k8s-calico--kube--controllers--85b589894b--bpbf6-eth0" May 9 00:22:59.738153 containerd[1538]: 2025-05-09 00:22:59.733 [INFO][5776] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2" HandleID="k8s-pod-network.730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2" Workload="localhost-k8s-calico--kube--controllers--85b589894b--bpbf6-eth0" May 9 00:22:59.738153 containerd[1538]: 2025-05-09 00:22:59.735 [INFO][5776] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:22:59.738153 containerd[1538]: 2025-05-09 00:22:59.736 [INFO][5768] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2" May 9 00:22:59.738986 containerd[1538]: time="2025-05-09T00:22:59.738179139Z" level=info msg="TearDown network for sandbox \"730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2\" successfully" May 9 00:22:59.738986 containerd[1538]: time="2025-05-09T00:22:59.738204099Z" level=info msg="StopPodSandbox for \"730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2\" returns successfully" May 9 00:22:59.739430 containerd[1538]: time="2025-05-09T00:22:59.739130452Z" level=info msg="RemovePodSandbox for \"730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2\"" May 9 00:22:59.739430 containerd[1538]: time="2025-05-09T00:22:59.739164292Z" level=info msg="Forcibly stopping sandbox \"730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2\"" May 9 00:22:59.804228 containerd[1538]: 2025-05-09 00:22:59.771 [WARNING][5799] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--85b589894b--bpbf6-eth0", GenerateName:"calico-kube-controllers-85b589894b-", Namespace:"calico-system", SelfLink:"", UID:"38013c7d-6040-44b1-bf5b-b5ceecbabd97", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 22, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85b589894b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cca64ab07ea94bda179c35015352d0a2cce4aaaf59b25c0fb6f19ee7fef87050", Pod:"calico-kube-controllers-85b589894b-bpbf6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliad323ada641", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:22:59.804228 containerd[1538]: 2025-05-09 00:22:59.772 [INFO][5799] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2" May 9 00:22:59.804228 containerd[1538]: 2025-05-09 00:22:59.772 [INFO][5799] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2" iface="eth0" netns="" May 9 00:22:59.804228 containerd[1538]: 2025-05-09 00:22:59.772 [INFO][5799] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2" May 9 00:22:59.804228 containerd[1538]: 2025-05-09 00:22:59.772 [INFO][5799] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2" May 9 00:22:59.804228 containerd[1538]: 2025-05-09 00:22:59.791 [INFO][5808] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2" HandleID="k8s-pod-network.730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2" Workload="localhost-k8s-calico--kube--controllers--85b589894b--bpbf6-eth0" May 9 00:22:59.804228 containerd[1538]: 2025-05-09 00:22:59.791 [INFO][5808] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:22:59.804228 containerd[1538]: 2025-05-09 00:22:59.791 [INFO][5808] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:22:59.804228 containerd[1538]: 2025-05-09 00:22:59.799 [WARNING][5808] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2" HandleID="k8s-pod-network.730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2" Workload="localhost-k8s-calico--kube--controllers--85b589894b--bpbf6-eth0" May 9 00:22:59.804228 containerd[1538]: 2025-05-09 00:22:59.799 [INFO][5808] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2" HandleID="k8s-pod-network.730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2" Workload="localhost-k8s-calico--kube--controllers--85b589894b--bpbf6-eth0" May 9 00:22:59.804228 containerd[1538]: 2025-05-09 00:22:59.801 [INFO][5808] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:22:59.804228 containerd[1538]: 2025-05-09 00:22:59.802 [INFO][5799] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2" May 9 00:22:59.805811 containerd[1538]: time="2025-05-09T00:22:59.804688169Z" level=info msg="TearDown network for sandbox \"730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2\" successfully" May 9 00:22:59.807703 containerd[1538]: time="2025-05-09T00:22:59.807673945Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 9 00:22:59.807777 containerd[1538]: time="2025-05-09T00:22:59.807732505Z" level=info msg="RemovePodSandbox \"730c9c6490c0117fe4fe7614735167f703d39a4b69da6ed9a05d2e98106fcfc2\" returns successfully" May 9 00:22:59.808481 containerd[1538]: time="2025-05-09T00:22:59.808159021Z" level=info msg="StopPodSandbox for \"2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64\"" May 9 00:22:59.871252 containerd[1538]: 2025-05-09 00:22:59.841 [WARNING][5830] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--c5864-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4c753734-0995-4fcd-9777-f094bc14fa3a", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 22, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d65776edb9fd89b308e3e3663b3ea75fb8cfe72807b397aa253b26d77b336120", Pod:"csi-node-driver-c5864", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1acc8e87414", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:22:59.871252 containerd[1538]: 2025-05-09 00:22:59.841 [INFO][5830] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64" May 9 00:22:59.871252 containerd[1538]: 2025-05-09 00:22:59.841 [INFO][5830] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64" iface="eth0" netns="" May 9 00:22:59.871252 containerd[1538]: 2025-05-09 00:22:59.841 [INFO][5830] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64" May 9 00:22:59.871252 containerd[1538]: 2025-05-09 00:22:59.841 [INFO][5830] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64" May 9 00:22:59.871252 containerd[1538]: 2025-05-09 00:22:59.859 [INFO][5838] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64" HandleID="k8s-pod-network.2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64" Workload="localhost-k8s-csi--node--driver--c5864-eth0" May 9 00:22:59.871252 containerd[1538]: 2025-05-09 00:22:59.859 [INFO][5838] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:22:59.871252 containerd[1538]: 2025-05-09 00:22:59.859 [INFO][5838] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:22:59.871252 containerd[1538]: 2025-05-09 00:22:59.867 [WARNING][5838] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64" HandleID="k8s-pod-network.2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64" Workload="localhost-k8s-csi--node--driver--c5864-eth0" May 9 00:22:59.871252 containerd[1538]: 2025-05-09 00:22:59.867 [INFO][5838] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64" HandleID="k8s-pod-network.2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64" Workload="localhost-k8s-csi--node--driver--c5864-eth0" May 9 00:22:59.871252 containerd[1538]: 2025-05-09 00:22:59.868 [INFO][5838] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:22:59.871252 containerd[1538]: 2025-05-09 00:22:59.870 [INFO][5830] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64" May 9 00:22:59.871668 containerd[1538]: time="2025-05-09T00:22:59.871285238Z" level=info msg="TearDown network for sandbox \"2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64\" successfully" May 9 00:22:59.871668 containerd[1538]: time="2025-05-09T00:22:59.871310238Z" level=info msg="StopPodSandbox for \"2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64\" returns successfully" May 9 00:22:59.871804 containerd[1538]: time="2025-05-09T00:22:59.871760994Z" level=info msg="RemovePodSandbox for \"2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64\"" May 9 00:22:59.871804 containerd[1538]: time="2025-05-09T00:22:59.871798314Z" level=info msg="Forcibly stopping sandbox \"2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64\"" May 9 00:22:59.937058 containerd[1538]: 2025-05-09 00:22:59.903 [WARNING][5860] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--c5864-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4c753734-0995-4fcd-9777-f094bc14fa3a", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 22, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d65776edb9fd89b308e3e3663b3ea75fb8cfe72807b397aa253b26d77b336120", Pod:"csi-node-driver-c5864", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1acc8e87414", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:22:59.937058 containerd[1538]: 2025-05-09 00:22:59.903 [INFO][5860] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64" May 9 00:22:59.937058 containerd[1538]: 2025-05-09 00:22:59.903 [INFO][5860] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64" iface="eth0" netns="" May 9 00:22:59.937058 containerd[1538]: 2025-05-09 00:22:59.903 [INFO][5860] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64" May 9 00:22:59.937058 containerd[1538]: 2025-05-09 00:22:59.903 [INFO][5860] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64" May 9 00:22:59.937058 containerd[1538]: 2025-05-09 00:22:59.925 [INFO][5869] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64" HandleID="k8s-pod-network.2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64" Workload="localhost-k8s-csi--node--driver--c5864-eth0" May 9 00:22:59.937058 containerd[1538]: 2025-05-09 00:22:59.925 [INFO][5869] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:22:59.937058 containerd[1538]: 2025-05-09 00:22:59.925 [INFO][5869] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:22:59.937058 containerd[1538]: 2025-05-09 00:22:59.933 [WARNING][5869] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64" HandleID="k8s-pod-network.2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64" Workload="localhost-k8s-csi--node--driver--c5864-eth0" May 9 00:22:59.937058 containerd[1538]: 2025-05-09 00:22:59.933 [INFO][5869] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64" HandleID="k8s-pod-network.2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64" Workload="localhost-k8s-csi--node--driver--c5864-eth0" May 9 00:22:59.937058 containerd[1538]: 2025-05-09 00:22:59.934 [INFO][5869] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:22:59.937058 containerd[1538]: 2025-05-09 00:22:59.935 [INFO][5860] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64" May 9 00:22:59.937058 containerd[1538]: time="2025-05-09T00:22:59.937022274Z" level=info msg="TearDown network for sandbox \"2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64\" successfully" May 9 00:22:59.939569 containerd[1538]: time="2025-05-09T00:22:59.939541573Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 9 00:22:59.939664 containerd[1538]: time="2025-05-09T00:22:59.939639653Z" level=info msg="RemovePodSandbox \"2651e50c392ef1acf29e11df9ceb9acc60de68d1e5fdc94c898b5b9ae18d2a64\" returns successfully" May 9 00:22:59.940426 containerd[1538]: time="2025-05-09T00:22:59.940283088Z" level=info msg="StopPodSandbox for \"64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce\"" May 9 00:23:00.002490 containerd[1538]: 2025-05-09 00:22:59.972 [WARNING][5892] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--578848dcfb--6t55f-eth0", GenerateName:"calico-apiserver-578848dcfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"2b79fe7f-93ab-4e08-981f-3cf87817c0d8", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 22, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"578848dcfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"46e8fa59202747cd9a0c26d2581f6e6989c53c199e0168229ea760d45e9020c5", Pod:"calico-apiserver-578848dcfb-6t55f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie1ea9b6fbb8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:23:00.002490 containerd[1538]: 2025-05-09 00:22:59.973 [INFO][5892] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce" May 9 00:23:00.002490 containerd[1538]: 2025-05-09 00:22:59.973 [INFO][5892] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce" iface="eth0" netns="" May 9 00:23:00.002490 containerd[1538]: 2025-05-09 00:22:59.973 [INFO][5892] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce" May 9 00:23:00.002490 containerd[1538]: 2025-05-09 00:22:59.973 [INFO][5892] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce" May 9 00:23:00.002490 containerd[1538]: 2025-05-09 00:22:59.990 [INFO][5900] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce" HandleID="k8s-pod-network.64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce" Workload="localhost-k8s-calico--apiserver--578848dcfb--6t55f-eth0" May 9 00:23:00.002490 containerd[1538]: 2025-05-09 00:22:59.990 [INFO][5900] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:23:00.002490 containerd[1538]: 2025-05-09 00:22:59.990 [INFO][5900] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:23:00.002490 containerd[1538]: 2025-05-09 00:22:59.998 [WARNING][5900] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce" HandleID="k8s-pod-network.64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce" Workload="localhost-k8s-calico--apiserver--578848dcfb--6t55f-eth0" May 9 00:23:00.002490 containerd[1538]: 2025-05-09 00:22:59.998 [INFO][5900] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce" HandleID="k8s-pod-network.64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce" Workload="localhost-k8s-calico--apiserver--578848dcfb--6t55f-eth0" May 9 00:23:00.002490 containerd[1538]: 2025-05-09 00:22:59.999 [INFO][5900] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:23:00.002490 containerd[1538]: 2025-05-09 00:23:00.001 [INFO][5892] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce" May 9 00:23:00.002898 containerd[1538]: time="2025-05-09T00:23:00.002498672Z" level=info msg="TearDown network for sandbox \"64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce\" successfully" May 9 00:23:00.002898 containerd[1538]: time="2025-05-09T00:23:00.002533151Z" level=info msg="StopPodSandbox for \"64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce\" returns successfully" May 9 00:23:00.003019 containerd[1538]: time="2025-05-09T00:23:00.002973348Z" level=info msg="RemovePodSandbox for \"64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce\"" May 9 00:23:00.003019 containerd[1538]: time="2025-05-09T00:23:00.003010867Z" level=info msg="Forcibly stopping sandbox \"64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce\"" May 9 00:23:00.069507 containerd[1538]: 2025-05-09 00:23:00.035 [WARNING][5922] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--578848dcfb--6t55f-eth0", GenerateName:"calico-apiserver-578848dcfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"2b79fe7f-93ab-4e08-981f-3cf87817c0d8", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 22, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"578848dcfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"46e8fa59202747cd9a0c26d2581f6e6989c53c199e0168229ea760d45e9020c5", Pod:"calico-apiserver-578848dcfb-6t55f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie1ea9b6fbb8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:23:00.069507 containerd[1538]: 2025-05-09 00:23:00.036 [INFO][5922] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce" May 9 00:23:00.069507 containerd[1538]: 2025-05-09 00:23:00.036 [INFO][5922] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce" iface="eth0" netns="" May 9 00:23:00.069507 containerd[1538]: 2025-05-09 00:23:00.036 [INFO][5922] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce" May 9 00:23:00.069507 containerd[1538]: 2025-05-09 00:23:00.036 [INFO][5922] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce" May 9 00:23:00.069507 containerd[1538]: 2025-05-09 00:23:00.055 [INFO][5931] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce" HandleID="k8s-pod-network.64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce" Workload="localhost-k8s-calico--apiserver--578848dcfb--6t55f-eth0" May 9 00:23:00.069507 containerd[1538]: 2025-05-09 00:23:00.055 [INFO][5931] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:23:00.069507 containerd[1538]: 2025-05-09 00:23:00.055 [INFO][5931] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:23:00.069507 containerd[1538]: 2025-05-09 00:23:00.063 [WARNING][5931] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce" HandleID="k8s-pod-network.64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce" Workload="localhost-k8s-calico--apiserver--578848dcfb--6t55f-eth0" May 9 00:23:00.069507 containerd[1538]: 2025-05-09 00:23:00.063 [INFO][5931] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce" HandleID="k8s-pod-network.64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce" Workload="localhost-k8s-calico--apiserver--578848dcfb--6t55f-eth0" May 9 00:23:00.069507 containerd[1538]: 2025-05-09 00:23:00.065 [INFO][5931] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:23:00.069507 containerd[1538]: 2025-05-09 00:23:00.067 [INFO][5922] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce" May 9 00:23:00.069949 containerd[1538]: time="2025-05-09T00:23:00.069556582Z" level=info msg="TearDown network for sandbox \"64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce\" successfully" May 9 00:23:00.076681 containerd[1538]: time="2025-05-09T00:23:00.076629847Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 9 00:23:00.076783 containerd[1538]: time="2025-05-09T00:23:00.076698646Z" level=info msg="RemovePodSandbox \"64d7e791ff8d28c8c28953658b7c5eba4694e6b7d519a54244895606ec59d5ce\" returns successfully" May 9 00:23:00.077127 containerd[1538]: time="2025-05-09T00:23:00.077088243Z" level=info msg="StopPodSandbox for \"0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e\"" May 9 00:23:00.144562 containerd[1538]: 2025-05-09 00:23:00.113 [WARNING][5955] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--578848dcfb--67brr-eth0", GenerateName:"calico-apiserver-578848dcfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"35cd080a-2fa4-4c86-aa58-2bc34990adbf", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 22, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"578848dcfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bc6ca1230837f7be45ff3c2e44b844c61e88814bd3600b94f95e9d63a1661ea4", Pod:"calico-apiserver-578848dcfb-67brr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6e5946a20da", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:23:00.144562 containerd[1538]: 2025-05-09 00:23:00.113 [INFO][5955] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e" May 9 00:23:00.144562 containerd[1538]: 2025-05-09 00:23:00.113 [INFO][5955] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e" iface="eth0" netns="" May 9 00:23:00.144562 containerd[1538]: 2025-05-09 00:23:00.113 [INFO][5955] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e" May 9 00:23:00.144562 containerd[1538]: 2025-05-09 00:23:00.113 [INFO][5955] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e" May 9 00:23:00.144562 containerd[1538]: 2025-05-09 00:23:00.132 [INFO][5963] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e" HandleID="k8s-pod-network.0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e" Workload="localhost-k8s-calico--apiserver--578848dcfb--67brr-eth0" May 9 00:23:00.144562 containerd[1538]: 2025-05-09 00:23:00.132 [INFO][5963] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:23:00.144562 containerd[1538]: 2025-05-09 00:23:00.132 [INFO][5963] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:23:00.144562 containerd[1538]: 2025-05-09 00:23:00.140 [WARNING][5963] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e" HandleID="k8s-pod-network.0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e" Workload="localhost-k8s-calico--apiserver--578848dcfb--67brr-eth0" May 9 00:23:00.144562 containerd[1538]: 2025-05-09 00:23:00.140 [INFO][5963] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e" HandleID="k8s-pod-network.0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e" Workload="localhost-k8s-calico--apiserver--578848dcfb--67brr-eth0" May 9 00:23:00.144562 containerd[1538]: 2025-05-09 00:23:00.141 [INFO][5963] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:23:00.144562 containerd[1538]: 2025-05-09 00:23:00.143 [INFO][5955] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e" May 9 00:23:00.145184 containerd[1538]: time="2025-05-09T00:23:00.144652830Z" level=info msg="TearDown network for sandbox \"0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e\" successfully" May 9 00:23:00.145184 containerd[1538]: time="2025-05-09T00:23:00.144680309Z" level=info msg="StopPodSandbox for \"0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e\" returns successfully" May 9 00:23:00.145736 containerd[1538]: time="2025-05-09T00:23:00.145698861Z" level=info msg="RemovePodSandbox for \"0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e\"" May 9 00:23:00.145777 containerd[1538]: time="2025-05-09T00:23:00.145737701Z" level=info msg="Forcibly stopping sandbox \"0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e\"" May 9 00:23:00.212064 containerd[1538]: 2025-05-09 00:23:00.177 [WARNING][5985] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--578848dcfb--67brr-eth0", GenerateName:"calico-apiserver-578848dcfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"35cd080a-2fa4-4c86-aa58-2bc34990adbf", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 22, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"578848dcfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bc6ca1230837f7be45ff3c2e44b844c61e88814bd3600b94f95e9d63a1661ea4", Pod:"calico-apiserver-578848dcfb-67brr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6e5946a20da", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:23:00.212064 containerd[1538]: 2025-05-09 00:23:00.178 [INFO][5985] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e" May 9 00:23:00.212064 containerd[1538]: 2025-05-09 00:23:00.178 [INFO][5985] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e" iface="eth0" netns="" May 9 00:23:00.212064 containerd[1538]: 2025-05-09 00:23:00.178 [INFO][5985] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e" May 9 00:23:00.212064 containerd[1538]: 2025-05-09 00:23:00.178 [INFO][5985] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e" May 9 00:23:00.212064 containerd[1538]: 2025-05-09 00:23:00.195 [INFO][5993] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e" HandleID="k8s-pod-network.0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e" Workload="localhost-k8s-calico--apiserver--578848dcfb--67brr-eth0" May 9 00:23:00.212064 containerd[1538]: 2025-05-09 00:23:00.195 [INFO][5993] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:23:00.212064 containerd[1538]: 2025-05-09 00:23:00.195 [INFO][5993] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:23:00.212064 containerd[1538]: 2025-05-09 00:23:00.206 [WARNING][5993] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e" HandleID="k8s-pod-network.0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e" Workload="localhost-k8s-calico--apiserver--578848dcfb--67brr-eth0" May 9 00:23:00.212064 containerd[1538]: 2025-05-09 00:23:00.206 [INFO][5993] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e" HandleID="k8s-pod-network.0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e" Workload="localhost-k8s-calico--apiserver--578848dcfb--67brr-eth0" May 9 00:23:00.212064 containerd[1538]: 2025-05-09 00:23:00.209 [INFO][5993] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:23:00.212064 containerd[1538]: 2025-05-09 00:23:00.210 [INFO][5985] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e" May 9 00:23:00.212064 containerd[1538]: time="2025-05-09T00:23:00.211993738Z" level=info msg="TearDown network for sandbox \"0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e\" successfully" May 9 00:23:00.223378 containerd[1538]: time="2025-05-09T00:23:00.223334809Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 9 00:23:00.223480 containerd[1538]: time="2025-05-09T00:23:00.223417488Z" level=info msg="RemovePodSandbox \"0cbac31547d62da1e624ad12becd08c4f8fbca7c4399f17442fd0c3720007c7e\" returns successfully" May 9 00:23:00.620844 systemd[1]: Started sshd@15-10.0.0.135:22-10.0.0.1:60318.service - OpenSSH per-connection server daemon (10.0.0.1:60318). May 9 00:23:00.662777 sshd[6001]: Accepted publickey for core from 10.0.0.1 port 60318 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 9 00:23:00.664450 sshd[6001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:23:00.668949 systemd-logind[1519]: New session 16 of user core. May 9 00:23:00.677974 systemd[1]: Started session-16.scope - Session 16 of User core. May 9 00:23:00.839452 sshd[6001]: pam_unix(sshd:session): session closed for user core May 9 00:23:00.848835 systemd[1]: Started sshd@16-10.0.0.135:22-10.0.0.1:60330.service - OpenSSH per-connection server daemon (10.0.0.1:60330). May 9 00:23:00.849238 systemd[1]: sshd@15-10.0.0.135:22-10.0.0.1:60318.service: Deactivated successfully. May 9 00:23:00.852441 systemd-logind[1519]: Session 16 logged out. Waiting for processes to exit. May 9 00:23:00.853093 systemd[1]: session-16.scope: Deactivated successfully. May 9 00:23:00.855701 systemd-logind[1519]: Removed session 16. May 9 00:23:00.884412 sshd[6013]: Accepted publickey for core from 10.0.0.1 port 60330 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 9 00:23:00.885981 sshd[6013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:23:00.890435 systemd-logind[1519]: New session 17 of user core. May 9 00:23:00.899031 systemd[1]: Started session-17.scope - Session 17 of User core. May 9 00:23:01.172775 sshd[6013]: pam_unix(sshd:session): session closed for user core May 9 00:23:01.180833 systemd[1]: Started sshd@17-10.0.0.135:22-10.0.0.1:60340.service - OpenSSH per-connection server daemon (10.0.0.1:60340). 
May 9 00:23:01.181325 systemd[1]: sshd@16-10.0.0.135:22-10.0.0.1:60330.service: Deactivated successfully. May 9 00:23:01.184030 systemd-logind[1519]: Session 17 logged out. Waiting for processes to exit. May 9 00:23:01.184143 systemd[1]: session-17.scope: Deactivated successfully. May 9 00:23:01.185316 systemd-logind[1519]: Removed session 17. May 9 00:23:01.218105 sshd[6026]: Accepted publickey for core from 10.0.0.1 port 60340 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 9 00:23:01.219310 sshd[6026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:23:01.223067 systemd-logind[1519]: New session 18 of user core. May 9 00:23:01.230798 systemd[1]: Started session-18.scope - Session 18 of User core. May 9 00:23:02.723746 sshd[6026]: pam_unix(sshd:session): session closed for user core May 9 00:23:02.735703 systemd[1]: Started sshd@18-10.0.0.135:22-10.0.0.1:55244.service - OpenSSH per-connection server daemon (10.0.0.1:55244). May 9 00:23:02.737084 systemd[1]: sshd@17-10.0.0.135:22-10.0.0.1:60340.service: Deactivated successfully. May 9 00:23:02.744996 systemd[1]: session-18.scope: Deactivated successfully. May 9 00:23:02.747435 systemd-logind[1519]: Session 18 logged out. Waiting for processes to exit. May 9 00:23:02.749099 systemd-logind[1519]: Removed session 18. May 9 00:23:02.779106 sshd[6048]: Accepted publickey for core from 10.0.0.1 port 55244 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 9 00:23:02.780405 sshd[6048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:23:02.784072 systemd-logind[1519]: New session 19 of user core. May 9 00:23:02.794804 systemd[1]: Started session-19.scope - Session 19 of User core. May 9 00:23:03.075189 sshd[6048]: pam_unix(sshd:session): session closed for user core May 9 00:23:03.082838 systemd[1]: Started sshd@19-10.0.0.135:22-10.0.0.1:55250.service - OpenSSH per-connection server daemon (10.0.0.1:55250). May 9 00:23:03.083316 systemd[1]: sshd@18-10.0.0.135:22-10.0.0.1:55244.service: Deactivated successfully. May 9 00:23:03.087559 systemd-logind[1519]: Session 19 logged out. Waiting for processes to exit. May 9 00:23:03.087809 systemd[1]: session-19.scope: Deactivated successfully. May 9 00:23:03.090477 systemd-logind[1519]: Removed session 19. May 9 00:23:03.121926 sshd[6064]: Accepted publickey for core from 10.0.0.1 port 55250 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 9 00:23:03.123314 sshd[6064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:23:03.130767 systemd-logind[1519]: New session 20 of user core. May 9 00:23:03.140862 systemd[1]: Started session-20.scope - Session 20 of User core. May 9 00:23:03.270898 sshd[6064]: pam_unix(sshd:session): session closed for user core May 9 00:23:03.274297 systemd[1]: sshd@19-10.0.0.135:22-10.0.0.1:55250.service: Deactivated successfully. May 9 00:23:03.276702 systemd-logind[1519]: Session 20 logged out. Waiting for processes to exit. May 9 00:23:03.277019 systemd[1]: session-20.scope: Deactivated successfully. May 9 00:23:03.278536 systemd-logind[1519]: Removed session 20. May 9 00:23:08.288813 systemd[1]: Started sshd@20-10.0.0.135:22-10.0.0.1:55256.service - OpenSSH per-connection server daemon (10.0.0.1:55256). 
May 9 00:23:08.323086 sshd[6094]: Accepted publickey for core from 10.0.0.1 port 55256 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 9 00:23:08.324319 sshd[6094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:23:08.328006 systemd-logind[1519]: New session 21 of user core. May 9 00:23:08.342808 systemd[1]: Started session-21.scope - Session 21 of User core. May 9 00:23:08.354489 kubelet[2717]: E0509 00:23:08.354456 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:23:08.474181 sshd[6094]: pam_unix(sshd:session): session closed for user core May 9 00:23:08.477731 systemd[1]: sshd@20-10.0.0.135:22-10.0.0.1:55256.service: Deactivated successfully. May 9 00:23:08.479757 systemd-logind[1519]: Session 21 logged out. Waiting for processes to exit. May 9 00:23:08.480206 systemd[1]: session-21.scope: Deactivated successfully. May 9 00:23:08.481171 systemd-logind[1519]: Removed session 21. May 9 00:23:08.697934 kubelet[2717]: E0509 00:23:08.697869 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:23:13.486898 systemd[1]: Started sshd@21-10.0.0.135:22-10.0.0.1:58968.service - OpenSSH per-connection server daemon (10.0.0.1:58968). May 9 00:23:13.538672 sshd[6130]: Accepted publickey for core from 10.0.0.1 port 58968 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 9 00:23:13.541269 sshd[6130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:23:13.552692 systemd-logind[1519]: New session 22 of user core. May 9 00:23:13.560016 systemd[1]: Started session-22.scope - Session 22 of User core. May 9 00:23:13.677662 sshd[6130]: pam_unix(sshd:session): session closed for user core May 9 00:23:13.681163 systemd[1]: sshd@21-10.0.0.135:22-10.0.0.1:58968.service: Deactivated successfully. May 9 00:23:13.684048 systemd[1]: session-22.scope: Deactivated successfully. May 9 00:23:13.684669 systemd-logind[1519]: Session 22 logged out. Waiting for processes to exit. May 9 00:23:13.685459 systemd-logind[1519]: Removed session 22. May 9 00:23:15.004891 kubelet[2717]: I0509 00:23:15.004841 2717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 9 00:23:15.355456 kubelet[2717]: E0509 00:23:15.354622 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:23:18.689865 systemd[1]: Started sshd@22-10.0.0.135:22-10.0.0.1:58974.service - OpenSSH per-connection server daemon (10.0.0.1:58974). May 9 00:23:18.726349 sshd[6188]: Accepted publickey for core from 10.0.0.1 port 58974 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 9 00:23:18.728106 sshd[6188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:23:18.732197 systemd-logind[1519]: New session 23 of user core. May 9 00:23:18.742862 systemd[1]: Started session-23.scope - Session 23 of User core. May 9 00:23:18.918788 sshd[6188]: pam_unix(sshd:session): session closed for user core May 9 00:23:18.921876 systemd[1]: sshd@22-10.0.0.135:22-10.0.0.1:58974.service: Deactivated successfully. May 9 00:23:18.924473 systemd[1]: session-23.scope: Deactivated successfully. 
May 9 00:23:18.926833 systemd-logind[1519]: Session 23 logged out. Waiting for processes to exit. May 9 00:23:18.927899 systemd-logind[1519]: Removed session 23. May 9 00:23:20.354960 kubelet[2717]: E0509 00:23:20.354563 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"