May 13 00:27:05.905888 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 13 00:27:05.905909 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon May 12 22:51:32 -00 2025
May 13 00:27:05.905919 kernel: KASLR enabled
May 13 00:27:05.905924 kernel: efi: EFI v2.7 by EDK II
May 13 00:27:05.905930 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
May 13 00:27:05.905936 kernel: random: crng init done
May 13 00:27:05.905943 kernel: ACPI: Early table checksum verification disabled
May 13 00:27:05.905948 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
May 13 00:27:05.905954 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
May 13 00:27:05.905962 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:27:05.905968 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:27:05.905973 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:27:05.905979 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:27:05.905985 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:27:05.905993 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:27:05.906000 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:27:05.906007 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:27:05.906013 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:27:05.906019 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 13 00:27:05.906025 kernel: NUMA: Failed to initialise from firmware
May 13 00:27:05.906032 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 13 00:27:05.906038 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
May 13 00:27:05.906045 kernel: Zone ranges:
May 13 00:27:05.906051 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 13 00:27:05.906057 kernel: DMA32 empty
May 13 00:27:05.906064 kernel: Normal empty
May 13 00:27:05.906070 kernel: Movable zone start for each node
May 13 00:27:05.906076 kernel: Early memory node ranges
May 13 00:27:05.906083 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
May 13 00:27:05.906089 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
May 13 00:27:05.906096 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
May 13 00:27:05.906102 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 13 00:27:05.906108 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 13 00:27:05.906114 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 13 00:27:05.906121 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 13 00:27:05.906127 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 13 00:27:05.906133 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 13 00:27:05.906147 kernel: psci: probing for conduit method from ACPI.
May 13 00:27:05.906155 kernel: psci: PSCIv1.1 detected in firmware.
May 13 00:27:05.906163 kernel: psci: Using standard PSCI v0.2 function IDs
May 13 00:27:05.906174 kernel: psci: Trusted OS migration not required
May 13 00:27:05.906183 kernel: psci: SMC Calling Convention v1.1
May 13 00:27:05.906190 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 13 00:27:05.906203 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 13 00:27:05.906211 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 13 00:27:05.906218 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 13 00:27:05.906225 kernel: Detected PIPT I-cache on CPU0
May 13 00:27:05.906231 kernel: CPU features: detected: GIC system register CPU interface
May 13 00:27:05.906238 kernel: CPU features: detected: Hardware dirty bit management
May 13 00:27:05.906245 kernel: CPU features: detected: Spectre-v4
May 13 00:27:05.906251 kernel: CPU features: detected: Spectre-BHB
May 13 00:27:05.906258 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 13 00:27:05.906265 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 13 00:27:05.906273 kernel: CPU features: detected: ARM erratum 1418040
May 13 00:27:05.906280 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 13 00:27:05.906286 kernel: alternatives: applying boot alternatives
May 13 00:27:05.906294 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c683f9f6a9915f3c14a7bce5c93750f29fcd5cf6eb0774e11e882c5681cc19c0
May 13 00:27:05.906301 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 13 00:27:05.906308 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 13 00:27:05.906315 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 13 00:27:05.906321 kernel: Fallback order for Node 0: 0
May 13 00:27:05.906328 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 13 00:27:05.906334 kernel: Policy zone: DMA
May 13 00:27:05.906341 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 13 00:27:05.906349 kernel: software IO TLB: area num 4.
May 13 00:27:05.906357 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
May 13 00:27:05.906364 kernel: Memory: 2386400K/2572288K available (10304K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 185888K reserved, 0K cma-reserved)
May 13 00:27:05.906371 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 13 00:27:05.906378 kernel: rcu: Preemptible hierarchical RCU implementation.
May 13 00:27:05.906385 kernel: rcu: RCU event tracing is enabled.
May 13 00:27:05.906392 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 13 00:27:05.906399 kernel: Trampoline variant of Tasks RCU enabled.
May 13 00:27:05.906467 kernel: Tracing variant of Tasks RCU enabled.
May 13 00:27:05.906474 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 13 00:27:05.906481 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 13 00:27:05.906488 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 13 00:27:05.906497 kernel: GICv3: 256 SPIs implemented
May 13 00:27:05.906503 kernel: GICv3: 0 Extended SPIs implemented
May 13 00:27:05.906510 kernel: Root IRQ handler: gic_handle_irq
May 13 00:27:05.906516 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 13 00:27:05.906523 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 13 00:27:05.906529 kernel: ITS [mem 0x08080000-0x0809ffff]
May 13 00:27:05.906536 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
May 13 00:27:05.906543 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
May 13 00:27:05.906550 kernel: GICv3: using LPI property table @0x00000000400f0000
May 13 00:27:05.906556 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
May 13 00:27:05.906563 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 13 00:27:05.906571 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 00:27:05.906578 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 13 00:27:05.906584 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 13 00:27:05.906591 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 13 00:27:05.906598 kernel: arm-pv: using stolen time PV
May 13 00:27:05.906605 kernel: Console: colour dummy device 80x25
May 13 00:27:05.906612 kernel: ACPI: Core revision 20230628
May 13 00:27:05.906619 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 13 00:27:05.906626 kernel: pid_max: default: 32768 minimum: 301
May 13 00:27:05.906633 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 13 00:27:05.906641 kernel: landlock: Up and running.
May 13 00:27:05.906648 kernel: SELinux: Initializing.
May 13 00:27:05.906655 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 00:27:05.906661 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 00:27:05.906668 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 13 00:27:05.906675 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 00:27:05.906682 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 00:27:05.906689 kernel: rcu: Hierarchical SRCU implementation.
May 13 00:27:05.906696 kernel: rcu: Max phase no-delay instances is 400.
May 13 00:27:05.906704 kernel: Platform MSI: ITS@0x8080000 domain created
May 13 00:27:05.906711 kernel: PCI/MSI: ITS@0x8080000 domain created
May 13 00:27:05.906718 kernel: Remapping and enabling EFI services.
May 13 00:27:05.906725 kernel: smp: Bringing up secondary CPUs ...
May 13 00:27:05.906731 kernel: Detected PIPT I-cache on CPU1
May 13 00:27:05.906738 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 13 00:27:05.906745 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
May 13 00:27:05.906752 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 00:27:05.906759 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 13 00:27:05.906767 kernel: Detected PIPT I-cache on CPU2
May 13 00:27:05.906774 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 13 00:27:05.906781 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
May 13 00:27:05.906793 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 00:27:05.906801 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 13 00:27:05.906808 kernel: Detected PIPT I-cache on CPU3
May 13 00:27:05.906815 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 13 00:27:05.906822 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
May 13 00:27:05.906830 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 00:27:05.906837 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 13 00:27:05.906844 kernel: smp: Brought up 1 node, 4 CPUs
May 13 00:27:05.906852 kernel: SMP: Total of 4 processors activated.
May 13 00:27:05.906859 kernel: CPU features: detected: 32-bit EL0 Support
May 13 00:27:05.906867 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 13 00:27:05.906874 kernel: CPU features: detected: Common not Private translations
May 13 00:27:05.906881 kernel: CPU features: detected: CRC32 instructions
May 13 00:27:05.906888 kernel: CPU features: detected: Enhanced Virtualization Traps
May 13 00:27:05.906897 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 13 00:27:05.906904 kernel: CPU features: detected: LSE atomic instructions
May 13 00:27:05.906911 kernel: CPU features: detected: Privileged Access Never
May 13 00:27:05.906918 kernel: CPU features: detected: RAS Extension Support
May 13 00:27:05.906926 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 13 00:27:05.906933 kernel: CPU: All CPU(s) started at EL1
May 13 00:27:05.906940 kernel: alternatives: applying system-wide alternatives
May 13 00:27:05.906947 kernel: devtmpfs: initialized
May 13 00:27:05.906954 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 13 00:27:05.906963 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 13 00:27:05.906970 kernel: pinctrl core: initialized pinctrl subsystem
May 13 00:27:05.906977 kernel: SMBIOS 3.0.0 present.
May 13 00:27:05.906984 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
May 13 00:27:05.906992 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 13 00:27:05.906999 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 13 00:27:05.907006 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 13 00:27:05.907014 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 13 00:27:05.907021 kernel: audit: initializing netlink subsys (disabled)
May 13 00:27:05.907029 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1
May 13 00:27:05.907036 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 13 00:27:05.907043 kernel: cpuidle: using governor menu
May 13 00:27:05.907051 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 13 00:27:05.907058 kernel: ASID allocator initialised with 32768 entries
May 13 00:27:05.907065 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 13 00:27:05.907072 kernel: Serial: AMBA PL011 UART driver
May 13 00:27:05.907080 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 13 00:27:05.907087 kernel: Modules: 0 pages in range for non-PLT usage
May 13 00:27:05.907096 kernel: Modules: 509008 pages in range for PLT usage
May 13 00:27:05.907103 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 13 00:27:05.907110 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 13 00:27:05.907117 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 13 00:27:05.907124 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 13 00:27:05.907132 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 13 00:27:05.907139 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 13 00:27:05.907146 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 13 00:27:05.907153 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 13 00:27:05.907160 kernel: ACPI: Added _OSI(Module Device)
May 13 00:27:05.907169 kernel: ACPI: Added _OSI(Processor Device)
May 13 00:27:05.907176 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 13 00:27:05.907183 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 13 00:27:05.907190 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 13 00:27:05.907197 kernel: ACPI: Interpreter enabled
May 13 00:27:05.907209 kernel: ACPI: Using GIC for interrupt routing
May 13 00:27:05.907216 kernel: ACPI: MCFG table detected, 1 entries
May 13 00:27:05.907224 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 13 00:27:05.907231 kernel: printk: console [ttyAMA0] enabled
May 13 00:27:05.907240 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 13 00:27:05.907372 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 13 00:27:05.907459 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 13 00:27:05.907526 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 13 00:27:05.907589 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 13 00:27:05.907650 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 13 00:27:05.907660 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 13 00:27:05.907670 kernel: PCI host bridge to bus 0000:00
May 13 00:27:05.907738 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 13 00:27:05.907797 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 13 00:27:05.907855 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 13 00:27:05.907910 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 13 00:27:05.907987 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 13 00:27:05.908060 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 13 00:27:05.908129 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 13 00:27:05.908194 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 13 00:27:05.908272 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 13 00:27:05.908338 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 13 00:27:05.908402 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 13 00:27:05.908484 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 13 00:27:05.908547 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 13 00:27:05.908604 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 13 00:27:05.908660 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 13 00:27:05.908670 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 13 00:27:05.908677 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 13 00:27:05.908685 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 13 00:27:05.908692 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 13 00:27:05.908699 kernel: iommu: Default domain type: Translated
May 13 00:27:05.908708 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 13 00:27:05.908716 kernel: efivars: Registered efivars operations
May 13 00:27:05.908723 kernel: vgaarb: loaded
May 13 00:27:05.908730 kernel: clocksource: Switched to clocksource arch_sys_counter
May 13 00:27:05.908737 kernel: VFS: Disk quotas dquot_6.6.0
May 13 00:27:05.908745 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 13 00:27:05.908752 kernel: pnp: PnP ACPI init
May 13 00:27:05.908827 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 13 00:27:05.908838 kernel: pnp: PnP ACPI: found 1 devices
May 13 00:27:05.908847 kernel: NET: Registered PF_INET protocol family
May 13 00:27:05.908854 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 13 00:27:05.908862 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 13 00:27:05.908869 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 13 00:27:05.908876 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 13 00:27:05.908884 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 13 00:27:05.908891 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 13 00:27:05.908898 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 00:27:05.908907 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 00:27:05.908915 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 13 00:27:05.908922 kernel: PCI: CLS 0 bytes, default 64
May 13 00:27:05.908929 kernel: kvm [1]: HYP mode not available
May 13 00:27:05.908936 kernel: Initialise system trusted keyrings
May 13 00:27:05.908944 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 13 00:27:05.908951 kernel: Key type asymmetric registered
May 13 00:27:05.908958 kernel: Asymmetric key parser 'x509' registered
May 13 00:27:05.908965 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 13 00:27:05.908972 kernel: io scheduler mq-deadline registered
May 13 00:27:05.908981 kernel: io scheduler kyber registered
May 13 00:27:05.908988 kernel: io scheduler bfq registered
May 13 00:27:05.908996 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 13 00:27:05.909003 kernel: ACPI: button: Power Button [PWRB]
May 13 00:27:05.909011 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 13 00:27:05.909077 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 13 00:27:05.909087 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 13 00:27:05.909094 kernel: thunder_xcv, ver 1.0
May 13 00:27:05.909102 kernel: thunder_bgx, ver 1.0
May 13 00:27:05.909110 kernel: nicpf, ver 1.0
May 13 00:27:05.909118 kernel: nicvf, ver 1.0
May 13 00:27:05.909189 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 13 00:27:05.909260 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-13T00:27:05 UTC (1747096025)
May 13 00:27:05.909270 kernel: hid: raw HID events driver (C) Jiri Kosina
May 13 00:27:05.909277 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 13 00:27:05.909285 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 13 00:27:05.909292 kernel: watchdog: Hard watchdog permanently disabled
May 13 00:27:05.909301 kernel: NET: Registered PF_INET6 protocol family
May 13 00:27:05.909309 kernel: Segment Routing with IPv6
May 13 00:27:05.909316 kernel: In-situ OAM (IOAM) with IPv6
May 13 00:27:05.909323 kernel: NET: Registered PF_PACKET protocol family
May 13 00:27:05.909330 kernel: Key type dns_resolver registered
May 13 00:27:05.909337 kernel: registered taskstats version 1
May 13 00:27:05.909345 kernel: Loading compiled-in X.509 certificates
May 13 00:27:05.909352 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: ce22d51a4ec909274ada9cb7da7d7cb78db539c6'
May 13 00:27:05.909360 kernel: Key type .fscrypt registered
May 13 00:27:05.909368 kernel: Key type fscrypt-provisioning registered
May 13 00:27:05.909375 kernel: ima: No TPM chip found, activating TPM-bypass!
May 13 00:27:05.909382 kernel: ima: Allocated hash algorithm: sha1
May 13 00:27:05.909390 kernel: ima: No architecture policies found
May 13 00:27:05.909397 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 13 00:27:05.909404 kernel: clk: Disabling unused clocks
May 13 00:27:05.909421 kernel: Freeing unused kernel memory: 39424K
May 13 00:27:05.909428 kernel: Run /init as init process
May 13 00:27:05.909435 kernel: with arguments:
May 13 00:27:05.909444 kernel: /init
May 13 00:27:05.909451 kernel: with environment:
May 13 00:27:05.909458 kernel: HOME=/
May 13 00:27:05.909465 kernel: TERM=linux
May 13 00:27:05.909472 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 13 00:27:05.909481 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 13 00:27:05.909490 systemd[1]: Detected virtualization kvm.
May 13 00:27:05.909499 systemd[1]: Detected architecture arm64.
May 13 00:27:05.909507 systemd[1]: Running in initrd.
May 13 00:27:05.909514 systemd[1]: No hostname configured, using default hostname.
May 13 00:27:05.909522 systemd[1]: Hostname set to .
May 13 00:27:05.909530 systemd[1]: Initializing machine ID from VM UUID.
May 13 00:27:05.909537 systemd[1]: Queued start job for default target initrd.target.
May 13 00:27:05.909545 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 00:27:05.909553 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 00:27:05.909562 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 13 00:27:05.909570 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 00:27:05.909578 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 13 00:27:05.909586 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 13 00:27:05.909596 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 13 00:27:05.909604 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 13 00:27:05.909612 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 00:27:05.909621 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 00:27:05.909629 systemd[1]: Reached target paths.target - Path Units.
May 13 00:27:05.909637 systemd[1]: Reached target slices.target - Slice Units.
May 13 00:27:05.909644 systemd[1]: Reached target swap.target - Swaps.
May 13 00:27:05.909652 systemd[1]: Reached target timers.target - Timer Units.
May 13 00:27:05.909660 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 13 00:27:05.909668 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 00:27:05.909676 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 13 00:27:05.909684 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 13 00:27:05.909693 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 00:27:05.909701 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 00:27:05.909709 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 00:27:05.909716 systemd[1]: Reached target sockets.target - Socket Units.
May 13 00:27:05.909724 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 13 00:27:05.909732 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 00:27:05.909740 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 13 00:27:05.909748 systemd[1]: Starting systemd-fsck-usr.service...
May 13 00:27:05.909757 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 00:27:05.909765 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 00:27:05.909772 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 00:27:05.909780 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 13 00:27:05.909788 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 00:27:05.909796 systemd[1]: Finished systemd-fsck-usr.service.
May 13 00:27:05.909805 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 13 00:27:05.909829 systemd-journald[238]: Collecting audit messages is disabled.
May 13 00:27:05.909848 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 00:27:05.909859 systemd-journald[238]: Journal started
May 13 00:27:05.909877 systemd-journald[238]: Runtime Journal (/run/log/journal/61b7cd729f204a658e9fde278505841a) is 5.9M, max 47.3M, 41.4M free.
May 13 00:27:05.904297 systemd-modules-load[239]: Inserted module 'overlay'
May 13 00:27:05.913158 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 00:27:05.915096 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 00:27:05.917424 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 13 00:27:05.919042 systemd-modules-load[239]: Inserted module 'br_netfilter'
May 13 00:27:05.919953 kernel: Bridge firewalling registered
May 13 00:27:05.925548 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 00:27:05.927260 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 00:27:05.929360 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 00:27:05.931099 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 00:27:05.935594 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 00:27:05.939461 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 00:27:05.940868 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 00:27:05.947904 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 00:27:05.960640 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 00:27:05.961820 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 00:27:05.964393 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 13 00:27:05.977207 dracut-cmdline[279]: dracut-dracut-053
May 13 00:27:05.979610 dracut-cmdline[279]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c683f9f6a9915f3c14a7bce5c93750f29fcd5cf6eb0774e11e882c5681cc19c0
May 13 00:27:05.991349 systemd-resolved[275]: Positive Trust Anchors:
May 13 00:27:05.991368 systemd-resolved[275]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 00:27:05.991400 systemd-resolved[275]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 00:27:05.996042 systemd-resolved[275]: Defaulting to hostname 'linux'.
May 13 00:27:05.996949 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 00:27:06.000529 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 00:27:06.043432 kernel: SCSI subsystem initialized
May 13 00:27:06.047429 kernel: Loading iSCSI transport class v2.0-870.
May 13 00:27:06.055432 kernel: iscsi: registered transport (tcp)
May 13 00:27:06.068441 kernel: iscsi: registered transport (qla4xxx)
May 13 00:27:06.068456 kernel: QLogic iSCSI HBA Driver
May 13 00:27:06.110557 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 13 00:27:06.122548 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 13 00:27:06.139516 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 13 00:27:06.139571 kernel: device-mapper: uevent: version 1.0.3
May 13 00:27:06.140619 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 13 00:27:06.187443 kernel: raid6: neonx8 gen() 15739 MB/s
May 13 00:27:06.204428 kernel: raid6: neonx4 gen() 15632 MB/s
May 13 00:27:06.221439 kernel: raid6: neonx2 gen() 13214 MB/s
May 13 00:27:06.238433 kernel: raid6: neonx1 gen() 10461 MB/s
May 13 00:27:06.255431 kernel: raid6: int64x8 gen() 6969 MB/s
May 13 00:27:06.272437 kernel: raid6: int64x4 gen() 7341 MB/s
May 13 00:27:06.289434 kernel: raid6: int64x2 gen() 6123 MB/s
May 13 00:27:06.306557 kernel: raid6: int64x1 gen() 5052 MB/s
May 13 00:27:06.306578 kernel: raid6: using algorithm neonx8 gen() 15739 MB/s
May 13 00:27:06.324608 kernel: raid6: .... xor() 11914 MB/s, rmw enabled
May 13 00:27:06.324633 kernel: raid6: using neon recovery algorithm
May 13 00:27:06.329429 kernel: xor: measuring software checksum speed
May 13 00:27:06.330829 kernel: 8regs : 17088 MB/sec
May 13 00:27:06.330841 kernel: 32regs : 19013 MB/sec
May 13 00:27:06.331500 kernel: arm64_neon : 26857 MB/sec
May 13 00:27:06.331512 kernel: xor: using function: arm64_neon (26857 MB/sec)
May 13 00:27:06.383444 kernel: Btrfs loaded, zoned=no, fsverity=no
May 13 00:27:06.394243 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 13 00:27:06.405555 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 00:27:06.418261 systemd-udevd[460]: Using default interface naming scheme 'v255'.
May 13 00:27:06.421395 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 00:27:06.431790 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 13 00:27:06.442903 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation
May 13 00:27:06.468803 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 00:27:06.476572 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 00:27:06.515214 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 00:27:06.522576 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 13 00:27:06.533726 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 13 00:27:06.537354 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 00:27:06.538759 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 00:27:06.541240 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 00:27:06.550569 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 13 00:27:06.561721 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 13 00:27:06.569450 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 13 00:27:06.569620 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 13 00:27:06.575241 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 13 00:27:06.575282 kernel: GPT:9289727 != 19775487
May 13 00:27:06.575292 kernel: GPT:Alternate GPT header not at the end of the disk.
May 13 00:27:06.575301 kernel: GPT:9289727 != 19775487
May 13 00:27:06.575317 kernel: GPT: Use GNU Parted to correct GPT errors.
May 13 00:27:06.576435 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:27:06.577877 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 00:27:06.577993 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 00:27:06.583577 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 00:27:06.584630 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 00:27:06.584777 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 00:27:06.587204 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 13 00:27:06.600659 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 00:27:06.606069 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (506)
May 13 00:27:06.606093 kernel: BTRFS: device fsid ffc5eb33-beca-4ca0-9735-b9a50e66f21e devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (507)
May 13 00:27:06.613397 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 13 00:27:06.615465 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 00:27:06.626080 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 13 00:27:06.630754 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 00:27:06.634634 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 13 00:27:06.635825 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 13 00:27:06.644574 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 13 00:27:06.646318 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 00:27:06.652296 disk-uuid[550]: Primary Header is updated.
May 13 00:27:06.652296 disk-uuid[550]: Secondary Entries is updated.
May 13 00:27:06.652296 disk-uuid[550]: Secondary Header is updated.
May 13 00:27:06.655426 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:27:06.667536 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 00:27:06.672273 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:27:07.667427 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:27:07.668386 disk-uuid[551]: The operation has completed successfully.
May 13 00:27:07.688510 systemd[1]: disk-uuid.service: Deactivated successfully.
May 13 00:27:07.688606 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 13 00:27:07.709554 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 13 00:27:07.712777 sh[574]: Success
May 13 00:27:07.731136 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 13 00:27:07.772883 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 13 00:27:07.774812 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 13 00:27:07.776447 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 13 00:27:07.787572 kernel: BTRFS info (device dm-0): first mount of filesystem ffc5eb33-beca-4ca0-9735-b9a50e66f21e
May 13 00:27:07.787612 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 13 00:27:07.787623 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 13 00:27:07.789441 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 13 00:27:07.789468 kernel: BTRFS info (device dm-0): using free space tree
May 13 00:27:07.799365 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 13 00:27:07.800673 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 13 00:27:07.807583 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 13 00:27:07.809236 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 13 00:27:07.817426 kernel: BTRFS info (device vda6): first mount of filesystem 0068254f-7e0d-4c83-ad3e-204802432981
May 13 00:27:07.817463 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 00:27:07.819067 kernel: BTRFS info (device vda6): using free space tree
May 13 00:27:07.821435 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 00:27:07.828595 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 13 00:27:07.830420 kernel: BTRFS info (device vda6): last unmount of filesystem 0068254f-7e0d-4c83-ad3e-204802432981
May 13 00:27:07.836073 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 13 00:27:07.846580 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 13 00:27:07.903240 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 00:27:07.915571 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 00:27:07.937247 systemd-networkd[765]: lo: Link UP
May 13 00:27:07.937258 systemd-networkd[765]: lo: Gained carrier
May 13 00:27:07.937926 systemd-networkd[765]: Enumeration completed
May 13 00:27:07.938601 ignition[667]: Ignition 2.19.0
May 13 00:27:07.938030 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 00:27:07.938608 ignition[667]: Stage: fetch-offline
May 13 00:27:07.938621 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 00:27:07.938640 ignition[667]: no configs at "/usr/lib/ignition/base.d"
May 13 00:27:07.938625 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 00:27:07.938649 ignition[667]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:27:07.939601 systemd-networkd[765]: eth0: Link UP
May 13 00:27:07.938830 ignition[667]: parsed url from cmdline: ""
May 13 00:27:07.939604 systemd-networkd[765]: eth0: Gained carrier
May 13 00:27:07.938833 ignition[667]: no config URL provided
May 13 00:27:07.939610 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 00:27:07.938837 ignition[667]: reading system config file "/usr/lib/ignition/user.ign"
May 13 00:27:07.939976 systemd[1]: Reached target network.target - Network.
May 13 00:27:07.938844 ignition[667]: no config at "/usr/lib/ignition/user.ign"
May 13 00:27:07.938870 ignition[667]: op(1): [started] loading QEMU firmware config module
May 13 00:27:07.938879 ignition[667]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 13 00:27:07.958447 systemd-networkd[765]: eth0: DHCPv4 address 10.0.0.91/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 00:27:07.953766 ignition[667]: op(1): [finished] loading QEMU firmware config module
May 13 00:27:07.996197 ignition[667]: parsing config with SHA512: 9882edfb71907f3dfea5791f7e690c6947f2d18ea8ebee0de04fe95b2a6c24198e163b70daa2b7b4934aaa2f00bc99d1cd8fee79fdf3a4eb4735d39b5ded84a8
May 13 00:27:08.000093 unknown[667]: fetched base config from "system"
May 13 00:27:08.000104 unknown[667]: fetched user config from "qemu"
May 13 00:27:08.000538 ignition[667]: fetch-offline: fetch-offline passed
May 13 00:27:08.001676 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 00:27:08.000599 ignition[667]: Ignition finished successfully
May 13 00:27:08.003450 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 13 00:27:08.009548 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 13 00:27:08.020701 ignition[771]: Ignition 2.19.0
May 13 00:27:08.020710 ignition[771]: Stage: kargs
May 13 00:27:08.020879 ignition[771]: no configs at "/usr/lib/ignition/base.d"
May 13 00:27:08.020889 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:27:08.021756 ignition[771]: kargs: kargs passed
May 13 00:27:08.024467 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 13 00:27:08.021799 ignition[771]: Ignition finished successfully
May 13 00:27:08.026466 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 13 00:27:08.039443 ignition[779]: Ignition 2.19.0
May 13 00:27:08.039454 ignition[779]: Stage: disks
May 13 00:27:08.039620 ignition[779]: no configs at "/usr/lib/ignition/base.d"
May 13 00:27:08.039630 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:27:08.042220 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 13 00:27:08.040494 ignition[779]: disks: disks passed
May 13 00:27:08.043466 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 13 00:27:08.040539 ignition[779]: Ignition finished successfully
May 13 00:27:08.046218 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 13 00:27:08.048092 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 00:27:08.050069 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 00:27:08.051741 systemd[1]: Reached target basic.target - Basic System.
May 13 00:27:08.068598 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 13 00:27:08.079336 systemd-fsck[789]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 13 00:27:08.083439 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 13 00:27:08.086384 systemd[1]: Mounting sysroot.mount - /sysroot...
May 13 00:27:08.136238 systemd[1]: Mounted sysroot.mount - /sysroot.
May 13 00:27:08.137761 kernel: EXT4-fs (vda9): mounted filesystem 9903c37e-4e5a-41d4-80e5-5c3428d04b7e r/w with ordered data mode. Quota mode: none.
May 13 00:27:08.137532 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 13 00:27:08.146522 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 00:27:08.148259 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 13 00:27:08.149444 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 13 00:27:08.149494 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 13 00:27:08.149516 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 00:27:08.158384 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (797)
May 13 00:27:08.156153 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 13 00:27:08.162738 kernel: BTRFS info (device vda6): first mount of filesystem 0068254f-7e0d-4c83-ad3e-204802432981
May 13 00:27:08.162757 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 00:27:08.162767 kernel: BTRFS info (device vda6): using free space tree
May 13 00:27:08.158334 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 13 00:27:08.166469 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 00:27:08.167425 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 00:27:08.200575 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory
May 13 00:27:08.205739 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory
May 13 00:27:08.209688 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory
May 13 00:27:08.213472 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory
May 13 00:27:08.284271 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 13 00:27:08.292553 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 13 00:27:08.294883 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 13 00:27:08.300425 kernel: BTRFS info (device vda6): last unmount of filesystem 0068254f-7e0d-4c83-ad3e-204802432981
May 13 00:27:08.316865 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 13 00:27:08.319714 ignition[912]: INFO : Ignition 2.19.0
May 13 00:27:08.319714 ignition[912]: INFO : Stage: mount
May 13 00:27:08.322098 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 00:27:08.322098 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:27:08.322098 ignition[912]: INFO : mount: mount passed
May 13 00:27:08.322098 ignition[912]: INFO : Ignition finished successfully
May 13 00:27:08.322813 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 13 00:27:08.334540 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 13 00:27:08.786313 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 13 00:27:08.799706 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 00:27:08.805420 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (924)
May 13 00:27:08.805447 kernel: BTRFS info (device vda6): first mount of filesystem 0068254f-7e0d-4c83-ad3e-204802432981
May 13 00:27:08.807431 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 00:27:08.807446 kernel: BTRFS info (device vda6): using free space tree
May 13 00:27:08.814437 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 00:27:08.814992 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 00:27:08.836998 ignition[941]: INFO : Ignition 2.19.0
May 13 00:27:08.836998 ignition[941]: INFO : Stage: files
May 13 00:27:08.838737 ignition[941]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 00:27:08.838737 ignition[941]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:27:08.838737 ignition[941]: DEBUG : files: compiled without relabeling support, skipping
May 13 00:27:08.842254 ignition[941]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 13 00:27:08.842254 ignition[941]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 13 00:27:08.842254 ignition[941]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 13 00:27:08.846263 ignition[941]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 13 00:27:08.846263 ignition[941]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 13 00:27:08.846263 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 13 00:27:08.846263 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
May 13 00:27:08.843869 unknown[941]: wrote ssh authorized keys file for user: core
May 13 00:27:08.894021 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 13 00:27:08.972630 systemd-networkd[765]: eth0: Gained IPv6LL
May 13 00:27:09.161613 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 13 00:27:09.161613 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 13 00:27:09.165398 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 13 00:27:09.165398 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 13 00:27:09.165398 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 13 00:27:09.165398 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 00:27:09.165398 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 00:27:09.165398 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 00:27:09.165398 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 00:27:09.165398 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 13 00:27:09.165398 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 13 00:27:09.165398 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 13 00:27:09.165398 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 13 00:27:09.165398 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 13 00:27:09.165398 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
May 13 00:27:09.480510 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 13 00:27:09.829879 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 13 00:27:09.829879 ignition[941]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 13 00:27:09.833307 ignition[941]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 00:27:09.833307 ignition[941]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 00:27:09.833307 ignition[941]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 13 00:27:09.833307 ignition[941]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
May 13 00:27:09.833307 ignition[941]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 00:27:09.833307 ignition[941]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 00:27:09.833307 ignition[941]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
May 13 00:27:09.833307 ignition[941]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
May 13 00:27:09.851674 ignition[941]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 13 00:27:09.854925 ignition[941]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 13 00:27:09.856712 ignition[941]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
May 13 00:27:09.856712 ignition[941]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
May 13 00:27:09.856712 ignition[941]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
May 13 00:27:09.856712 ignition[941]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
May 13 00:27:09.856712 ignition[941]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 13 00:27:09.856712 ignition[941]: INFO : files: files passed
May 13 00:27:09.856712 ignition[941]: INFO : Ignition finished successfully
May 13 00:27:09.857060 systemd[1]: Finished ignition-files.service - Ignition (files).
May 13 00:27:09.870576 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 13 00:27:09.873735 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 13 00:27:09.875015 systemd[1]: ignition-quench.service: Deactivated successfully.
May 13 00:27:09.875097 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 13 00:27:09.880919 initrd-setup-root-after-ignition[969]: grep: /sysroot/oem/oem-release: No such file or directory
May 13 00:27:09.883575 initrd-setup-root-after-ignition[971]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 00:27:09.883575 initrd-setup-root-after-ignition[971]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 13 00:27:09.886499 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 00:27:09.886113 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 00:27:09.890644 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 13 00:27:09.901541 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 13 00:27:09.919266 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 13 00:27:09.919365 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 13 00:27:09.922681 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 13 00:27:09.924425 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 13 00:27:09.926228 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 13 00:27:09.926935 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 13 00:27:09.942056 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 00:27:09.952558 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 13 00:27:09.959716 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 13 00:27:09.960912 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 00:27:09.962890 systemd[1]: Stopped target timers.target - Timer Units.
May 13 00:27:09.964618 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 13 00:27:09.964726 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 00:27:09.967150 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 13 00:27:09.968257 systemd[1]: Stopped target basic.target - Basic System.
May 13 00:27:09.970044 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 13 00:27:09.971825 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 00:27:09.973561 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 13 00:27:09.975424 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 13 00:27:09.977290 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 00:27:09.979296 systemd[1]: Stopped target sysinit.target - System Initialization.
May 13 00:27:09.981035 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 13 00:27:09.982921 systemd[1]: Stopped target swap.target - Swaps.
May 13 00:27:09.984432 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 13 00:27:09.984541 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 13 00:27:09.986839 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 13 00:27:09.988653 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 00:27:09.990509 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 13 00:27:09.991485 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 00:27:09.993566 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 13 00:27:09.993669 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 13 00:27:09.996424 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 13 00:27:09.996533 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 13 00:27:09.998470 systemd[1]: Stopped target paths.target - Path Units. May 13 00:27:10.000043 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 13 00:27:10.001470 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 00:27:10.003007 systemd[1]: Stopped target slices.target - Slice Units. May 13 00:27:10.004460 systemd[1]: Stopped target sockets.target - Socket Units. May 13 00:27:10.006163 systemd[1]: iscsid.socket: Deactivated successfully. May 13 00:27:10.006259 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 13 00:27:10.008375 systemd[1]: iscsiuio.socket: Deactivated successfully. May 13 00:27:10.008470 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 00:27:10.010035 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 13 00:27:10.010137 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 00:27:10.011836 systemd[1]: ignition-files.service: Deactivated successfully. May 13 00:27:10.011931 systemd[1]: Stopped ignition-files.service - Ignition (files). May 13 00:27:10.028569 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 13 00:27:10.030141 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 13 00:27:10.031097 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 13 00:27:10.031229 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 13 00:27:10.033132 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 13 00:27:10.033240 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 13 00:27:10.038933 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 13 00:27:10.039023 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 13 00:27:10.043131 ignition[995]: INFO : Ignition 2.19.0 May 13 00:27:10.043131 ignition[995]: INFO : Stage: umount May 13 00:27:10.043131 ignition[995]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 00:27:10.043131 ignition[995]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:27:10.043131 ignition[995]: INFO : umount: umount passed May 13 00:27:10.043131 ignition[995]: INFO : Ignition finished successfully May 13 00:27:10.043495 systemd[1]: ignition-mount.service: Deactivated successfully. May 13 00:27:10.044465 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 13 00:27:10.045926 systemd[1]: Stopped target network.target - Network. May 13 00:27:10.047583 systemd[1]: ignition-disks.service: Deactivated successfully. May 13 00:27:10.047638 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
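Ignition runs its umount stage above with no embedded config and reports success; the record of the whole run was already written to /sysroot/etc/.ignition-result.json during the files stage. A small sketch for inspecting that file on the booted system, assuming only that it is JSON (its schema is not shown in this log):

    import json

    # /sysroot/etc/... in the initrd becomes /etc/... after switch-root.
    with open("/etc/.ignition-result.json") as f:
        result = json.load(f)
    for key, value in result.items():  # dump whatever keys this version wrote
        print(f"{key}: {value}")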
May 13 00:27:10.049605 systemd[1]: ignition-kargs.service: Deactivated successfully. May 13 00:27:10.049652 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 13 00:27:10.051196 systemd[1]: ignition-setup.service: Deactivated successfully. May 13 00:27:10.051240 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 13 00:27:10.052968 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 13 00:27:10.053012 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 13 00:27:10.054842 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 13 00:27:10.056570 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 13 00:27:10.058910 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 13 00:27:10.059390 systemd[1]: sysroot-boot.service: Deactivated successfully. May 13 00:27:10.059505 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 13 00:27:10.061264 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 13 00:27:10.061350 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 13 00:27:10.062932 systemd[1]: systemd-resolved.service: Deactivated successfully. May 13 00:27:10.063036 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 13 00:27:10.065380 systemd-networkd[765]: eth0: DHCPv6 lease lost May 13 00:27:10.066109 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 13 00:27:10.066198 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 00:27:10.067749 systemd[1]: systemd-networkd.service: Deactivated successfully. May 13 00:27:10.069454 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 13 00:27:10.071838 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 13 00:27:10.071898 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 13 00:27:10.086513 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 13 00:27:10.087799 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 13 00:27:10.087860 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 13 00:27:10.089819 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 00:27:10.089860 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 13 00:27:10.091595 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 13 00:27:10.091635 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 13 00:27:10.093552 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 00:27:10.104720 systemd[1]: network-cleanup.service: Deactivated successfully. May 13 00:27:10.104848 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 13 00:27:10.108957 systemd[1]: systemd-udevd.service: Deactivated successfully. May 13 00:27:10.109088 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 00:27:10.111305 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 13 00:27:10.111341 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 13 00:27:10.113148 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 13 00:27:10.113186 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
May 13 00:27:10.114959 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 13 00:27:10.115002 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 13 00:27:10.117578 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 13 00:27:10.117620 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 13 00:27:10.120248 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 00:27:10.120290 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 00:27:10.134547 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 13 00:27:10.135559 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 13 00:27:10.135612 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 00:27:10.137691 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 13 00:27:10.137732 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 00:27:10.139675 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 13 00:27:10.139715 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 13 00:27:10.141794 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 00:27:10.141835 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 00:27:10.143960 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 13 00:27:10.145447 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 13 00:27:10.147290 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 13 00:27:10.149755 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 13 00:27:10.158308 systemd[1]: Switching root. May 13 00:27:10.191304 systemd-journald[238]: Journal stopped May 13 00:27:10.949568 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). May 13 00:27:10.949630 kernel: SELinux: policy capability network_peer_controls=1 May 13 00:27:10.949644 kernel: SELinux: policy capability open_perms=1 May 13 00:27:10.949659 kernel: SELinux: policy capability extended_socket_class=1 May 13 00:27:10.949670 kernel: SELinux: policy capability always_check_network=0 May 13 00:27:10.949684 kernel: SELinux: policy capability cgroup_seclabel=1 May 13 00:27:10.949695 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 13 00:27:10.949705 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 13 00:27:10.949718 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 13 00:27:10.949728 kernel: audit: type=1403 audit(1747096030.336:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 13 00:27:10.949739 systemd[1]: Successfully loaded SELinux policy in 34.892ms. May 13 00:27:10.949754 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.409ms. May 13 00:27:10.949769 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 13 00:27:10.949781 systemd[1]: Detected virtualization kvm. May 13 00:27:10.949792 systemd[1]: Detected architecture arm64. 
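Journald stops at 00:27:10.191304 for the root switch and is running again in the real root by 00:27:10.949568, so the two stamps above bracket the initrd-to-rootfs handoff. A quick sketch that diffs short-precise-style prefixes like these (the year is an assumption, since the prefix omits it):

    from datetime import datetime

    def parse(stamp: str, year: int = 2025) -> datetime:
        # "May 13 00:27:10.191304" carries no year, so one is supplied.
        return datetime.strptime(f"{year} {stamp}", "%Y %b %d %H:%M:%S.%f")

    switch_root = parse("May 13 00:27:10.191304")  # "Switching root."
    journal_up = parse("May 13 00:27:10.949568")   # journald back up in real root
    print(f"handoff took {(journal_up - switch_root).total_seconds():.3f}s")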
May 13 00:27:10.949804 systemd[1]: Detected first boot. May 13 00:27:10.949815 systemd[1]: Initializing machine ID from VM UUID. May 13 00:27:10.949829 zram_generator::config[1040]: No configuration found. May 13 00:27:10.949843 systemd[1]: Populated /etc with preset unit settings. May 13 00:27:10.949856 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 13 00:27:10.949868 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 13 00:27:10.949879 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 13 00:27:10.949895 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 13 00:27:10.949907 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 13 00:27:10.949919 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 13 00:27:10.949933 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 13 00:27:10.949945 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 13 00:27:10.949956 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 13 00:27:10.949967 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 13 00:27:10.949978 systemd[1]: Created slice user.slice - User and Session Slice. May 13 00:27:10.949988 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 00:27:10.949999 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 00:27:10.950010 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 13 00:27:10.950021 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 13 00:27:10.950034 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 13 00:27:10.950045 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 00:27:10.950056 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 13 00:27:10.950067 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 00:27:10.950078 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 13 00:27:10.950089 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 13 00:27:10.950101 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 13 00:27:10.950113 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 13 00:27:10.950124 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 00:27:10.950135 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 13 00:27:10.950147 systemd[1]: Reached target slices.target - Slice Units. May 13 00:27:10.950157 systemd[1]: Reached target swap.target - Swaps. May 13 00:27:10.950168 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 13 00:27:10.950186 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 13 00:27:10.950199 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 13 00:27:10.950210 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
May 13 00:27:10.950222 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 13 00:27:10.950235 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 13 00:27:10.950246 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 13 00:27:10.950257 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 13 00:27:10.950268 systemd[1]: Mounting media.mount - External Media Directory... May 13 00:27:10.950279 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 13 00:27:10.950289 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 13 00:27:10.950300 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 13 00:27:10.950312 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 13 00:27:10.950324 systemd[1]: Reached target machines.target - Containers. May 13 00:27:10.950336 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 13 00:27:10.950348 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 00:27:10.950359 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 00:27:10.950370 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 13 00:27:10.950381 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 00:27:10.950393 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 00:27:10.950404 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 00:27:10.950425 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 13 00:27:10.950439 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 00:27:10.950450 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 13 00:27:10.950461 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 13 00:27:10.950472 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 13 00:27:10.950483 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 13 00:27:10.950494 kernel: fuse: init (API version 7.39) May 13 00:27:10.950505 systemd[1]: Stopped systemd-fsck-usr.service. May 13 00:27:10.950516 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 00:27:10.950529 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 00:27:10.950539 kernel: loop: module loaded May 13 00:27:10.950550 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 13 00:27:10.950561 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 13 00:27:10.950571 kernel: ACPI: bus type drm_connector registered May 13 00:27:10.950582 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 13 00:27:10.950594 systemd[1]: verity-setup.service: Deactivated successfully. May 13 00:27:10.950605 systemd[1]: Stopped verity-setup.service. May 13 00:27:10.950615 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
May 13 00:27:10.950626 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 13 00:27:10.950639 systemd[1]: Mounted media.mount - External Media Directory. May 13 00:27:10.950650 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 13 00:27:10.950661 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 13 00:27:10.950691 systemd-journald[1111]: Collecting audit messages is disabled. May 13 00:27:10.950715 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 13 00:27:10.950726 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 00:27:10.950739 systemd-journald[1111]: Journal started May 13 00:27:10.950761 systemd-journald[1111]: Runtime Journal (/run/log/journal/61b7cd729f204a658e9fde278505841a) is 5.9M, max 47.3M, 41.4M free. May 13 00:27:10.722644 systemd[1]: Queued start job for default target multi-user.target. May 13 00:27:10.749875 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 13 00:27:10.750248 systemd[1]: systemd-journald.service: Deactivated successfully. May 13 00:27:10.954807 systemd[1]: Started systemd-journald.service - Journal Service. May 13 00:27:10.955588 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 13 00:27:10.957059 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 13 00:27:10.958476 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 13 00:27:10.959870 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:27:10.960010 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 00:27:10.961390 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 00:27:10.961544 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 00:27:10.962812 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:27:10.962951 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 00:27:10.964399 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 13 00:27:10.964665 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 13 00:27:10.966046 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:27:10.967449 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 00:27:10.968720 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 00:27:10.970223 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 13 00:27:10.971766 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 13 00:27:10.984642 systemd[1]: Reached target network-pre.target - Preparation for Network. May 13 00:27:10.994495 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 13 00:27:10.996605 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 13 00:27:10.997751 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 13 00:27:10.997792 systemd[1]: Reached target local-fs.target - Local File Systems. May 13 00:27:10.999753 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 13 00:27:11.002058 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
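The Runtime Journal line above is self-consistent: journald reports the current size, the cap, and the free headroom for /run/log/journal, and used plus free equals the cap. Checked with the logged figures:

    # MiB figures from "Runtime Journal ... is 5.9M, max 47.3M, 41.4M free" above.
    used, cap, free = 5.9, 47.3, 41.4
    assert abs((used + free) - cap) < 0.05
    print(f"runtime journal is {used / cap:.0%} full")  # ~12%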
May 13 00:27:11.004191 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 13 00:27:11.005326 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 00:27:11.006858 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 13 00:27:11.008853 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 13 00:27:11.010142 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:27:11.013573 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 13 00:27:11.014854 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 00:27:11.016592 systemd-journald[1111]: Time spent on flushing to /var/log/journal/61b7cd729f204a658e9fde278505841a is 19.549ms for 855 entries. May 13 00:27:11.016592 systemd-journald[1111]: System Journal (/var/log/journal/61b7cd729f204a658e9fde278505841a) is 8.0M, max 195.6M, 187.6M free. May 13 00:27:11.054902 systemd-journald[1111]: Received client request to flush runtime journal. May 13 00:27:11.054958 kernel: loop0: detected capacity change from 0 to 114328 May 13 00:27:11.054984 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 13 00:27:11.016629 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 00:27:11.020648 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 13 00:27:11.023579 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 13 00:27:11.027532 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 13 00:27:11.028993 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 13 00:27:11.030485 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 13 00:27:11.031903 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 13 00:27:11.035544 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 13 00:27:11.041071 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 13 00:27:11.056069 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 13 00:27:11.058735 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 13 00:27:11.061814 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 13 00:27:11.065531 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 00:27:11.076953 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 13 00:27:11.077828 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 13 00:27:11.080165 udevadm[1165]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 13 00:27:11.083474 kernel: loop1: detected capacity change from 0 to 114432 May 13 00:27:11.083967 systemd-tmpfiles[1152]: ACLs are not supported, ignoring. May 13 00:27:11.083989 systemd-tmpfiles[1152]: ACLs are not supported, ignoring. 
May 13 00:27:11.088228 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 00:27:11.098597 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 13 00:27:11.116434 kernel: loop2: detected capacity change from 0 to 201592 May 13 00:27:11.119903 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 13 00:27:11.130592 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 00:27:11.146803 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. May 13 00:27:11.146819 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. May 13 00:27:11.151075 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 00:27:11.161476 kernel: loop3: detected capacity change from 0 to 114328 May 13 00:27:11.165530 kernel: loop4: detected capacity change from 0 to 114432 May 13 00:27:11.169493 kernel: loop5: detected capacity change from 0 to 201592 May 13 00:27:11.173255 (sd-merge)[1180]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 13 00:27:11.173622 (sd-merge)[1180]: Merged extensions into '/usr'. May 13 00:27:11.176973 systemd[1]: Reloading requested from client PID 1151 ('systemd-sysext') (unit systemd-sysext.service)... May 13 00:27:11.176984 systemd[1]: Reloading... May 13 00:27:11.238436 zram_generator::config[1206]: No configuration found. May 13 00:27:11.294524 ldconfig[1146]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 13 00:27:11.340190 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:27:11.376250 systemd[1]: Reloading finished in 198 ms. May 13 00:27:11.404445 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 13 00:27:11.405974 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 13 00:27:11.427728 systemd[1]: Starting ensure-sysext.service... May 13 00:27:11.430606 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 13 00:27:11.440860 systemd[1]: Reloading requested from client PID 1240 ('systemctl') (unit ensure-sysext.service)... May 13 00:27:11.440877 systemd[1]: Reloading... May 13 00:27:11.451714 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 13 00:27:11.451962 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 13 00:27:11.452602 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 13 00:27:11.452803 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. May 13 00:27:11.452848 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. May 13 00:27:11.455704 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot. May 13 00:27:11.455811 systemd-tmpfiles[1242]: Skipping /boot May 13 00:27:11.462542 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot. May 13 00:27:11.462640 systemd-tmpfiles[1242]: Skipping /boot May 13 00:27:11.489477 zram_generator::config[1272]: No configuration found. 
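sd-merge overlays the three images above onto /usr only because each carries an extension-release file whose fields match the host's os-release; incompatible images are refused. A rough sketch of that compatibility rule, assuming the standard file locations; this approximates systemd's matching logic rather than reproducing it:

    def read_release(path: str) -> dict:
        # Parse KEY=VALUE lines, skipping comments and stripping quotes.
        entries = {}
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith("#") and "=" in line:
                    key, _, value = line.partition("=")
                    entries[key] = value.strip('"')
        return entries

    host = read_release("/etc/os-release")
    ext = read_release("/usr/lib/extension-release.d/extension-release.kubernetes")

    id_ok = ext.get("ID") in ("_any", host.get("ID"))
    if "SYSEXT_LEVEL" in ext:
        level_ok = ext["SYSEXT_LEVEL"] == host.get("SYSEXT_LEVEL")
    else:
        level_ok = ext.get("VERSION_ID") == host.get("VERSION_ID")
    print("compatible" if id_ok and level_ok else "incompatible")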
May 13 00:27:11.564330 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:27:11.600077 systemd[1]: Reloading finished in 158 ms. May 13 00:27:11.613308 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 13 00:27:11.627901 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 00:27:11.635483 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 13 00:27:11.638059 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 13 00:27:11.640755 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 13 00:27:11.643738 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 00:27:11.646230 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 00:27:11.648672 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 13 00:27:11.654609 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 00:27:11.658714 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 00:27:11.661156 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 00:27:11.678101 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 00:27:11.680882 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 00:27:11.681830 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 13 00:27:11.684029 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:27:11.684464 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 00:27:11.686362 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:27:11.686496 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 00:27:11.687957 systemd-udevd[1311]: Using default interface naming scheme 'v255'. May 13 00:27:11.688428 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:27:11.688568 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 00:27:11.696619 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 00:27:11.706651 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 00:27:11.708853 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 00:27:11.711327 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 00:27:11.712481 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 00:27:11.714680 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 13 00:27:11.721722 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 13 00:27:11.723984 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
May 13 00:27:11.729491 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 13 00:27:11.732599 augenrules[1347]: No rules May 13 00:27:11.733016 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 13 00:27:11.735266 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 13 00:27:11.736829 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:27:11.736957 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 00:27:11.738698 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:27:11.740453 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 00:27:11.742095 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:27:11.742229 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 00:27:11.744000 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 13 00:27:11.756306 systemd[1]: Finished ensure-sysext.service. May 13 00:27:11.760854 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 13 00:27:11.763285 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 00:27:11.772431 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1345) May 13 00:27:11.774704 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 00:27:11.781135 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 00:27:11.787612 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 00:27:11.792589 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 00:27:11.793744 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 00:27:11.797821 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 13 00:27:11.801000 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 13 00:27:11.802318 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:27:11.802895 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:27:11.804430 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 00:27:11.805969 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 00:27:11.806102 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 00:27:11.807563 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:27:11.807698 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 00:27:11.809788 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:27:11.809919 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 00:27:11.811530 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 13 00:27:11.820449 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 13 00:27:11.831557 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
May 13 00:27:11.832729 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:27:11.832794 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 00:27:11.851189 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 13 00:27:11.864995 systemd-resolved[1310]: Positive Trust Anchors: May 13 00:27:11.865013 systemd-resolved[1310]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 00:27:11.865048 systemd-resolved[1310]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 00:27:11.871292 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 13 00:27:11.872927 systemd[1]: Reached target time-set.target - System Time Set. May 13 00:27:11.878508 systemd-resolved[1310]: Defaulting to hostname 'linux'. May 13 00:27:11.888037 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 00:27:11.889385 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 00:27:11.905733 systemd-networkd[1383]: lo: Link UP May 13 00:27:11.905741 systemd-networkd[1383]: lo: Gained carrier May 13 00:27:11.906468 systemd-networkd[1383]: Enumeration completed May 13 00:27:11.906937 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 00:27:11.910267 systemd[1]: Reached target network.target - Network. May 13 00:27:11.915648 systemd-networkd[1383]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 00:27:11.915658 systemd-networkd[1383]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 00:27:11.919793 systemd-networkd[1383]: eth0: Link UP May 13 00:27:11.919804 systemd-networkd[1383]: eth0: Gained carrier May 13 00:27:11.919820 systemd-networkd[1383]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 00:27:11.922651 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 13 00:27:11.925075 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 00:27:11.936796 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 13 00:27:11.940770 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 13 00:27:11.946492 systemd-networkd[1383]: eth0: DHCPv4 address 10.0.0.91/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 13 00:27:11.947592 systemd-timesyncd[1385]: Network configuration changed, trying to establish connection. May 13 00:27:11.949135 systemd-timesyncd[1385]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
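eth0 matched the catch-all zz-default.network and obtained 10.0.0.91/16 with gateway 10.0.0.1 over DHCPv4; the derived network values follow directly from the prefix length. A quick check with Python's ipaddress module using the lease values above:

    import ipaddress

    iface = ipaddress.ip_interface("10.0.0.91/16")   # from the DHCPv4 line
    gateway = ipaddress.ip_address("10.0.0.1")

    print(iface.network)                    # 10.0.0.0/16
    print(iface.network.broadcast_address)  # 10.0.255.255
    print(gateway in iface.network)         # True: the gateway is on-link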
May 13 00:27:11.949209 systemd-timesyncd[1385]: Initial clock synchronization to Tue 2025-05-13 00:27:11.743477 UTC. May 13 00:27:11.959299 lvm[1403]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 00:27:11.975565 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 00:27:11.986907 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 13 00:27:11.989494 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 00:27:11.990626 systemd[1]: Reached target sysinit.target - System Initialization. May 13 00:27:11.991779 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 13 00:27:11.993033 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 13 00:27:11.994523 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 13 00:27:11.995728 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 13 00:27:11.997007 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 13 00:27:11.998275 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 13 00:27:11.998323 systemd[1]: Reached target paths.target - Path Units. May 13 00:27:11.999248 systemd[1]: Reached target timers.target - Timer Units. May 13 00:27:12.004901 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 13 00:27:12.007565 systemd[1]: Starting docker.socket - Docker Socket for the API... May 13 00:27:12.019768 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 13 00:27:12.022134 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 13 00:27:12.023739 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 13 00:27:12.024876 systemd[1]: Reached target sockets.target - Socket Units. May 13 00:27:12.025827 systemd[1]: Reached target basic.target - Basic System. May 13 00:27:12.026754 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 13 00:27:12.026785 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 13 00:27:12.027740 systemd[1]: Starting containerd.service - containerd container runtime... May 13 00:27:12.029733 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 13 00:27:12.031280 lvm[1411]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 00:27:12.033542 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 13 00:27:12.038713 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 13 00:27:12.041224 jq[1414]: false May 13 00:27:12.041418 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 13 00:27:12.042693 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 13 00:27:12.047541 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 13 00:27:12.050256 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
May 13 00:27:12.055589 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 13 00:27:12.062317 systemd[1]: Starting systemd-logind.service - User Login Management... May 13 00:27:12.065774 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 13 00:27:12.066181 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 13 00:27:12.067066 dbus-daemon[1413]: [system] SELinux support is enabled May 13 00:27:12.067721 systemd[1]: Starting update-engine.service - Update Engine... May 13 00:27:12.071711 extend-filesystems[1415]: Found loop3 May 13 00:27:12.071711 extend-filesystems[1415]: Found loop4 May 13 00:27:12.071711 extend-filesystems[1415]: Found loop5 May 13 00:27:12.071711 extend-filesystems[1415]: Found vda May 13 00:27:12.071711 extend-filesystems[1415]: Found vda1 May 13 00:27:12.071711 extend-filesystems[1415]: Found vda2 May 13 00:27:12.071711 extend-filesystems[1415]: Found vda3 May 13 00:27:12.071711 extend-filesystems[1415]: Found usr May 13 00:27:12.071711 extend-filesystems[1415]: Found vda4 May 13 00:27:12.071711 extend-filesystems[1415]: Found vda6 May 13 00:27:12.071711 extend-filesystems[1415]: Found vda7 May 13 00:27:12.071711 extend-filesystems[1415]: Found vda9 May 13 00:27:12.071711 extend-filesystems[1415]: Checking size of /dev/vda9 May 13 00:27:12.071678 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 13 00:27:12.073501 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 13 00:27:12.079502 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 13 00:27:12.090774 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 13 00:27:12.090943 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 13 00:27:12.091185 systemd[1]: motdgen.service: Deactivated successfully. May 13 00:27:12.091310 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 13 00:27:12.092403 extend-filesystems[1415]: Resized partition /dev/vda9 May 13 00:27:12.095970 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 13 00:27:12.096104 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 13 00:27:12.104455 jq[1430]: true May 13 00:27:12.106776 (ntainerd)[1440]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 13 00:27:12.109722 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1360) May 13 00:27:12.116235 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 13 00:27:12.116273 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 13 00:27:12.119306 extend-filesystems[1442]: resize2fs 1.47.1 (20-May-2024) May 13 00:27:12.120261 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
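extend-filesystems has enumerated the partitions above and kicked off resize2fs; the messages just below grow /dev/vda9 from 553472 to 1864699 blocks of 4 KiB. The arithmetic behind those figures:

    # Block counts from the EXT4 resize messages below; 4 KiB blocks.
    old_blocks, new_blocks, block_size = 553_472, 1_864_699, 4096

    to_gib = lambda blocks: blocks * block_size / 2**30
    print(f"before: {to_gib(old_blocks):.2f} GiB")  # ~2.11 GiB
    print(f"after:  {to_gib(new_blocks):.2f} GiB")  # ~7.11 GiB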
May 13 00:27:12.120281 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 13 00:27:12.125442 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 13 00:27:12.131008 update_engine[1428]: I20250513 00:27:12.130592 1428 main.cc:92] Flatcar Update Engine starting May 13 00:27:12.132387 tar[1437]: linux-arm64/LICENSE May 13 00:27:12.132649 tar[1437]: linux-arm64/helm May 13 00:27:12.132953 systemd[1]: Started update-engine.service - Update Engine. May 13 00:27:12.133041 update_engine[1428]: I20250513 00:27:12.133000 1428 update_check_scheduler.cc:74] Next update check in 9m3s May 13 00:27:12.143684 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 13 00:27:12.150927 jq[1444]: true May 13 00:27:12.154483 systemd-logind[1424]: Watching system buttons on /dev/input/event0 (Power Button) May 13 00:27:12.155539 systemd-logind[1424]: New seat seat0. May 13 00:27:12.159023 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 13 00:27:12.156867 systemd[1]: Started systemd-logind.service - User Login Management. May 13 00:27:12.174648 extend-filesystems[1442]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 13 00:27:12.174648 extend-filesystems[1442]: old_desc_blocks = 1, new_desc_blocks = 1 May 13 00:27:12.174648 extend-filesystems[1442]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 13 00:27:12.179255 extend-filesystems[1415]: Resized filesystem in /dev/vda9 May 13 00:27:12.179236 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 00:27:12.179420 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 13 00:27:12.218178 bash[1474]: Updated "/home/core/.ssh/authorized_keys" May 13 00:27:12.219691 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 13 00:27:12.222095 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 13 00:27:12.226951 locksmithd[1451]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 00:27:12.322370 containerd[1440]: time="2025-05-13T00:27:12.322111479Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 13 00:27:12.351133 containerd[1440]: time="2025-05-13T00:27:12.350908159Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 13 00:27:12.352463 containerd[1440]: time="2025-05-13T00:27:12.352425113Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 13 00:27:12.352463 containerd[1440]: time="2025-05-13T00:27:12.352459876Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 13 00:27:12.352538 containerd[1440]: time="2025-05-13T00:27:12.352475465Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 13 00:27:12.352681 containerd[1440]: time="2025-05-13T00:27:12.352634511Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 13 00:27:12.352681 containerd[1440]: time="2025-05-13T00:27:12.352652867Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 May 13 00:27:12.352738 containerd[1440]: time="2025-05-13T00:27:12.352705401Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:27:12.352738 containerd[1440]: time="2025-05-13T00:27:12.352717522Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 13 00:27:12.352900 containerd[1440]: time="2025-05-13T00:27:12.352865460Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:27:12.352900 containerd[1440]: time="2025-05-13T00:27:12.352898236Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 13 00:27:12.352963 containerd[1440]: time="2025-05-13T00:27:12.352911291Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:27:12.352963 containerd[1440]: time="2025-05-13T00:27:12.352921580Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 13 00:27:12.353002 containerd[1440]: time="2025-05-13T00:27:12.352992704Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 13 00:27:12.353199 containerd[1440]: time="2025-05-13T00:27:12.353177550Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 13 00:27:12.353304 containerd[1440]: time="2025-05-13T00:27:12.353284022Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:27:12.353304 containerd[1440]: time="2025-05-13T00:27:12.353301364Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 13 00:27:12.353399 containerd[1440]: time="2025-05-13T00:27:12.353374437Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 13 00:27:12.353492 containerd[1440]: time="2025-05-13T00:27:12.353443574Z" level=info msg="metadata content store policy set" policy=shared May 13 00:27:12.356841 containerd[1440]: time="2025-05-13T00:27:12.356808434Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 13 00:27:12.356909 containerd[1440]: time="2025-05-13T00:27:12.356861164Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 13 00:27:12.356909 containerd[1440]: time="2025-05-13T00:27:12.356877844Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 13 00:27:12.356909 containerd[1440]: time="2025-05-13T00:27:12.356893549Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 13 00:27:12.356909 containerd[1440]: time="2025-05-13T00:27:12.356907073Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 May 13 00:27:12.357191 containerd[1440]: time="2025-05-13T00:27:12.357048503Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 13 00:27:12.357302 containerd[1440]: time="2025-05-13T00:27:12.357284090Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 13 00:27:12.357420 containerd[1440]: time="2025-05-13T00:27:12.357392822Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 13 00:27:12.357535 containerd[1440]: time="2025-05-13T00:27:12.357516676Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 13 00:27:12.357535 containerd[1440]: time="2025-05-13T00:27:12.357535110Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 13 00:27:12.357587 containerd[1440]: time="2025-05-13T00:27:12.357551127Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 13 00:27:12.357587 containerd[1440]: time="2025-05-13T00:27:12.357566015Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 13 00:27:12.357587 containerd[1440]: time="2025-05-13T00:27:12.357579616Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 13 00:27:12.357658 containerd[1440]: time="2025-05-13T00:27:12.357598518Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 13 00:27:12.357658 containerd[1440]: time="2025-05-13T00:27:12.357613327Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 13 00:27:12.357658 containerd[1440]: time="2025-05-13T00:27:12.357625214Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 13 00:27:12.357658 containerd[1440]: time="2025-05-13T00:27:12.357638854Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 13 00:27:12.357658 containerd[1440]: time="2025-05-13T00:27:12.357650351Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 13 00:27:12.357755 containerd[1440]: time="2025-05-13T00:27:12.357671240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 13 00:27:12.357755 containerd[1440]: time="2025-05-13T00:27:12.357684685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 13 00:27:12.357755 containerd[1440]: time="2025-05-13T00:27:12.357696961Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 13 00:27:12.357755 containerd[1440]: time="2025-05-13T00:27:12.357708419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 13 00:27:12.357755 containerd[1440]: time="2025-05-13T00:27:12.357719448Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 13 00:27:12.357755 containerd[1440]: time="2025-05-13T00:27:12.357731218Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 May 13 00:27:12.357755 containerd[1440]: time="2025-05-13T00:27:12.357750509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 13 00:27:12.357881 containerd[1440]: time="2025-05-13T00:27:12.357763487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 13 00:27:12.357881 containerd[1440]: time="2025-05-13T00:27:12.357776504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 13 00:27:12.357881 containerd[1440]: time="2025-05-13T00:27:12.357791001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 13 00:27:12.357881 containerd[1440]: time="2025-05-13T00:27:12.357804291Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 13 00:27:12.357881 containerd[1440]: time="2025-05-13T00:27:12.357815749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 13 00:27:12.357881 containerd[1440]: time="2025-05-13T00:27:12.357827830Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 13 00:27:12.357881 containerd[1440]: time="2025-05-13T00:27:12.357847823Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 13 00:27:12.357881 containerd[1440]: time="2025-05-13T00:27:12.357872726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 13 00:27:12.357881 containerd[1440]: time="2025-05-13T00:27:12.357883989Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 13 00:27:12.358033 containerd[1440]: time="2025-05-13T00:27:12.357894823Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 13 00:27:12.358599 containerd[1440]: time="2025-05-13T00:27:12.358543789Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 13 00:27:12.358761 containerd[1440]: time="2025-05-13T00:27:12.358579877Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 13 00:27:12.358761 containerd[1440]: time="2025-05-13T00:27:12.358735649Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 13 00:27:12.358761 containerd[1440]: time="2025-05-13T00:27:12.358749562Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 13 00:27:12.358761 containerd[1440]: time="2025-05-13T00:27:12.358759811Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 13 00:27:12.358861 containerd[1440]: time="2025-05-13T00:27:12.358771620Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 13 00:27:12.358861 containerd[1440]: time="2025-05-13T00:27:12.358781597Z" level=info msg="NRI interface is disabled by configuration." May 13 00:27:12.358861 containerd[1440]: time="2025-05-13T00:27:12.358791613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 13 00:27:12.359174 containerd[1440]: time="2025-05-13T00:27:12.359118784Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 13 00:27:12.359293 containerd[1440]: time="2025-05-13T00:27:12.359180750Z" level=info msg="Connect containerd service" May 13 00:27:12.359293 containerd[1440]: time="2025-05-13T00:27:12.359209005Z" level=info msg="using legacy CRI server" May 13 00:27:12.359293 containerd[1440]: time="2025-05-13T00:27:12.359216254Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 13 00:27:12.359362 containerd[1440]: time="2025-05-13T00:27:12.359293185Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 13 00:27:12.360067 containerd[1440]: time="2025-05-13T00:27:12.360033150Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 00:27:12.360265 
containerd[1440]: time="2025-05-13T00:27:12.360232259Z" level=info msg="Start subscribing containerd event" May 13 00:27:12.360301 containerd[1440]: time="2025-05-13T00:27:12.360280468Z" level=info msg="Start recovering state" May 13 00:27:12.360364 containerd[1440]: time="2025-05-13T00:27:12.360337913Z" level=info msg="Start event monitor" May 13 00:27:12.360364 containerd[1440]: time="2025-05-13T00:27:12.360354437Z" level=info msg="Start snapshots syncer" May 13 00:27:12.360364 containerd[1440]: time="2025-05-13T00:27:12.360364765Z" level=info msg="Start cni network conf syncer for default" May 13 00:27:12.360461 containerd[1440]: time="2025-05-13T00:27:12.360372014Z" level=info msg="Start streaming server" May 13 00:27:12.361103 containerd[1440]: time="2025-05-13T00:27:12.361079320Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 00:27:12.361150 containerd[1440]: time="2025-05-13T00:27:12.361125463Z" level=info msg=serving... address=/run/containerd/containerd.sock May 13 00:27:12.361268 systemd[1]: Started containerd.service - containerd container runtime. May 13 00:27:12.364769 containerd[1440]: time="2025-05-13T00:27:12.364270715Z" level=info msg="containerd successfully booted in 0.043403s" May 13 00:27:12.518301 tar[1437]: linux-arm64/README.md May 13 00:27:12.531865 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 13 00:27:12.908843 sshd_keygen[1432]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 00:27:12.927250 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 13 00:27:12.938717 systemd[1]: Starting issuegen.service - Generate /run/issue... May 13 00:27:12.944201 systemd[1]: issuegen.service: Deactivated successfully. May 13 00:27:12.944452 systemd[1]: Finished issuegen.service - Generate /run/issue. May 13 00:27:12.947290 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 13 00:27:12.959341 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 13 00:27:12.980791 systemd[1]: Started getty@tty1.service - Getty on tty1. May 13 00:27:12.983234 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 13 00:27:12.984655 systemd[1]: Reached target getty.target - Login Prompts. May 13 00:27:13.260586 systemd-networkd[1383]: eth0: Gained IPv6LL May 13 00:27:13.266174 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 13 00:27:13.268026 systemd[1]: Reached target network-online.target - Network is Online. May 13 00:27:13.284672 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 13 00:27:13.287353 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:27:13.289513 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 13 00:27:13.303517 systemd[1]: coreos-metadata.service: Deactivated successfully. May 13 00:27:13.303749 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 13 00:27:13.305401 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 13 00:27:13.311764 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 13 00:27:13.822793 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:27:13.824322 systemd[1]: Reached target multi-user.target - Multi-User System. 
May 13 00:27:13.826593 systemd[1]: Startup finished in 567ms (kernel) + 4.632s (initrd) + 3.524s (userspace) = 8.725s. May 13 00:27:13.826654 (kubelet)[1527]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 00:27:14.230998 kubelet[1527]: E0513 00:27:14.230877 1527 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:27:14.233380 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:27:14.233553 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:27:18.447112 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 13 00:27:18.448204 systemd[1]: Started sshd@0-10.0.0.91:22-10.0.0.1:53124.service - OpenSSH per-connection server daemon (10.0.0.1:53124). May 13 00:27:18.510379 sshd[1540]: Accepted publickey for core from 10.0.0.1 port 53124 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:27:18.514099 sshd[1540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:27:18.526774 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 13 00:27:18.527143 systemd-logind[1424]: New session 1 of user core. May 13 00:27:18.541877 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 13 00:27:18.550644 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 13 00:27:18.553008 systemd[1]: Starting user@500.service - User Manager for UID 500... May 13 00:27:18.559536 (systemd)[1544]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 00:27:18.633421 systemd[1544]: Queued start job for default target default.target. May 13 00:27:18.648437 systemd[1544]: Created slice app.slice - User Application Slice. May 13 00:27:18.648463 systemd[1544]: Reached target paths.target - Paths. May 13 00:27:18.648475 systemd[1544]: Reached target timers.target - Timers. May 13 00:27:18.650354 systemd[1544]: Starting dbus.socket - D-Bus User Message Bus Socket... May 13 00:27:18.660668 systemd[1544]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 13 00:27:18.660732 systemd[1544]: Reached target sockets.target - Sockets. May 13 00:27:18.660744 systemd[1544]: Reached target basic.target - Basic System. May 13 00:27:18.660791 systemd[1544]: Reached target default.target - Main User Target. May 13 00:27:18.660817 systemd[1544]: Startup finished in 96ms. May 13 00:27:18.661577 systemd[1]: Started user@500.service - User Manager for UID 500. May 13 00:27:18.662817 systemd[1]: Started session-1.scope - Session 1 of User core. May 13 00:27:18.728820 systemd[1]: Started sshd@1-10.0.0.91:22-10.0.0.1:53126.service - OpenSSH per-connection server daemon (10.0.0.1:53126). May 13 00:27:18.767036 sshd[1555]: Accepted publickey for core from 10.0.0.1 port 53126 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:27:18.768879 sshd[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:27:18.773393 systemd-logind[1424]: New session 2 of user core. May 13 00:27:18.785699 systemd[1]: Started session-2.scope - Session 2 of User core. 
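
The kubelet exit above (run.go:72, config.yaml missing) is the normal pre-bootstrap state on a fresh node: /var/lib/kubelet/config.yaml only appears once kubeadm init or kubeadm join generates it, and systemd keeps restarting the unit until then. A small Go probe of that state; the config fragment in the comment is an illustrative minimum, not the full file kubeadm writes:

    package main

    import (
    	"fmt"
    	"os"
    )

    const kubeletConfigPath = "/var/lib/kubelet/config.yaml"

    // A minimal stand-in for the missing file looks like (kubeadm writes
    // the real, much fuller version; the systemd cgroup driver matches the
    // SystemdCgroup:true runc option in the CRI config dumped above):
    //
    //   apiVersion: kubelet.config.k8s.io/v1beta1
    //   kind: KubeletConfiguration
    //   cgroupDriver: systemd

    func main() {
    	if _, err := os.Stat(kubeletConfigPath); os.IsNotExist(err) {
    		fmt.Printf("%s missing; kubelet will keep exiting until kubeadm writes it\n",
    			kubeletConfigPath)
    		return
    	}
    	fmt.Println("kubelet config present")
    }
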
May 13 00:27:18.839654 sshd[1555]: pam_unix(sshd:session): session closed for user core May 13 00:27:18.850931 systemd[1]: sshd@1-10.0.0.91:22-10.0.0.1:53126.service: Deactivated successfully. May 13 00:27:18.853677 systemd[1]: session-2.scope: Deactivated successfully. May 13 00:27:18.855261 systemd-logind[1424]: Session 2 logged out. Waiting for processes to exit. May 13 00:27:18.864855 systemd[1]: Started sshd@2-10.0.0.91:22-10.0.0.1:53136.service - OpenSSH per-connection server daemon (10.0.0.1:53136). May 13 00:27:18.866076 systemd-logind[1424]: Removed session 2. May 13 00:27:18.906894 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 53136 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:27:18.908417 sshd[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:27:18.913533 systemd-logind[1424]: New session 3 of user core. May 13 00:27:18.919579 systemd[1]: Started session-3.scope - Session 3 of User core. May 13 00:27:18.970060 sshd[1562]: pam_unix(sshd:session): session closed for user core May 13 00:27:18.978824 systemd[1]: sshd@2-10.0.0.91:22-10.0.0.1:53136.service: Deactivated successfully. May 13 00:27:18.982533 systemd[1]: session-3.scope: Deactivated successfully. May 13 00:27:18.984264 systemd-logind[1424]: Session 3 logged out. Waiting for processes to exit. May 13 00:27:18.993743 systemd[1]: Started sshd@3-10.0.0.91:22-10.0.0.1:53146.service - OpenSSH per-connection server daemon (10.0.0.1:53146). May 13 00:27:18.994722 systemd-logind[1424]: Removed session 3. May 13 00:27:19.041079 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 53146 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:27:19.042384 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:27:19.046597 systemd-logind[1424]: New session 4 of user core. May 13 00:27:19.059580 systemd[1]: Started session-4.scope - Session 4 of User core. May 13 00:27:19.115808 sshd[1569]: pam_unix(sshd:session): session closed for user core May 13 00:27:19.133092 systemd[1]: sshd@3-10.0.0.91:22-10.0.0.1:53146.service: Deactivated successfully. May 13 00:27:19.134904 systemd[1]: session-4.scope: Deactivated successfully. May 13 00:27:19.139576 systemd-logind[1424]: Session 4 logged out. Waiting for processes to exit. May 13 00:27:19.152701 systemd[1]: Started sshd@4-10.0.0.91:22-10.0.0.1:53160.service - OpenSSH per-connection server daemon (10.0.0.1:53160). May 13 00:27:19.153873 systemd-logind[1424]: Removed session 4. May 13 00:27:19.188063 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 53160 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:27:19.189455 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:27:19.193635 systemd-logind[1424]: New session 5 of user core. May 13 00:27:19.209595 systemd[1]: Started session-5.scope - Session 5 of User core. May 13 00:27:19.270878 sudo[1579]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 13 00:27:19.271663 sudo[1579]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 00:27:19.285182 sudo[1579]: pam_unix(sudo:session): session closed for user root May 13 00:27:19.288515 sshd[1576]: pam_unix(sshd:session): session closed for user core May 13 00:27:19.305663 systemd[1]: sshd@4-10.0.0.91:22-10.0.0.1:53160.service: Deactivated successfully. 
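
Each session above follows the same handshake: sshd accepts the RSA key for user core, PAM opens the session, systemd-logind registers it. A hedged Go equivalent of the client side using golang.org/x/crypto/ssh; the key path is an assumption for illustration, and the permissive host-key callback is only acceptable against a throwaway VM like this one:

    package main

    import (
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Key path is illustrative; the log shows sshd accepting the same
    	// RSA key for user "core" on 10.0.0.91:22.
    	pem, err := os.ReadFile("/home/core/.ssh/id_rsa")
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(pem)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "core",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // lab VM only, never production
    	}
    	client, err := ssh.Dial("tcp", "10.0.0.91:22", cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()
    	log.Println("session opened; sshd records it like the entries above")
    }
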
May 13 00:27:19.306951 systemd[1]: session-5.scope: Deactivated successfully. May 13 00:27:19.311116 systemd-logind[1424]: Session 5 logged out. Waiting for processes to exit. May 13 00:27:19.320660 systemd[1]: Started sshd@5-10.0.0.91:22-10.0.0.1:53172.service - OpenSSH per-connection server daemon (10.0.0.1:53172). May 13 00:27:19.321634 systemd-logind[1424]: Removed session 5. May 13 00:27:19.359531 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 53172 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:27:19.360607 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:27:19.364458 systemd-logind[1424]: New session 6 of user core. May 13 00:27:19.371548 systemd[1]: Started session-6.scope - Session 6 of User core. May 13 00:27:19.422128 sudo[1588]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 13 00:27:19.422433 sudo[1588]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 00:27:19.425423 sudo[1588]: pam_unix(sudo:session): session closed for user root May 13 00:27:19.429756 sudo[1587]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 13 00:27:19.430014 sudo[1587]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 00:27:19.445638 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 13 00:27:19.447195 auditctl[1591]: No rules May 13 00:27:19.448836 systemd[1]: audit-rules.service: Deactivated successfully. May 13 00:27:19.449043 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 13 00:27:19.450701 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 13 00:27:19.476462 augenrules[1609]: No rules May 13 00:27:19.479474 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 13 00:27:19.480925 sudo[1587]: pam_unix(sudo:session): session closed for user root May 13 00:27:19.482498 sshd[1584]: pam_unix(sshd:session): session closed for user core May 13 00:27:19.497155 systemd[1]: sshd@5-10.0.0.91:22-10.0.0.1:53172.service: Deactivated successfully. May 13 00:27:19.498874 systemd[1]: session-6.scope: Deactivated successfully. May 13 00:27:19.502055 systemd-logind[1424]: Session 6 logged out. Waiting for processes to exit. May 13 00:27:19.527787 systemd[1]: Started sshd@6-10.0.0.91:22-10.0.0.1:53186.service - OpenSSH per-connection server daemon (10.0.0.1:53186). May 13 00:27:19.529971 systemd-logind[1424]: Removed session 6. May 13 00:27:19.562419 sshd[1618]: Accepted publickey for core from 10.0.0.1 port 53186 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:27:19.563972 sshd[1618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:27:19.570084 systemd-logind[1424]: New session 7 of user core. May 13 00:27:19.578569 systemd[1]: Started session-7.scope - Session 7 of User core. May 13 00:27:19.630006 sudo[1621]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 00:27:19.633396 sudo[1621]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 00:27:19.939630 systemd[1]: Starting docker.service - Docker Application Container Engine... 
May 13 00:27:19.939724 (dockerd)[1639]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 13 00:27:20.229997 dockerd[1639]: time="2025-05-13T00:27:20.229863787Z" level=info msg="Starting up" May 13 00:27:20.413511 dockerd[1639]: time="2025-05-13T00:27:20.413472391Z" level=info msg="Loading containers: start." May 13 00:27:20.533428 kernel: Initializing XFRM netlink socket May 13 00:27:20.611814 systemd-networkd[1383]: docker0: Link UP May 13 00:27:20.636870 dockerd[1639]: time="2025-05-13T00:27:20.636821917Z" level=info msg="Loading containers: done." May 13 00:27:20.647749 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2381113787-merged.mount: Deactivated successfully. May 13 00:27:20.651783 dockerd[1639]: time="2025-05-13T00:27:20.651731350Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 13 00:27:20.651865 dockerd[1639]: time="2025-05-13T00:27:20.651846524Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 13 00:27:20.651986 dockerd[1639]: time="2025-05-13T00:27:20.651953610Z" level=info msg="Daemon has completed initialization" May 13 00:27:20.683498 dockerd[1639]: time="2025-05-13T00:27:20.683321358Z" level=info msg="API listen on /run/docker.sock" May 13 00:27:20.683726 systemd[1]: Started docker.service - Docker Application Container Engine. May 13 00:27:21.292428 containerd[1440]: time="2025-05-13T00:27:21.292362435Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 13 00:27:21.870154 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2155336732.mount: Deactivated successfully. 
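
dockerd above completes initialization and reports "API listen on /run/docker.sock". A minimal Go check against that API, assuming the github.com/docker/docker/client module; FromEnv honours DOCKER_HOST and otherwise falls back to the default local socket (/var/run/docker.sock, conventionally an alias of /run/docker.sock):

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"github.com/docker/docker/client"
    )

    func main() {
    	// Version negotiation avoids hard-coding an API version for the
    	// daemon version reported in the log (26.1.0).
    	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer cli.Close()

    	ping, err := cli.Ping(context.Background())
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("docker API version:", ping.APIVersion)
    }
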
May 13 00:27:22.838703 containerd[1440]: time="2025-05-13T00:27:22.838645933Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:27:22.839716 containerd[1440]: time="2025-05-13T00:27:22.839688756Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=26233120" May 13 00:27:22.839770 containerd[1440]: time="2025-05-13T00:27:22.839751768Z" level=info msg="ImageCreate event name:\"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:27:22.843098 containerd[1440]: time="2025-05-13T00:27:22.843063072Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:27:22.844269 containerd[1440]: time="2025-05-13T00:27:22.844226792Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"26229918\" in 1.551812749s" May 13 00:27:22.844317 containerd[1440]: time="2025-05-13T00:27:22.844265210Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\"" May 13 00:27:22.845370 containerd[1440]: time="2025-05-13T00:27:22.845334215Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 13 00:27:24.068062 containerd[1440]: time="2025-05-13T00:27:24.068010148Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:27:24.068814 containerd[1440]: time="2025-05-13T00:27:24.068779506Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=22529573" May 13 00:27:24.069431 containerd[1440]: time="2025-05-13T00:27:24.069384797Z" level=info msg="ImageCreate event name:\"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:27:24.072247 containerd[1440]: time="2025-05-13T00:27:24.072212168Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:27:24.074508 containerd[1440]: time="2025-05-13T00:27:24.073968909Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"23971132\" in 1.228599797s" May 13 00:27:24.074508 containerd[1440]: time="2025-05-13T00:27:24.074008105Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\"" May 13 00:27:24.074935 
containerd[1440]: time="2025-05-13T00:27:24.074904920Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 13 00:27:24.483761 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 13 00:27:24.494576 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:27:24.595583 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:27:24.599117 (kubelet)[1854]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 00:27:24.630503 kubelet[1854]: E0513 00:27:24.630453 1854 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:27:24.633658 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:27:24.633793 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:27:25.365446 containerd[1440]: time="2025-05-13T00:27:25.365329144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:27:25.366492 containerd[1440]: time="2025-05-13T00:27:25.366463783Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=17482175" May 13 00:27:25.367452 containerd[1440]: time="2025-05-13T00:27:25.366967451Z" level=info msg="ImageCreate event name:\"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:27:25.372161 containerd[1440]: time="2025-05-13T00:27:25.372115393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:27:25.373662 containerd[1440]: time="2025-05-13T00:27:25.373505032Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"18923752\" in 1.298500064s" May 13 00:27:25.373662 containerd[1440]: time="2025-05-13T00:27:25.373544213Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\"" May 13 00:27:25.374336 containerd[1440]: time="2025-05-13T00:27:25.374302046Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 13 00:27:26.333433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2974408490.mount: Deactivated successfully. 
May 13 00:27:26.583795 containerd[1440]: time="2025-05-13T00:27:26.583729172Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:27:26.585056 containerd[1440]: time="2025-05-13T00:27:26.584640387Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=27370353" May 13 00:27:26.585864 containerd[1440]: time="2025-05-13T00:27:26.585826389Z" level=info msg="ImageCreate event name:\"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:27:26.588510 containerd[1440]: time="2025-05-13T00:27:26.588476646Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:27:26.591357 containerd[1440]: time="2025-05-13T00:27:26.589146063Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"27369370\" in 1.214809488s" May 13 00:27:26.591357 containerd[1440]: time="2025-05-13T00:27:26.589299253Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\"" May 13 00:27:26.591810 containerd[1440]: time="2025-05-13T00:27:26.591780941Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 13 00:27:27.185433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount336432208.mount: Deactivated successfully. 
May 13 00:27:28.001275 containerd[1440]: time="2025-05-13T00:27:28.001222474Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:27:28.002590 containerd[1440]: time="2025-05-13T00:27:28.002556533Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" May 13 00:27:28.004016 containerd[1440]: time="2025-05-13T00:27:28.003973740Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:27:28.006495 containerd[1440]: time="2025-05-13T00:27:28.006468469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:27:28.007772 containerd[1440]: time="2025-05-13T00:27:28.007725963Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.415895164s" May 13 00:27:28.007815 containerd[1440]: time="2025-05-13T00:27:28.007773498Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" May 13 00:27:28.008191 containerd[1440]: time="2025-05-13T00:27:28.008164708Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 13 00:27:28.442561 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1182220315.mount: Deactivated successfully. 
May 13 00:27:28.446314 containerd[1440]: time="2025-05-13T00:27:28.446264593Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:27:28.446745 containerd[1440]: time="2025-05-13T00:27:28.446710915Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" May 13 00:27:28.447555 containerd[1440]: time="2025-05-13T00:27:28.447519853Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:27:28.450710 containerd[1440]: time="2025-05-13T00:27:28.449690009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:27:28.450710 containerd[1440]: time="2025-05-13T00:27:28.450588436Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 442.389233ms" May 13 00:27:28.450710 containerd[1440]: time="2025-05-13T00:27:28.450619102Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 13 00:27:28.451231 containerd[1440]: time="2025-05-13T00:27:28.451192358Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 13 00:27:28.952066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1686985719.mount: Deactivated successfully. May 13 00:27:30.391065 containerd[1440]: time="2025-05-13T00:27:30.390999067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:27:30.391559 containerd[1440]: time="2025-05-13T00:27:30.391524005Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471" May 13 00:27:30.392554 containerd[1440]: time="2025-05-13T00:27:30.392518050Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:27:30.395756 containerd[1440]: time="2025-05-13T00:27:30.395722071Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:27:30.397145 containerd[1440]: time="2025-05-13T00:27:30.397110000Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 1.945881346s" May 13 00:27:30.397183 containerd[1440]: time="2025-05-13T00:27:30.397147672Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" May 13 00:27:34.884133 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
May 13 00:27:34.893848 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:27:35.006592 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:27:35.009355 (kubelet)[2016]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 00:27:35.042681 kubelet[2016]: E0513 00:27:35.042628 2016 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:27:35.045292 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:27:35.045562 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:27:36.820732 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:27:36.840649 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:27:36.860917 systemd[1]: Reloading requested from client PID 2031 ('systemctl') (unit session-7.scope)... May 13 00:27:36.861072 systemd[1]: Reloading... May 13 00:27:36.924440 zram_generator::config[2071]: No configuration found. May 13 00:27:37.048832 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:27:37.103176 systemd[1]: Reloading finished in 241 ms. May 13 00:27:37.143170 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 13 00:27:37.143232 systemd[1]: kubelet.service: Failed with result 'signal'. May 13 00:27:37.143604 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:27:37.146010 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:27:37.246594 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:27:37.251051 (kubelet)[2116]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 00:27:37.288487 kubelet[2116]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:27:37.288487 kubelet[2116]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 13 00:27:37.288487 kubelet[2116]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
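
All three deprecation warnings above push the same migration: flag values belong in the KubeletConfiguration file rather than on the command line. A hedged sketch of the replacement stanza for the flag with a well-known config counterpart (--container-runtime-endpoint → containerRuntimeEndpoint in kubelet.config.k8s.io/v1beta1); per the warning text itself, --pod-infra-container-image has no config equivalent and the sandbox image will instead come from the CRI runtime:

    package main

    import "fmt"

    // Illustrative config-file fragment replacing the deprecated flag; the
    // endpoint value is the socket containerd serves earlier in this log.
    func main() {
    	fmt.Println("apiVersion: kubelet.config.k8s.io/v1beta1")
    	fmt.Println("kind: KubeletConfiguration")
    	fmt.Println("containerRuntimeEndpoint: unix:///run/containerd/containerd.sock")
    }
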
May 13 00:27:37.288901 kubelet[2116]: I0513 00:27:37.288552 2116 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 00:27:38.022380 kubelet[2116]: I0513 00:27:38.022331 2116 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 13 00:27:38.022380 kubelet[2116]: I0513 00:27:38.022365 2116 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 00:27:38.022685 kubelet[2116]: I0513 00:27:38.022645 2116 server.go:954] "Client rotation is on, will bootstrap in background" May 13 00:27:38.069769 kubelet[2116]: E0513 00:27:38.069732 2116 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.91:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError" May 13 00:27:38.074472 kubelet[2116]: I0513 00:27:38.073630 2116 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 00:27:38.081114 kubelet[2116]: E0513 00:27:38.081077 2116 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 13 00:27:38.081114 kubelet[2116]: I0513 00:27:38.081105 2116 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 13 00:27:38.083692 kubelet[2116]: I0513 00:27:38.083667 2116 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 00:27:38.083927 kubelet[2116]: I0513 00:27:38.083890 2116 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 00:27:38.084084 kubelet[2116]: I0513 00:27:38.083920 2116 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 00:27:38.084157 kubelet[2116]: I0513 00:27:38.084144 2116 topology_manager.go:138] "Creating topology manager with none policy" May 13 00:27:38.084157 kubelet[2116]: I0513 00:27:38.084152 2116 container_manager_linux.go:304] "Creating device plugin manager" May 13 00:27:38.084351 kubelet[2116]: I0513 00:27:38.084337 2116 state_mem.go:36] "Initialized new in-memory state store" May 13 00:27:38.086804 kubelet[2116]: I0513 00:27:38.086752 2116 kubelet.go:446] "Attempting to sync node with API server" May 13 00:27:38.086804 kubelet[2116]: I0513 00:27:38.086780 2116 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 00:27:38.086877 kubelet[2116]: I0513 00:27:38.086821 2116 kubelet.go:352] "Adding apiserver pod source" May 13 00:27:38.086877 kubelet[2116]: I0513 00:27:38.086831 2116 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 00:27:38.092647 kubelet[2116]: I0513 00:27:38.092522 2116 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 13 00:27:38.092647 kubelet[2116]: W0513 00:27:38.092534 2116 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.91:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused May 13 00:27:38.092647 kubelet[2116]: E0513 00:27:38.092606 2116 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.91:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError" May 13 00:27:38.092780 kubelet[2116]: W0513 00:27:38.092643 2116 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused May 13 00:27:38.092780 kubelet[2116]: E0513 00:27:38.092689 2116 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError" May 13 00:27:38.093189 kubelet[2116]: I0513 00:27:38.093172 2116 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 00:27:38.093303 kubelet[2116]: W0513 00:27:38.093292 2116 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 13 00:27:38.094276 kubelet[2116]: I0513 00:27:38.094259 2116 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 13 00:27:38.094332 kubelet[2116]: I0513 00:27:38.094297 2116 server.go:1287] "Started kubelet" May 13 00:27:38.096200 kubelet[2116]: I0513 00:27:38.094507 2116 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 13 00:27:38.096200 kubelet[2116]: I0513 00:27:38.095401 2116 server.go:490] "Adding debug handlers to kubelet server" May 13 00:27:38.101768 kubelet[2116]: I0513 00:27:38.101077 2116 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 00:27:38.101768 kubelet[2116]: I0513 00:27:38.101342 2116 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 00:27:38.101768 kubelet[2116]: E0513 00:27:38.101167 2116 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.91:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.91:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183eee9bbb31ca6c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 00:27:38.09427518 +0000 UTC m=+0.839898149,LastTimestamp:2025-05-13 00:27:38.09427518 +0000 UTC m=+0.839898149,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 13 00:27:38.101768 kubelet[2116]: I0513 00:27:38.101605 2116 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 00:27:38.101950 kubelet[2116]: I0513 00:27:38.101862 2116 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 00:27:38.103139 kubelet[2116]: E0513 00:27:38.103115 2116 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:27:38.103488 kubelet[2116]: I0513 00:27:38.103125 2116 volume_manager.go:297] 
"Starting Kubelet Volume Manager" May 13 00:27:38.103838 kubelet[2116]: E0513 00:27:38.103558 2116 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.91:6443: connect: connection refused" interval="200ms" May 13 00:27:38.103838 kubelet[2116]: W0513 00:27:38.103553 2116 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused May 13 00:27:38.103838 kubelet[2116]: E0513 00:27:38.103676 2116 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError" May 13 00:27:38.103838 kubelet[2116]: I0513 00:27:38.103599 2116 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 00:27:38.103838 kubelet[2116]: I0513 00:27:38.103723 2116 reconciler.go:26] "Reconciler: start to sync state" May 13 00:27:38.103838 kubelet[2116]: I0513 00:27:38.103753 2116 factory.go:221] Registration of the systemd container factory successfully May 13 00:27:38.103838 kubelet[2116]: I0513 00:27:38.103840 2116 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 00:27:38.104785 kubelet[2116]: E0513 00:27:38.104518 2116 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 00:27:38.105470 kubelet[2116]: I0513 00:27:38.104789 2116 factory.go:221] Registration of the containerd container factory successfully May 13 00:27:38.118389 kubelet[2116]: I0513 00:27:38.117711 2116 cpu_manager.go:221] "Starting CPU manager" policy="none" May 13 00:27:38.118389 kubelet[2116]: I0513 00:27:38.117729 2116 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 13 00:27:38.118389 kubelet[2116]: I0513 00:27:38.117764 2116 state_mem.go:36] "Initialized new in-memory state store" May 13 00:27:38.120748 kubelet[2116]: I0513 00:27:38.120698 2116 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 00:27:38.122193 kubelet[2116]: I0513 00:27:38.122161 2116 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 00:27:38.122193 kubelet[2116]: I0513 00:27:38.122196 2116 status_manager.go:227] "Starting to sync pod status with apiserver" May 13 00:27:38.122295 kubelet[2116]: I0513 00:27:38.122215 2116 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 13 00:27:38.122295 kubelet[2116]: I0513 00:27:38.122222 2116 kubelet.go:2388] "Starting kubelet main sync loop" May 13 00:27:38.122404 kubelet[2116]: E0513 00:27:38.122373 2116 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 00:27:38.122853 kubelet[2116]: W0513 00:27:38.122826 2116 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused May 13 00:27:38.122890 kubelet[2116]: E0513 00:27:38.122862 2116 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError" May 13 00:27:38.187992 kubelet[2116]: I0513 00:27:38.187961 2116 policy_none.go:49] "None policy: Start" May 13 00:27:38.187992 kubelet[2116]: I0513 00:27:38.187992 2116 memory_manager.go:186] "Starting memorymanager" policy="None" May 13 00:27:38.188141 kubelet[2116]: I0513 00:27:38.188007 2116 state_mem.go:35] "Initializing new in-memory state store" May 13 00:27:38.197351 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 13 00:27:38.204226 kubelet[2116]: E0513 00:27:38.204187 2116 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:27:38.213822 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 13 00:27:38.217056 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 13 00:27:38.222962 kubelet[2116]: E0513 00:27:38.222929 2116 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 13 00:27:38.229335 kubelet[2116]: I0513 00:27:38.229315 2116 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 00:27:38.229550 kubelet[2116]: I0513 00:27:38.229535 2116 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 00:27:38.229593 kubelet[2116]: I0513 00:27:38.229553 2116 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 00:27:38.229810 kubelet[2116]: I0513 00:27:38.229736 2116 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 00:27:38.230901 kubelet[2116]: E0513 00:27:38.230877 2116 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 13 00:27:38.230989 kubelet[2116]: E0513 00:27:38.230925 2116 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 13 00:27:38.305142 kubelet[2116]: E0513 00:27:38.305028 2116 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.91:6443: connect: connection refused" interval="400ms" May 13 00:27:38.331144 kubelet[2116]: I0513 00:27:38.331120 2116 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 00:27:38.331553 kubelet[2116]: E0513 00:27:38.331527 2116 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.91:6443/api/v1/nodes\": dial tcp 10.0.0.91:6443: connect: connection refused" node="localhost" May 13 00:27:38.433243 systemd[1]: Created slice kubepods-burstable-pod3a4dc029290daab0c5d6d596e16ffcf0.slice - libcontainer container kubepods-burstable-pod3a4dc029290daab0c5d6d596e16ffcf0.slice. May 13 00:27:38.454456 kubelet[2116]: E0513 00:27:38.454106 2116 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 00:27:38.457489 systemd[1]: Created slice kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice - libcontainer container kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice. May 13 00:27:38.467747 kubelet[2116]: E0513 00:27:38.467716 2116 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 00:27:38.470708 systemd[1]: Created slice kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice - libcontainer container kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice. 
May 13 00:27:38.472364 kubelet[2116]: E0513 00:27:38.472340 2116 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 00:27:38.533646 kubelet[2116]: I0513 00:27:38.533617 2116 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 00:27:38.533972 kubelet[2116]: E0513 00:27:38.533946 2116 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.91:6443/api/v1/nodes\": dial tcp 10.0.0.91:6443: connect: connection refused" node="localhost" May 13 00:27:38.607150 kubelet[2116]: I0513 00:27:38.607044 2116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:27:38.607150 kubelet[2116]: I0513 00:27:38.607082 2116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3a4dc029290daab0c5d6d596e16ffcf0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3a4dc029290daab0c5d6d596e16ffcf0\") " pod="kube-system/kube-apiserver-localhost" May 13 00:27:38.607150 kubelet[2116]: I0513 00:27:38.607115 2116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3a4dc029290daab0c5d6d596e16ffcf0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3a4dc029290daab0c5d6d596e16ffcf0\") " pod="kube-system/kube-apiserver-localhost" May 13 00:27:38.607150 kubelet[2116]: I0513 00:27:38.607132 2116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:27:38.607150 kubelet[2116]: I0513 00:27:38.607153 2116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:27:38.607341 kubelet[2116]: I0513 00:27:38.607169 2116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:27:38.607341 kubelet[2116]: I0513 00:27:38.607201 2116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 13 00:27:38.607341 kubelet[2116]: I0513 00:27:38.607216 2116 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3a4dc029290daab0c5d6d596e16ffcf0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3a4dc029290daab0c5d6d596e16ffcf0\") " pod="kube-system/kube-apiserver-localhost" May 13 00:27:38.607341 kubelet[2116]: I0513 00:27:38.607231 2116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:27:38.705586 kubelet[2116]: E0513 00:27:38.705547 2116 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.91:6443: connect: connection refused" interval="800ms" May 13 00:27:38.754960 kubelet[2116]: E0513 00:27:38.754916 2116 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:27:38.755614 containerd[1440]: time="2025-05-13T00:27:38.755575816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3a4dc029290daab0c5d6d596e16ffcf0,Namespace:kube-system,Attempt:0,}" May 13 00:27:38.768919 kubelet[2116]: E0513 00:27:38.768885 2116 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:27:38.769378 containerd[1440]: time="2025-05-13T00:27:38.769327273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,}" May 13 00:27:38.773887 kubelet[2116]: E0513 00:27:38.773626 2116 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:27:38.774010 containerd[1440]: time="2025-05-13T00:27:38.773976279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,}" May 13 00:27:38.933606 kubelet[2116]: W0513 00:27:38.933485 2116 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused May 13 00:27:38.933898 kubelet[2116]: E0513 00:27:38.933765 2116 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError" May 13 00:27:38.935929 kubelet[2116]: I0513 00:27:38.935892 2116 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 00:27:38.936288 kubelet[2116]: E0513 00:27:38.936233 2116 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.91:6443/api/v1/nodes\": dial tcp 10.0.0.91:6443: connect: connection refused" node="localhost" May 13 
00:27:39.019275 kubelet[2116]: W0513 00:27:39.019209 2116 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.91:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused May 13 00:27:39.019275 kubelet[2116]: E0513 00:27:39.019276 2116 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.91:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError" May 13 00:27:39.325750 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2447909501.mount: Deactivated successfully. May 13 00:27:39.333524 containerd[1440]: time="2025-05-13T00:27:39.333460434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:27:39.334433 containerd[1440]: time="2025-05-13T00:27:39.334387586Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:27:39.334996 containerd[1440]: time="2025-05-13T00:27:39.334966341Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 13 00:27:39.335609 containerd[1440]: time="2025-05-13T00:27:39.335585948Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" May 13 00:27:39.336327 containerd[1440]: time="2025-05-13T00:27:39.336284740Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:27:39.337192 containerd[1440]: time="2025-05-13T00:27:39.337069472Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 13 00:27:39.337926 containerd[1440]: time="2025-05-13T00:27:39.337645749Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:27:39.343218 containerd[1440]: time="2025-05-13T00:27:39.343166211Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:27:39.344455 containerd[1440]: time="2025-05-13T00:27:39.344421734Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 575.00545ms" May 13 00:27:39.345161 containerd[1440]: time="2025-05-13T00:27:39.345109413Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 571.080055ms" May 13 00:27:39.347794 containerd[1440]: time="2025-05-13T00:27:39.347562659Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 591.909623ms" May 13 00:27:39.483652 kubelet[2116]: W0513 00:27:39.483538 2116 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused May 13 00:27:39.483652 kubelet[2116]: E0513 00:27:39.483614 2116 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError" May 13 00:27:39.492302 containerd[1440]: time="2025-05-13T00:27:39.492110326Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:27:39.492448 containerd[1440]: time="2025-05-13T00:27:39.492365387Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:27:39.492448 containerd[1440]: time="2025-05-13T00:27:39.492385973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:27:39.492858 containerd[1440]: time="2025-05-13T00:27:39.492753036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:27:39.493503 containerd[1440]: time="2025-05-13T00:27:39.493342345Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:27:39.493503 containerd[1440]: time="2025-05-13T00:27:39.493415653Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:27:39.493503 containerd[1440]: time="2025-05-13T00:27:39.493431203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:27:39.493940 containerd[1440]: time="2025-05-13T00:27:39.493877131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:27:39.494087 containerd[1440]: time="2025-05-13T00:27:39.494013316Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:27:39.494087 containerd[1440]: time="2025-05-13T00:27:39.494074153Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:27:39.494523 containerd[1440]: time="2025-05-13T00:27:39.494089702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:27:39.494523 containerd[1440]: time="2025-05-13T00:27:39.494170726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:27:39.506529 kubelet[2116]: E0513 00:27:39.506474 2116 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.91:6443: connect: connection refused" interval="1.6s" May 13 00:27:39.518699 systemd[1]: Started cri-containerd-260959304103902ceafd672413ecec223bb9cd58056db0e964244d5ea26da917.scope - libcontainer container 260959304103902ceafd672413ecec223bb9cd58056db0e964244d5ea26da917. May 13 00:27:39.520220 systemd[1]: Started cri-containerd-9fe07492169ab73220537c12155778e05c0b6c3891d10677b952b5decb30cb7e.scope - libcontainer container 9fe07492169ab73220537c12155778e05c0b6c3891d10677b952b5decb30cb7e. May 13 00:27:39.524616 systemd[1]: Started cri-containerd-87e68f5cfe4500ceffe52e18ee5210ce591932b95c3367aeb52c8b4fea0464a0.scope - libcontainer container 87e68f5cfe4500ceffe52e18ee5210ce591932b95c3367aeb52c8b4fea0464a0. May 13 00:27:39.551891 kubelet[2116]: W0513 00:27:39.551853 2116 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused May 13 00:27:39.552021 kubelet[2116]: E0513 00:27:39.551895 2116 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError" May 13 00:27:39.557068 containerd[1440]: time="2025-05-13T00:27:39.557030158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,} returns sandbox id \"9fe07492169ab73220537c12155778e05c0b6c3891d10677b952b5decb30cb7e\"" May 13 00:27:39.558254 kubelet[2116]: E0513 00:27:39.558218 2116 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:27:39.560220 containerd[1440]: time="2025-05-13T00:27:39.560183075Z" level=info msg="CreateContainer within sandbox \"9fe07492169ab73220537c12155778e05c0b6c3891d10677b952b5decb30cb7e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 00:27:39.562016 containerd[1440]: time="2025-05-13T00:27:39.561977261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"87e68f5cfe4500ceffe52e18ee5210ce591932b95c3367aeb52c8b4fea0464a0\"" May 13 00:27:39.563052 kubelet[2116]: E0513 00:27:39.563029 2116 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:27:39.564977 containerd[1440]: time="2025-05-13T00:27:39.564866242Z" level=info msg="CreateContainer within sandbox \"87e68f5cfe4500ceffe52e18ee5210ce591932b95c3367aeb52c8b4fea0464a0\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 00:27:39.566430 containerd[1440]: time="2025-05-13T00:27:39.566376387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3a4dc029290daab0c5d6d596e16ffcf0,Namespace:kube-system,Attempt:0,} returns sandbox id \"260959304103902ceafd672413ecec223bb9cd58056db0e964244d5ea26da917\"" May 13 00:27:39.566967 kubelet[2116]: E0513 00:27:39.566941 2116 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:27:39.569495 containerd[1440]: time="2025-05-13T00:27:39.568738896Z" level=info msg="CreateContainer within sandbox \"260959304103902ceafd672413ecec223bb9cd58056db0e964244d5ea26da917\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 00:27:39.585760 containerd[1440]: time="2025-05-13T00:27:39.583574728Z" level=info msg="CreateContainer within sandbox \"87e68f5cfe4500ceffe52e18ee5210ce591932b95c3367aeb52c8b4fea0464a0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c5be8984b7086aeeb7e879a080d426b0f18122dcce71aa137fa886b239bbc910\"" May 13 00:27:39.586161 containerd[1440]: time="2025-05-13T00:27:39.586131621Z" level=info msg="StartContainer for \"c5be8984b7086aeeb7e879a080d426b0f18122dcce71aa137fa886b239bbc910\"" May 13 00:27:39.586983 containerd[1440]: time="2025-05-13T00:27:39.586950729Z" level=info msg="CreateContainer within sandbox \"9fe07492169ab73220537c12155778e05c0b6c3891d10677b952b5decb30cb7e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f56864fa1062ec76b1ff3cf3920885d09d7011407bc41cfb8f9a22233a69f999\"" May 13 00:27:39.587347 containerd[1440]: time="2025-05-13T00:27:39.587322030Z" level=info msg="StartContainer for \"f56864fa1062ec76b1ff3cf3920885d09d7011407bc41cfb8f9a22233a69f999\"" May 13 00:27:39.587876 containerd[1440]: time="2025-05-13T00:27:39.587847462Z" level=info msg="CreateContainer within sandbox \"260959304103902ceafd672413ecec223bb9cd58056db0e964244d5ea26da917\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"73397c4be82affce0b0fb0bf6cdde685f91bebd4128cc9895f51327a657a757a\"" May 13 00:27:39.588311 containerd[1440]: time="2025-05-13T00:27:39.588282878Z" level=info msg="StartContainer for \"73397c4be82affce0b0fb0bf6cdde685f91bebd4128cc9895f51327a657a757a\"" May 13 00:27:39.615587 systemd[1]: Started cri-containerd-73397c4be82affce0b0fb0bf6cdde685f91bebd4128cc9895f51327a657a757a.scope - libcontainer container 73397c4be82affce0b0fb0bf6cdde685f91bebd4128cc9895f51327a657a757a. May 13 00:27:39.620014 systemd[1]: Started cri-containerd-c5be8984b7086aeeb7e879a080d426b0f18122dcce71aa137fa886b239bbc910.scope - libcontainer container c5be8984b7086aeeb7e879a080d426b0f18122dcce71aa137fa886b239bbc910. May 13 00:27:39.620930 systemd[1]: Started cri-containerd-f56864fa1062ec76b1ff3cf3920885d09d7011407bc41cfb8f9a22233a69f999.scope - libcontainer container f56864fa1062ec76b1ff3cf3920885d09d7011407bc41cfb8f9a22233a69f999. 
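
The sandbox and container lifecycle traced above follows the CRI gRPC sequence RunPodSandbox → CreateContainer → StartContainer. Below is a minimal Go sketch of that sequence against containerd's CRI endpoint; it is illustrative only — the socket path and the kube-scheduler image tag are assumptions (containerd's default socket, and an image matching the v1.32.0 kubelet seen later in this log), and a real caller such as the kubelet supplies a much fuller PodSandboxConfig/ContainerConfig than shown here.

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumption: containerd's default CRI socket; not stated in this log.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// Step 1: RunPodSandbox — the "RunPodSandbox for &PodSandboxMetadata{...}" entries.
	// Name/Uid/Namespace mirror the kube-scheduler entry in the log.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-scheduler-localhost",
			Uid:       "2980a8ab51edc665be10a02e33130e15",
			Namespace: "kube-system",
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// Step 2: CreateContainer within the returned sandbox id.
	// Image tag is an assumption, not taken from this log.
	c, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-scheduler"},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-scheduler:v1.32.0"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}

	// Step 3: StartContainer — the "StartContainer for ... returns successfully" entries.
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: c.ContainerId}); err != nil {
		log.Fatal(err)
	}
}
```
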
May 13 00:27:39.657425 containerd[1440]: time="2025-05-13T00:27:39.657283499Z" level=info msg="StartContainer for \"73397c4be82affce0b0fb0bf6cdde685f91bebd4128cc9895f51327a657a757a\" returns successfully" May 13 00:27:39.660749 containerd[1440]: time="2025-05-13T00:27:39.660595544Z" level=info msg="StartContainer for \"c5be8984b7086aeeb7e879a080d426b0f18122dcce71aa137fa886b239bbc910\" returns successfully" May 13 00:27:39.700678 containerd[1440]: time="2025-05-13T00:27:39.700616377Z" level=info msg="StartContainer for \"f56864fa1062ec76b1ff3cf3920885d09d7011407bc41cfb8f9a22233a69f999\" returns successfully" May 13 00:27:39.738148 kubelet[2116]: I0513 00:27:39.737899 2116 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 00:27:39.738403 kubelet[2116]: E0513 00:27:39.738314 2116 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.91:6443/api/v1/nodes\": dial tcp 10.0.0.91:6443: connect: connection refused" node="localhost" May 13 00:27:40.129057 kubelet[2116]: E0513 00:27:40.129022 2116 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 00:27:40.129202 kubelet[2116]: E0513 00:27:40.129154 2116 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:27:40.130283 kubelet[2116]: E0513 00:27:40.130263 2116 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 00:27:40.130383 kubelet[2116]: E0513 00:27:40.130368 2116 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:27:40.132500 kubelet[2116]: E0513 00:27:40.132479 2116 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 00:27:40.132601 kubelet[2116]: E0513 00:27:40.132584 2116 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:27:41.134127 kubelet[2116]: E0513 00:27:41.134081 2116 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 00:27:41.134579 kubelet[2116]: E0513 00:27:41.134194 2116 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 00:27:41.134579 kubelet[2116]: E0513 00:27:41.134206 2116 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:27:41.134579 kubelet[2116]: E0513 00:27:41.134295 2116 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:27:41.340524 kubelet[2116]: I0513 00:27:41.340490 2116 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 00:27:41.516693 kubelet[2116]: E0513 00:27:41.516583 2116 nodelease.go:49] "Failed to get node when trying to set owner ref to the node 
lease" err="nodes \"localhost\" not found" node="localhost" May 13 00:27:41.669631 kubelet[2116]: I0513 00:27:41.669587 2116 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 13 00:27:41.704010 kubelet[2116]: I0513 00:27:41.703952 2116 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 13 00:27:41.709500 kubelet[2116]: E0513 00:27:41.709467 2116 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 13 00:27:41.709500 kubelet[2116]: I0513 00:27:41.709493 2116 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 13 00:27:41.711267 kubelet[2116]: E0513 00:27:41.711244 2116 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 13 00:27:41.711267 kubelet[2116]: I0513 00:27:41.711269 2116 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 13 00:27:41.712854 kubelet[2116]: E0513 00:27:41.712830 2116 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 13 00:27:42.091782 kubelet[2116]: I0513 00:27:42.091687 2116 apiserver.go:52] "Watching apiserver" May 13 00:27:42.104210 kubelet[2116]: I0513 00:27:42.104157 2116 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 00:27:42.134847 kubelet[2116]: I0513 00:27:42.134624 2116 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 13 00:27:42.136809 kubelet[2116]: E0513 00:27:42.136593 2116 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 13 00:27:42.136809 kubelet[2116]: E0513 00:27:42.136746 2116 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:27:42.941962 kubelet[2116]: I0513 00:27:42.941923 2116 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 13 00:27:42.946551 kubelet[2116]: E0513 00:27:42.946525 2116 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:27:43.137446 kubelet[2116]: E0513 00:27:43.135994 2116 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:27:43.714776 systemd[1]: Reloading requested from client PID 2398 ('systemctl') (unit session-7.scope)... May 13 00:27:43.714791 systemd[1]: Reloading... May 13 00:27:43.773490 zram_generator::config[2437]: No configuration found. 
May 13 00:27:43.854924 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:27:43.920956 systemd[1]: Reloading finished in 205 ms. May 13 00:27:43.951637 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:27:43.970754 systemd[1]: kubelet.service: Deactivated successfully. May 13 00:27:43.970987 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:27:43.971045 systemd[1]: kubelet.service: Consumed 1.271s CPU time, 123.8M memory peak, 0B memory swap peak. May 13 00:27:43.982663 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:27:44.077327 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:27:44.081170 (kubelet)[2479]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 00:27:44.119991 kubelet[2479]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:27:44.119991 kubelet[2479]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 13 00:27:44.119991 kubelet[2479]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:27:44.119991 kubelet[2479]: I0513 00:27:44.119684 2479 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 00:27:44.127556 kubelet[2479]: I0513 00:27:44.127512 2479 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 13 00:27:44.127556 kubelet[2479]: I0513 00:27:44.127543 2479 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 00:27:44.127791 kubelet[2479]: I0513 00:27:44.127766 2479 server.go:954] "Client rotation is on, will bootstrap in background" May 13 00:27:44.129038 kubelet[2479]: I0513 00:27:44.129007 2479 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 13 00:27:44.131412 kubelet[2479]: I0513 00:27:44.131378 2479 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 00:27:44.134175 kubelet[2479]: E0513 00:27:44.134111 2479 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 13 00:27:44.134175 kubelet[2479]: I0513 00:27:44.134174 2479 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 13 00:27:44.141566 kubelet[2479]: I0513 00:27:44.137095 2479 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 00:27:44.141566 kubelet[2479]: I0513 00:27:44.137351 2479 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 00:27:44.141566 kubelet[2479]: I0513 00:27:44.137382 2479 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 00:27:44.141566 kubelet[2479]: I0513 00:27:44.137696 2479 topology_manager.go:138] "Creating topology manager with none policy" May 13 00:27:44.141781 kubelet[2479]: I0513 00:27:44.137706 2479 container_manager_linux.go:304] "Creating device plugin manager" May 13 00:27:44.141781 kubelet[2479]: I0513 00:27:44.137765 2479 state_mem.go:36] "Initialized new in-memory state store" May 13 00:27:44.141781 kubelet[2479]: I0513 00:27:44.137902 2479 kubelet.go:446] "Attempting to sync node with API server" May 13 00:27:44.141781 kubelet[2479]: I0513 00:27:44.137916 2479 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 00:27:44.141781 kubelet[2479]: I0513 00:27:44.137936 2479 kubelet.go:352] "Adding apiserver pod source" May 13 00:27:44.141781 kubelet[2479]: I0513 00:27:44.137950 2479 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 00:27:44.141781 kubelet[2479]: I0513 00:27:44.139235 2479 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 13 00:27:44.141781 kubelet[2479]: I0513 00:27:44.140156 2479 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 00:27:44.141781 kubelet[2479]: I0513 00:27:44.140925 2479 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 13 00:27:44.141781 kubelet[2479]: I0513 00:27:44.140959 2479 server.go:1287] "Started kubelet" May 13 00:27:44.141781 kubelet[2479]: I0513 00:27:44.141344 2479 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 13 00:27:44.141781 kubelet[2479]: I0513 00:27:44.141683 2479 ratelimit.go:55] "Setting 
rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 00:27:44.142050 kubelet[2479]: I0513 00:27:44.141930 2479 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 00:27:44.145019 kubelet[2479]: I0513 00:27:44.143954 2479 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 00:27:44.150437 kubelet[2479]: I0513 00:27:44.147698 2479 server.go:490] "Adding debug handlers to kubelet server" May 13 00:27:44.151499 kubelet[2479]: I0513 00:27:44.151352 2479 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 00:27:44.153047 kubelet[2479]: I0513 00:27:44.153025 2479 volume_manager.go:297] "Starting Kubelet Volume Manager" May 13 00:27:44.153249 kubelet[2479]: E0513 00:27:44.153227 2479 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:27:44.154854 kubelet[2479]: I0513 00:27:44.154824 2479 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 00:27:44.154964 kubelet[2479]: I0513 00:27:44.154951 2479 reconciler.go:26] "Reconciler: start to sync state" May 13 00:27:44.156685 kubelet[2479]: I0513 00:27:44.156657 2479 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 00:27:44.161092 kubelet[2479]: E0513 00:27:44.161038 2479 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 00:27:44.161842 kubelet[2479]: I0513 00:27:44.161760 2479 factory.go:221] Registration of the systemd container factory successfully May 13 00:27:44.161966 kubelet[2479]: I0513 00:27:44.161868 2479 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 00:27:44.161966 kubelet[2479]: I0513 00:27:44.161886 2479 status_manager.go:227] "Starting to sync pod status with apiserver" May 13 00:27:44.161966 kubelet[2479]: I0513 00:27:44.161901 2479 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
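
The entry above shows the kubelet serving the podresources gRPC API on unix:/var/lib/kubelet/pod-resources/kubelet.sock (rate-limited to 100 qps). A minimal sketch of a consumer is below, assuming the published k8s.io/kubelet podresources v1 client; note the grpc target needs the unix:/// triple-slash form of the socket path from the log.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	podresourcesapi "k8s.io/kubelet/pkg/apis/podresources/v1"
)

func main() {
	// Socket path taken from the "Starting to serve the podresources API" entry.
	conn, err := grpc.Dial("unix:///var/lib/kubelet/pod-resources/kubelet.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := podresourcesapi.NewPodResourcesListerClient(conn)
	resp, err := client.List(context.Background(), &podresourcesapi.ListPodResourcesRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, pod := range resp.GetPodResources() {
		fmt.Printf("%s/%s: %d containers\n", pod.GetNamespace(), pod.GetName(), len(pod.GetContainers()))
	}
}
```
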
May 13 00:27:44.161966 kubelet[2479]: I0513 00:27:44.161909 2479 kubelet.go:2388] "Starting kubelet main sync loop" May 13 00:27:44.161966 kubelet[2479]: E0513 00:27:44.161948 2479 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 00:27:44.163259 kubelet[2479]: I0513 00:27:44.163080 2479 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 00:27:44.165931 kubelet[2479]: I0513 00:27:44.165782 2479 factory.go:221] Registration of the containerd container factory successfully May 13 00:27:44.202196 kubelet[2479]: I0513 00:27:44.202164 2479 cpu_manager.go:221] "Starting CPU manager" policy="none" May 13 00:27:44.202196 kubelet[2479]: I0513 00:27:44.202191 2479 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 13 00:27:44.202308 kubelet[2479]: I0513 00:27:44.202211 2479 state_mem.go:36] "Initialized new in-memory state store" May 13 00:27:44.202388 kubelet[2479]: I0513 00:27:44.202369 2479 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 13 00:27:44.202438 kubelet[2479]: I0513 00:27:44.202387 2479 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 13 00:27:44.202438 kubelet[2479]: I0513 00:27:44.202418 2479 policy_none.go:49] "None policy: Start" May 13 00:27:44.202438 kubelet[2479]: I0513 00:27:44.202429 2479 memory_manager.go:186] "Starting memorymanager" policy="None" May 13 00:27:44.202438 kubelet[2479]: I0513 00:27:44.202439 2479 state_mem.go:35] "Initializing new in-memory state store" May 13 00:27:44.202547 kubelet[2479]: I0513 00:27:44.202533 2479 state_mem.go:75] "Updated machine memory state" May 13 00:27:44.206062 kubelet[2479]: I0513 00:27:44.206028 2479 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 00:27:44.206211 kubelet[2479]: I0513 00:27:44.206194 2479 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 00:27:44.206244 kubelet[2479]: I0513 00:27:44.206212 2479 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 00:27:44.206993 kubelet[2479]: I0513 00:27:44.206958 2479 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 00:27:44.208439 kubelet[2479]: E0513 00:27:44.207915 2479 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 13 00:27:44.263561 kubelet[2479]: I0513 00:27:44.263434 2479 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 13 00:27:44.263561 kubelet[2479]: I0513 00:27:44.263449 2479 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 13 00:27:44.263561 kubelet[2479]: I0513 00:27:44.263538 2479 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 13 00:27:44.272971 kubelet[2479]: E0513 00:27:44.272936 2479 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 13 00:27:44.310286 kubelet[2479]: I0513 00:27:44.310256 2479 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 00:27:44.317216 kubelet[2479]: I0513 00:27:44.316590 2479 kubelet_node_status.go:125] "Node was previously registered" node="localhost" May 13 00:27:44.317216 kubelet[2479]: I0513 00:27:44.316663 2479 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 13 00:27:44.356189 kubelet[2479]: I0513 00:27:44.355973 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3a4dc029290daab0c5d6d596e16ffcf0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3a4dc029290daab0c5d6d596e16ffcf0\") " pod="kube-system/kube-apiserver-localhost" May 13 00:27:44.356189 kubelet[2479]: I0513 00:27:44.356035 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3a4dc029290daab0c5d6d596e16ffcf0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3a4dc029290daab0c5d6d596e16ffcf0\") " pod="kube-system/kube-apiserver-localhost" May 13 00:27:44.356189 kubelet[2479]: I0513 00:27:44.356057 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:27:44.356189 kubelet[2479]: I0513 00:27:44.356074 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:27:44.356189 kubelet[2479]: I0513 00:27:44.356089 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:27:44.356436 kubelet[2479]: I0513 00:27:44.356105 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3a4dc029290daab0c5d6d596e16ffcf0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3a4dc029290daab0c5d6d596e16ffcf0\") " 
pod="kube-system/kube-apiserver-localhost" May 13 00:27:44.356436 kubelet[2479]: I0513 00:27:44.356120 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:27:44.356436 kubelet[2479]: I0513 00:27:44.356135 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:27:44.356436 kubelet[2479]: I0513 00:27:44.356149 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 13 00:27:44.573435 kubelet[2479]: E0513 00:27:44.573242 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:27:44.573435 kubelet[2479]: E0513 00:27:44.573315 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:27:44.574703 kubelet[2479]: E0513 00:27:44.573447 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:27:45.140400 kubelet[2479]: I0513 00:27:45.140346 2479 apiserver.go:52] "Watching apiserver" May 13 00:27:45.155831 kubelet[2479]: I0513 00:27:45.155787 2479 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 00:27:45.180437 kubelet[2479]: E0513 00:27:45.180391 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:27:45.180547 kubelet[2479]: I0513 00:27:45.180398 2479 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 13 00:27:45.180631 kubelet[2479]: E0513 00:27:45.180607 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:27:45.185008 kubelet[2479]: E0513 00:27:45.184980 2479 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 13 00:27:45.186448 kubelet[2479]: E0513 00:27:45.185144 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:27:45.200226 kubelet[2479]: I0513 00:27:45.200139 2479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.200123503 podStartE2EDuration="3.200123503s" 
podCreationTimestamp="2025-05-13 00:27:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:27:45.200120303 +0000 UTC m=+1.116046835" watchObservedRunningTime="2025-05-13 00:27:45.200123503 +0000 UTC m=+1.116050035" May 13 00:27:45.207580 kubelet[2479]: I0513 00:27:45.207520 2479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.207507446 podStartE2EDuration="1.207507446s" podCreationTimestamp="2025-05-13 00:27:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:27:45.207426684 +0000 UTC m=+1.123353256" watchObservedRunningTime="2025-05-13 00:27:45.207507446 +0000 UTC m=+1.123433978" May 13 00:27:46.182063 kubelet[2479]: E0513 00:27:46.181753 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:27:46.182063 kubelet[2479]: E0513 00:27:46.181911 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:27:47.183474 kubelet[2479]: E0513 00:27:47.183440 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:27:47.819786 kubelet[2479]: E0513 00:27:47.819756 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:27:48.943394 sudo[1621]: pam_unix(sudo:session): session closed for user root May 13 00:27:48.951729 sshd[1618]: pam_unix(sshd:session): session closed for user core May 13 00:27:48.957156 systemd[1]: sshd@6-10.0.0.91:22-10.0.0.1:53186.service: Deactivated successfully. May 13 00:27:48.959269 systemd[1]: session-7.scope: Deactivated successfully. May 13 00:27:48.959544 systemd[1]: session-7.scope: Consumed 8.344s CPU time, 156.9M memory peak, 0B memory swap peak. May 13 00:27:48.960075 systemd-logind[1424]: Session 7 logged out. Waiting for processes to exit. May 13 00:27:48.960967 systemd-logind[1424]: Removed session 7. May 13 00:27:49.049055 kubelet[2479]: I0513 00:27:49.049020 2479 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 00:27:49.049485 containerd[1440]: time="2025-05-13T00:27:49.049397354Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
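
The recurring dns.go:153 "Nameserver limits exceeded" errors throughout this log mean the node's /etc/resolv.conf lists more nameservers than the kubelet will apply: its cap matches the classic glibc resolver limit of three, so only the first three entries (here 1.1.1.1, 1.0.0.1 and 8.8.8.8) are kept and the rest are dropped. A minimal sketch of that check:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc resolver limit; kubelet enforces the same cap

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Collect every "nameserver <addr>" line.
	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("Nameserver limits exceeded: %d found, applying only %v\n",
			len(servers), servers[:maxNameservers])
	}
}
```
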
May 13 00:27:49.049867 kubelet[2479]: I0513 00:27:49.049597 2479 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 00:27:49.734635 kubelet[2479]: I0513 00:27:49.733937 2479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=5.733918872 podStartE2EDuration="5.733918872s" podCreationTimestamp="2025-05-13 00:27:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:27:45.21525204 +0000 UTC m=+1.131178572" watchObservedRunningTime="2025-05-13 00:27:49.733918872 +0000 UTC m=+5.649845404" May 13 00:27:49.743879 systemd[1]: Created slice kubepods-besteffort-pod0062b820_1ac8_44dc_ad8c_7709c29bbab2.slice - libcontainer container kubepods-besteffort-pod0062b820_1ac8_44dc_ad8c_7709c29bbab2.slice. May 13 00:27:49.797018 kubelet[2479]: I0513 00:27:49.796969 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0062b820-1ac8-44dc-ad8c-7709c29bbab2-kube-proxy\") pod \"kube-proxy-qf79c\" (UID: \"0062b820-1ac8-44dc-ad8c-7709c29bbab2\") " pod="kube-system/kube-proxy-qf79c" May 13 00:27:49.797171 kubelet[2479]: I0513 00:27:49.797061 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0062b820-1ac8-44dc-ad8c-7709c29bbab2-lib-modules\") pod \"kube-proxy-qf79c\" (UID: \"0062b820-1ac8-44dc-ad8c-7709c29bbab2\") " pod="kube-system/kube-proxy-qf79c" May 13 00:27:49.797171 kubelet[2479]: I0513 00:27:49.797083 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs7ps\" (UniqueName: \"kubernetes.io/projected/0062b820-1ac8-44dc-ad8c-7709c29bbab2-kube-api-access-rs7ps\") pod \"kube-proxy-qf79c\" (UID: \"0062b820-1ac8-44dc-ad8c-7709c29bbab2\") " pod="kube-system/kube-proxy-qf79c" May 13 00:27:49.797171 kubelet[2479]: I0513 00:27:49.797138 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0062b820-1ac8-44dc-ad8c-7709c29bbab2-xtables-lock\") pod \"kube-proxy-qf79c\" (UID: \"0062b820-1ac8-44dc-ad8c-7709c29bbab2\") " pod="kube-system/kube-proxy-qf79c" May 13 00:27:50.024233 kubelet[2479]: E0513 00:27:50.024113 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:27:50.052810 kubelet[2479]: E0513 00:27:50.052761 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:27:50.053665 containerd[1440]: time="2025-05-13T00:27:50.053610942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qf79c,Uid:0062b820-1ac8-44dc-ad8c-7709c29bbab2,Namespace:kube-system,Attempt:0,}" May 13 00:27:50.096659 containerd[1440]: time="2025-05-13T00:27:50.096489079Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:27:50.096856 containerd[1440]: time="2025-05-13T00:27:50.096823647Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:27:50.096856 containerd[1440]: time="2025-05-13T00:27:50.096841407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:27:50.097078 containerd[1440]: time="2025-05-13T00:27:50.096925569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:27:50.122606 systemd[1]: Started cri-containerd-44e1370eed4403d27cedb26afb3a4ecdff4fe23cbe7e992985dbb4ba02311406.scope - libcontainer container 44e1370eed4403d27cedb26afb3a4ecdff4fe23cbe7e992985dbb4ba02311406. May 13 00:27:50.160975 systemd[1]: Created slice kubepods-besteffort-podb6344999_0401_40a5_9dd0_c356496dd9b9.slice - libcontainer container kubepods-besteffort-podb6344999_0401_40a5_9dd0_c356496dd9b9.slice. May 13 00:27:50.164560 containerd[1440]: time="2025-05-13T00:27:50.164519348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qf79c,Uid:0062b820-1ac8-44dc-ad8c-7709c29bbab2,Namespace:kube-system,Attempt:0,} returns sandbox id \"44e1370eed4403d27cedb26afb3a4ecdff4fe23cbe7e992985dbb4ba02311406\"" May 13 00:27:50.165914 kubelet[2479]: E0513 00:27:50.165893 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:27:50.167901 containerd[1440]: time="2025-05-13T00:27:50.167716741Z" level=info msg="CreateContainer within sandbox \"44e1370eed4403d27cedb26afb3a4ecdff4fe23cbe7e992985dbb4ba02311406\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 00:27:50.184493 containerd[1440]: time="2025-05-13T00:27:50.184450122Z" level=info msg="CreateContainer within sandbox \"44e1370eed4403d27cedb26afb3a4ecdff4fe23cbe7e992985dbb4ba02311406\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9993368129847bfccba79f105bd2fb9fce32cb6f05c427de68e707d8e13df40a\"" May 13 00:27:50.185199 containerd[1440]: time="2025-05-13T00:27:50.185174499Z" level=info msg="StartContainer for \"9993368129847bfccba79f105bd2fb9fce32cb6f05c427de68e707d8e13df40a\"" May 13 00:27:50.190648 kubelet[2479]: E0513 00:27:50.190620 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:27:50.199228 kubelet[2479]: I0513 00:27:50.199179 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb988\" (UniqueName: \"kubernetes.io/projected/b6344999-0401-40a5-9dd0-c356496dd9b9-kube-api-access-pb988\") pod \"tigera-operator-789496d6f5-v9c7p\" (UID: \"b6344999-0401-40a5-9dd0-c356496dd9b9\") " pod="tigera-operator/tigera-operator-789496d6f5-v9c7p" May 13 00:27:50.199321 kubelet[2479]: I0513 00:27:50.199237 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b6344999-0401-40a5-9dd0-c356496dd9b9-var-lib-calico\") pod \"tigera-operator-789496d6f5-v9c7p\" (UID: \"b6344999-0401-40a5-9dd0-c356496dd9b9\") " pod="tigera-operator/tigera-operator-789496d6f5-v9c7p" May 13 00:27:50.214581 systemd[1]: Started cri-containerd-9993368129847bfccba79f105bd2fb9fce32cb6f05c427de68e707d8e13df40a.scope - libcontainer container 9993368129847bfccba79f105bd2fb9fce32cb6f05c427de68e707d8e13df40a. 
May 13 00:27:50.242933 containerd[1440]: time="2025-05-13T00:27:50.242895214Z" level=info msg="StartContainer for \"9993368129847bfccba79f105bd2fb9fce32cb6f05c427de68e707d8e13df40a\" returns successfully" May 13 00:27:50.463986 containerd[1440]: time="2025-05-13T00:27:50.463935328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-v9c7p,Uid:b6344999-0401-40a5-9dd0-c356496dd9b9,Namespace:tigera-operator,Attempt:0,}" May 13 00:27:50.483650 containerd[1440]: time="2025-05-13T00:27:50.483571215Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:27:50.483777 containerd[1440]: time="2025-05-13T00:27:50.483667378Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:27:50.483777 containerd[1440]: time="2025-05-13T00:27:50.483694458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:27:50.483848 containerd[1440]: time="2025-05-13T00:27:50.483817261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:27:50.500583 systemd[1]: Started cri-containerd-9f8b493042d752ce5554244f6f516a194d315125c529649fa4794f6ac7943fad.scope - libcontainer container 9f8b493042d752ce5554244f6f516a194d315125c529649fa4794f6ac7943fad. May 13 00:27:50.526499 containerd[1440]: time="2025-05-13T00:27:50.526456712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-v9c7p,Uid:b6344999-0401-40a5-9dd0-c356496dd9b9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9f8b493042d752ce5554244f6f516a194d315125c529649fa4794f6ac7943fad\"" May 13 00:27:50.528461 containerd[1440]: time="2025-05-13T00:27:50.527984987Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 13 00:27:51.193140 kubelet[2479]: E0513 00:27:51.193108 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:27:52.196969 kubelet[2479]: E0513 00:27:52.196870 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:27:52.697607 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1571519715.mount: Deactivated successfully. 
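
Image pulls like the quay.io/tigera/operator:v1.36.7 pull that follows go through the CRI ImageService rather than the RuntimeService used for sandboxes and containers. A hedged sketch, again assuming containerd's default socket path:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	images := runtimeapi.NewImageServiceClient(conn)
	resp, err := images.PullImage(context.Background(), &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{Image: "quay.io/tigera/operator:v1.36.7"},
	})
	if err != nil {
		log.Fatal(err)
	}
	// ImageRef is the resolved image id, e.g. the sha256:27f7c2cf... reported below.
	fmt.Println("pulled:", resp.ImageRef)
}
```
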
May 13 00:27:52.958027 containerd[1440]: time="2025-05-13T00:27:52.957451664Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:27:52.959241 containerd[1440]: time="2025-05-13T00:27:52.959080977Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084" May 13 00:27:52.960019 containerd[1440]: time="2025-05-13T00:27:52.959985516Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:27:52.962444 containerd[1440]: time="2025-05-13T00:27:52.962342164Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:27:52.963486 containerd[1440]: time="2025-05-13T00:27:52.963444386Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 2.435428599s" May 13 00:27:52.963534 containerd[1440]: time="2025-05-13T00:27:52.963488947Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\"" May 13 00:27:52.969125 containerd[1440]: time="2025-05-13T00:27:52.969080061Z" level=info msg="CreateContainer within sandbox \"9f8b493042d752ce5554244f6f516a194d315125c529649fa4794f6ac7943fad\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 13 00:27:52.979013 containerd[1440]: time="2025-05-13T00:27:52.978816380Z" level=info msg="CreateContainer within sandbox \"9f8b493042d752ce5554244f6f516a194d315125c529649fa4794f6ac7943fad\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7f7d15b37408b7c343dc46fd84235b5c78d666ca97f162a6865d78423d6d8bf0\"" May 13 00:27:52.980232 containerd[1440]: time="2025-05-13T00:27:52.979438873Z" level=info msg="StartContainer for \"7f7d15b37408b7c343dc46fd84235b5c78d666ca97f162a6865d78423d6d8bf0\"" May 13 00:27:53.026627 systemd[1]: Started cri-containerd-7f7d15b37408b7c343dc46fd84235b5c78d666ca97f162a6865d78423d6d8bf0.scope - libcontainer container 7f7d15b37408b7c343dc46fd84235b5c78d666ca97f162a6865d78423d6d8bf0. 
May 13 00:27:53.089138 containerd[1440]: time="2025-05-13T00:27:53.089057658Z" level=info msg="StartContainer for \"7f7d15b37408b7c343dc46fd84235b5c78d666ca97f162a6865d78423d6d8bf0\" returns successfully" May 13 00:27:53.209141 kubelet[2479]: I0513 00:27:53.208541 2479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qf79c" podStartSLOduration=4.208523131 podStartE2EDuration="4.208523131s" podCreationTimestamp="2025-05-13 00:27:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:27:51.201943576 +0000 UTC m=+7.117870108" watchObservedRunningTime="2025-05-13 00:27:53.208523131 +0000 UTC m=+9.124449663" May 13 00:27:53.209141 kubelet[2479]: I0513 00:27:53.208690 2479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-789496d6f5-v9c7p" podStartSLOduration=0.768218632 podStartE2EDuration="3.208683374s" podCreationTimestamp="2025-05-13 00:27:50 +0000 UTC" firstStartedPulling="2025-05-13 00:27:50.527531777 +0000 UTC m=+6.443458309" lastFinishedPulling="2025-05-13 00:27:52.967996559 +0000 UTC m=+8.883923051" observedRunningTime="2025-05-13 00:27:53.208073122 +0000 UTC m=+9.123999654" watchObservedRunningTime="2025-05-13 00:27:53.208683374 +0000 UTC m=+9.124609986" May 13 00:27:56.128323 kubelet[2479]: E0513 00:27:56.128055 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:27:56.205444 kubelet[2479]: E0513 00:27:56.205292 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:27:56.520885 systemd[1]: Created slice kubepods-besteffort-pod0694d7b2_9074_4a4b_b5d5_c075b46735d0.slice - libcontainer container kubepods-besteffort-pod0694d7b2_9074_4a4b_b5d5_c075b46735d0.slice. 
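
The pod_startup_latency_tracker entries above report podStartE2EDuration as the observed running time minus podCreationTimestamp, while podStartSLOduration additionally excludes image-pull time (firstStartedPulling to lastFinishedPulling) — so the two coincide for kube-proxy-qf79c, which pulled nothing. A small check in Go, assuming the timestamps use Go's default time format, with the monotonic "m=+…" suffix dropped for parsing:

```go
package main

import (
	"fmt"
	"time"
)

// Go's default Time.String() layout, which these log timestamps appear to use.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func main() {
	// Values from the kube-proxy-qf79c entry above.
	created, err := time.Parse(layout, "2025-05-13 00:27:49 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2025-05-13 00:27:53.208523131 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println(running.Sub(created)) // prints 4.208523131s, the reported duration
}
```
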
May 13 00:27:56.544785 kubelet[2479]: I0513 00:27:56.544743 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0694d7b2-9074-4a4b-b5d5-c075b46735d0-typha-certs\") pod \"calico-typha-58d8c8554f-6xjvl\" (UID: \"0694d7b2-9074-4a4b-b5d5-c075b46735d0\") " pod="calico-system/calico-typha-58d8c8554f-6xjvl" May 13 00:27:56.545050 kubelet[2479]: I0513 00:27:56.544967 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0694d7b2-9074-4a4b-b5d5-c075b46735d0-tigera-ca-bundle\") pod \"calico-typha-58d8c8554f-6xjvl\" (UID: \"0694d7b2-9074-4a4b-b5d5-c075b46735d0\") " pod="calico-system/calico-typha-58d8c8554f-6xjvl" May 13 00:27:56.545050 kubelet[2479]: I0513 00:27:56.545016 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcmxc\" (UniqueName: \"kubernetes.io/projected/0694d7b2-9074-4a4b-b5d5-c075b46735d0-kube-api-access-rcmxc\") pod \"calico-typha-58d8c8554f-6xjvl\" (UID: \"0694d7b2-9074-4a4b-b5d5-c075b46735d0\") " pod="calico-system/calico-typha-58d8c8554f-6xjvl" May 13 00:27:56.573302 kubelet[2479]: I0513 00:27:56.573016 2479 status_manager.go:890] "Failed to get status for pod" podUID="f1cc4169-0d95-470b-8ea0-ada1c453005f" pod="calico-system/calico-node-6kblv" err="pods \"calico-node-6kblv\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" May 13 00:27:56.577615 systemd[1]: Created slice kubepods-besteffort-podf1cc4169_0d95_470b_8ea0_ada1c453005f.slice - libcontainer container kubepods-besteffort-podf1cc4169_0d95_470b_8ea0_ada1c453005f.slice. 
May 13 00:27:56.645494 kubelet[2479]: I0513 00:27:56.645456 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1cc4169-0d95-470b-8ea0-ada1c453005f-lib-modules\") pod \"calico-node-6kblv\" (UID: \"f1cc4169-0d95-470b-8ea0-ada1c453005f\") " pod="calico-system/calico-node-6kblv"
May 13 00:27:56.645494 kubelet[2479]: I0513 00:27:56.645499 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f1cc4169-0d95-470b-8ea0-ada1c453005f-cni-log-dir\") pod \"calico-node-6kblv\" (UID: \"f1cc4169-0d95-470b-8ea0-ada1c453005f\") " pod="calico-system/calico-node-6kblv"
May 13 00:27:56.645663 kubelet[2479]: I0513 00:27:56.645516 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f1cc4169-0d95-470b-8ea0-ada1c453005f-var-run-calico\") pod \"calico-node-6kblv\" (UID: \"f1cc4169-0d95-470b-8ea0-ada1c453005f\") " pod="calico-system/calico-node-6kblv"
May 13 00:27:56.645663 kubelet[2479]: I0513 00:27:56.645534 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1cc4169-0d95-470b-8ea0-ada1c453005f-xtables-lock\") pod \"calico-node-6kblv\" (UID: \"f1cc4169-0d95-470b-8ea0-ada1c453005f\") " pod="calico-system/calico-node-6kblv"
May 13 00:27:56.645663 kubelet[2479]: I0513 00:27:56.645549 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1cc4169-0d95-470b-8ea0-ada1c453005f-tigera-ca-bundle\") pod \"calico-node-6kblv\" (UID: \"f1cc4169-0d95-470b-8ea0-ada1c453005f\") " pod="calico-system/calico-node-6kblv"
May 13 00:27:56.645663 kubelet[2479]: I0513 00:27:56.645564 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f1cc4169-0d95-470b-8ea0-ada1c453005f-node-certs\") pod \"calico-node-6kblv\" (UID: \"f1cc4169-0d95-470b-8ea0-ada1c453005f\") " pod="calico-system/calico-node-6kblv"
May 13 00:27:56.645663 kubelet[2479]: I0513 00:27:56.645579 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f1cc4169-0d95-470b-8ea0-ada1c453005f-cni-net-dir\") pod \"calico-node-6kblv\" (UID: \"f1cc4169-0d95-470b-8ea0-ada1c453005f\") " pod="calico-system/calico-node-6kblv"
May 13 00:27:56.645780 kubelet[2479]: I0513 00:27:56.645616 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f1cc4169-0d95-470b-8ea0-ada1c453005f-var-lib-calico\") pod \"calico-node-6kblv\" (UID: \"f1cc4169-0d95-470b-8ea0-ada1c453005f\") " pod="calico-system/calico-node-6kblv"
May 13 00:27:56.645780 kubelet[2479]: I0513 00:27:56.645643 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr7lk\" (UniqueName: \"kubernetes.io/projected/f1cc4169-0d95-470b-8ea0-ada1c453005f-kube-api-access-mr7lk\") pod \"calico-node-6kblv\" (UID: \"f1cc4169-0d95-470b-8ea0-ada1c453005f\") " pod="calico-system/calico-node-6kblv"
May 13 00:27:56.645780 kubelet[2479]: I0513 00:27:56.645661 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f1cc4169-0d95-470b-8ea0-ada1c453005f-policysync\") pod \"calico-node-6kblv\" (UID: \"f1cc4169-0d95-470b-8ea0-ada1c453005f\") " pod="calico-system/calico-node-6kblv"
May 13 00:27:56.645780 kubelet[2479]: I0513 00:27:56.645676 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f1cc4169-0d95-470b-8ea0-ada1c453005f-cni-bin-dir\") pod \"calico-node-6kblv\" (UID: \"f1cc4169-0d95-470b-8ea0-ada1c453005f\") " pod="calico-system/calico-node-6kblv"
May 13 00:27:56.645780 kubelet[2479]: I0513 00:27:56.645692 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f1cc4169-0d95-470b-8ea0-ada1c453005f-flexvol-driver-host\") pod \"calico-node-6kblv\" (UID: \"f1cc4169-0d95-470b-8ea0-ada1c453005f\") " pod="calico-system/calico-node-6kblv"
May 13 00:27:56.688853 kubelet[2479]: E0513 00:27:56.688800 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bgptb" podUID="5e18653e-7d13-4d2f-8b0d-991f11e13bcd"
May 13 00:27:56.746393 kubelet[2479]: I0513 00:27:56.746318 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5e18653e-7d13-4d2f-8b0d-991f11e13bcd-registration-dir\") pod \"csi-node-driver-bgptb\" (UID: \"5e18653e-7d13-4d2f-8b0d-991f11e13bcd\") " pod="calico-system/csi-node-driver-bgptb"
May 13 00:27:56.746393 kubelet[2479]: I0513 00:27:56.746395 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb9xf\" (UniqueName: \"kubernetes.io/projected/5e18653e-7d13-4d2f-8b0d-991f11e13bcd-kube-api-access-vb9xf\") pod \"csi-node-driver-bgptb\" (UID: \"5e18653e-7d13-4d2f-8b0d-991f11e13bcd\") " pod="calico-system/csi-node-driver-bgptb"
May 13 00:27:56.746581 kubelet[2479]: I0513 00:27:56.746437 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5e18653e-7d13-4d2f-8b0d-991f11e13bcd-varrun\") pod \"csi-node-driver-bgptb\" (UID: \"5e18653e-7d13-4d2f-8b0d-991f11e13bcd\") " pod="calico-system/csi-node-driver-bgptb"
May 13 00:27:56.746581 kubelet[2479]: I0513 00:27:56.746484 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e18653e-7d13-4d2f-8b0d-991f11e13bcd-kubelet-dir\") pod \"csi-node-driver-bgptb\" (UID: \"5e18653e-7d13-4d2f-8b0d-991f11e13bcd\") " pod="calico-system/csi-node-driver-bgptb"
May 13 00:27:56.746581 kubelet[2479]: I0513 00:27:56.746503 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5e18653e-7d13-4d2f-8b0d-991f11e13bcd-socket-dir\") pod \"csi-node-driver-bgptb\" (UID: \"5e18653e-7d13-4d2f-8b0d-991f11e13bcd\") " pod="calico-system/csi-node-driver-bgptb"
May 13 00:27:56.749486 kubelet[2479]: E0513 00:27:56.749360 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 00:27:56.749486 kubelet[2479]: W0513 00:27:56.749380 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 00:27:56.755455 kubelet[2479]: E0513 00:27:56.755430 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 00:27:56.769125 kubelet[2479]: E0513 00:27:56.767309 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 00:27:56.769270 kubelet[2479]: W0513 00:27:56.769164 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 00:27:56.769270 kubelet[2479]: E0513 00:27:56.769226 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 00:27:56.826871 kubelet[2479]: E0513 00:27:56.826838 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:27:56.827320 containerd[1440]: time="2025-05-13T00:27:56.827260683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58d8c8554f-6xjvl,Uid:0694d7b2-9074-4a4b-b5d5-c075b46735d0,Namespace:calico-system,Attempt:0,}"
May 13 00:27:56.846346 containerd[1440]: time="2025-05-13T00:27:56.846276398Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 00:27:56.846346 containerd[1440]: time="2025-05-13T00:27:56.846324079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 00:27:56.846346 containerd[1440]: time="2025-05-13T00:27:56.846334919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:27:56.846567 containerd[1440]: time="2025-05-13T00:27:56.846420600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:27:56.846968 kubelet[2479]: E0513 00:27:56.846940 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 00:27:56.846968 kubelet[2479]: W0513 00:27:56.846959 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 00:27:56.847061 kubelet[2479]: E0513 00:27:56.846986 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 00:27:56.859186 kubelet[2479]: E0513 00:27:56.859090 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 00:27:56.859186 kubelet[2479]: W0513 00:27:56.859115 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 00:27:56.859186 kubelet[2479]: E0513 00:27:56.859130 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 00:27:56.865567 systemd[1]: Started cri-containerd-5333b49adc10bffd15b1589af3210225f5737ff6f1021b87f19d7fde65f73960.scope - libcontainer container 5333b49adc10bffd15b1589af3210225f5737ff6f1021b87f19d7fde65f73960.
May 13 00:27:56.870141 kubelet[2479]: E0513 00:27:56.870115 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 00:27:56.870141 kubelet[2479]: W0513 00:27:56.870136 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 00:27:56.870456 kubelet[2479]: E0513 00:27:56.870154 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 00:27:56.880710 kubelet[2479]: E0513 00:27:56.880670 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:27:56.881130 containerd[1440]: time="2025-05-13T00:27:56.881091774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6kblv,Uid:f1cc4169-0d95-470b-8ea0-ada1c453005f,Namespace:calico-system,Attempt:0,}"
May 13 00:27:56.896976 containerd[1440]: time="2025-05-13T00:27:56.896687552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58d8c8554f-6xjvl,Uid:0694d7b2-9074-4a4b-b5d5-c075b46735d0,Namespace:calico-system,Attempt:0,} returns sandbox id \"5333b49adc10bffd15b1589af3210225f5737ff6f1021b87f19d7fde65f73960\""
May 13 00:27:56.897689 kubelet[2479]: E0513 00:27:56.897668 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:27:56.920231 containerd[1440]: time="2025-05-13T00:27:56.920192781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\""
May 13 00:27:56.935803 containerd[1440]: time="2025-05-13T00:27:56.935392232Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 00:27:56.935803 containerd[1440]: time="2025-05-13T00:27:56.935774198Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 00:27:56.935803 containerd[1440]: time="2025-05-13T00:27:56.935786758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:27:56.935972 containerd[1440]: time="2025-05-13T00:27:56.935864920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:27:56.951629 systemd[1]: Started cri-containerd-7aefe0f9bad83cd53db3f8592d35ed9869340201f894e69567ba1512b0e04273.scope - libcontainer container 7aefe0f9bad83cd53db3f8592d35ed9869340201f894e69567ba1512b0e04273.
May 13 00:27:56.978562 containerd[1440]: time="2025-05-13T00:27:56.978522065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6kblv,Uid:f1cc4169-0d95-470b-8ea0-ada1c453005f,Namespace:calico-system,Attempt:0,} returns sandbox id \"7aefe0f9bad83cd53db3f8592d35ed9869340201f894e69567ba1512b0e04273\""
May 13 00:27:56.979364 kubelet[2479]: E0513 00:27:56.979325 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:27:57.102531 update_engine[1428]: I20250513 00:27:57.102049 1428 update_attempter.cc:509] Updating boot flags... May 13 00:27:57.140451 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3016)
May 13 00:27:57.173057 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3016)
May 13 00:27:57.827482 kubelet[2479]: E0513 00:27:57.827455 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:27:57.851378 kubelet[2479]: E0513 00:27:57.851347 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 00:27:57.851378 kubelet[2479]: W0513 00:27:57.851370 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 00:27:57.851552 kubelet[2479]: E0513 00:27:57.851390 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 00:27:57.851582 kubelet[2479]: E0513 00:27:57.851551 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 00:27:57.854435 kubelet[2479]: W0513 00:27:57.851559 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 00:27:57.854435 kubelet[2479]: E0513 00:27:57.854437 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 00:27:57.857846 kubelet[2479]: E0513 00:27:57.857836 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 00:27:57.857868 kubelet[2479]: W0513 00:27:57.857846 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 00:27:57.857868 kubelet[2479]: E0513 00:27:57.857853 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" May 13 00:27:57.857986 kubelet[2479]: E0513 00:27:57.857968 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:27:57.858012 kubelet[2479]: W0513 00:27:57.857986 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:27:57.858012 kubelet[2479]: E0513 00:27:57.857993 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:27:57.858120 kubelet[2479]: E0513 00:27:57.858110 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:27:57.858120 kubelet[2479]: W0513 00:27:57.858119 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:27:57.858165 kubelet[2479]: E0513 00:27:57.858136 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:27:57.858295 kubelet[2479]: E0513 00:27:57.858284 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:27:57.858318 kubelet[2479]: W0513 00:27:57.858294 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:27:57.858318 kubelet[2479]: E0513 00:27:57.858303 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:27:58.163603 kubelet[2479]: E0513 00:27:58.163493 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bgptb" podUID="5e18653e-7d13-4d2f-8b0d-991f11e13bcd" May 13 00:27:58.456738 containerd[1440]: time="2025-05-13T00:27:58.456614842Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:27:58.457335 containerd[1440]: time="2025-05-13T00:27:58.457300892Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571" May 13 00:27:58.458329 containerd[1440]: time="2025-05-13T00:27:58.458289347Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:27:58.460717 containerd[1440]: time="2025-05-13T00:27:58.460667382Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:27:58.461715 containerd[1440]: time="2025-05-13T00:27:58.461354632Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 1.541110131s" May 13 00:27:58.461715 containerd[1440]: time="2025-05-13T00:27:58.461404193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" May 13 00:27:58.467330 containerd[1440]: time="2025-05-13T00:27:58.467287521Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 13 00:27:58.488076 containerd[1440]: time="2025-05-13T00:27:58.488023471Z" level=info msg="CreateContainer within sandbox \"5333b49adc10bffd15b1589af3210225f5737ff6f1021b87f19d7fde65f73960\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 13 00:27:58.499329 containerd[1440]: time="2025-05-13T00:27:58.499285879Z" level=info msg="CreateContainer within sandbox \"5333b49adc10bffd15b1589af3210225f5737ff6f1021b87f19d7fde65f73960\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b707756b553158948b5dd4b6a11502a76f42bfce997980a5611c787c44730471\"" May 13 00:27:58.501259 containerd[1440]: time="2025-05-13T00:27:58.501222308Z" level=info msg="StartContainer for \"b707756b553158948b5dd4b6a11502a76f42bfce997980a5611c787c44730471\"" May 13 00:27:58.530616 systemd[1]: Started cri-containerd-b707756b553158948b5dd4b6a11502a76f42bfce997980a5611c787c44730471.scope - libcontainer container b707756b553158948b5dd4b6a11502a76f42bfce997980a5611c787c44730471. 
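The repeated driver-call failures above are kubelet's FlexVolume prober exec'ing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init and unmarshalling its stdout as JSON: with the uds binary absent, the exec fails, stdout stays empty, and decoding "" yields "unexpected end of JSON input". A minimal Go sketch of that call path follows; DriverStatus here is a simplified stand-in for the real FlexVolume status type, not kubelet's own definition.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus is a cut-down version of the JSON a FlexVolume driver is
// expected to print for init, e.g. {"status":"Success","capabilities":{"attach":false}}.
type DriverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

// initDriver mirrors the probe: run "<driver> init" and decode its output.
func initDriver(path string) (*DriverStatus, error) {
	out, err := exec.Command(path, "init").CombinedOutput()
	if err != nil {
		// A missing or non-executable binary fails here, as in the log.
		return nil, fmt.Errorf("driver call failed: %w, output: %q", err, out)
	}
	var st DriverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// Empty output decodes to "unexpected end of JSON input".
		return nil, fmt.Errorf("failed to unmarshal output: %w", err)
	}
	return &st, nil
}

func main() {
	st, err := initDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
	if err != nil {
		fmt.Println("probe error:", err)
		return
	}
	fmt.Printf("driver initialized: %+v\n", st)
}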
May 13 00:27:58.563847 containerd[1440]: time="2025-05-13T00:27:58.563795363Z" level=info msg="StartContainer for \"b707756b553158948b5dd4b6a11502a76f42bfce997980a5611c787c44730471\" returns successfully" May 13 00:27:59.232328 kubelet[2479]: E0513 00:27:59.232208 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:27:59.265363 kubelet[2479]: E0513 00:27:59.265315 2479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:27:59.265363 kubelet[2479]: W0513 00:27:59.265337 2479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:27:59.265824 kubelet[2479]: E0513 00:27:59.265788 2479 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:27:59.629164 containerd[1440]: time="2025-05-13T00:27:59.628290816Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:27:59.629164 containerd[1440]: time="2025-05-13T00:27:59.628710982Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" May 13 00:27:59.629730 containerd[1440]: time="2025-05-13T00:27:59.629701756Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:27:59.631839 containerd[1440]: time="2025-05-13T00:27:59.631803306Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:27:59.632475 containerd[1440]: time="2025-05-13T00:27:59.632443515Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 1.165124154s" May 13 00:27:59.632475 containerd[1440]: time="2025-05-13T00:27:59.632472755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 13 00:27:59.637114 containerd[1440]: time="2025-05-13T00:27:59.637078861Z" level=info msg="CreateContainer within sandbox \"7aefe0f9bad83cd53db3f8592d35ed9869340201f894e69567ba1512b0e04273\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 13 00:27:59.658155 containerd[1440]: time="2025-05-13T00:27:59.658026599Z" level=info msg="CreateContainer within sandbox \"7aefe0f9bad83cd53db3f8592d35ed9869340201f894e69567ba1512b0e04273\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3b78d36146b9820dfccd342b89309ff7b8ae988820aa58b1c049b5ee6b8cbedb\"" May 13 00:27:59.658528 containerd[1440]: time="2025-05-13T00:27:59.658498925Z" level=info msg="StartContainer for \"3b78d36146b9820dfccd342b89309ff7b8ae988820aa58b1c049b5ee6b8cbedb\"" May 13 00:27:59.688601 systemd[1]: Started cri-containerd-3b78d36146b9820dfccd342b89309ff7b8ae988820aa58b1c049b5ee6b8cbedb.scope - libcontainer container 3b78d36146b9820dfccd342b89309ff7b8ae988820aa58b1c049b5ee6b8cbedb. May 13 00:27:59.712171 containerd[1440]: time="2025-05-13T00:27:59.712116528Z" level=info msg="StartContainer for \"3b78d36146b9820dfccd342b89309ff7b8ae988820aa58b1c049b5ee6b8cbedb\" returns successfully" May 13 00:27:59.739082 systemd[1]: cri-containerd-3b78d36146b9820dfccd342b89309ff7b8ae988820aa58b1c049b5ee6b8cbedb.scope: Deactivated successfully. May 13 00:27:59.768002 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b78d36146b9820dfccd342b89309ff7b8ae988820aa58b1c049b5ee6b8cbedb-rootfs.mount: Deactivated successfully. 
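The PullImage/ImageCreate pairs above (typha, then pod2daemon-flexvol) are ordinary containerd pulls performed in the k8s.io namespace that CRI uses. A rough equivalent with the containerd Go client, assuming the default socket path:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" containerd namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull and unpack, as the kubelet-driven pulls in the log do.
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled:", image.Name())
}

The flexvol-driver container that follows is short-lived by design: Calico's pod2daemon-flexvol image copies the uds driver binary into the kubelet's FlexVolume plugin directory and exits, which is why its scope deactivates right after StartContainer, and which is also what eventually silences the nodeagent~uds probe errors above.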
May 13 00:27:59.813427 containerd[1440]: time="2025-05-13T00:27:59.809966479Z" level=info msg="shim disconnected" id=3b78d36146b9820dfccd342b89309ff7b8ae988820aa58b1c049b5ee6b8cbedb namespace=k8s.io May 13 00:27:59.813427 containerd[1440]: time="2025-05-13T00:27:59.813429648Z" level=warning msg="cleaning up after shim disconnected" id=3b78d36146b9820dfccd342b89309ff7b8ae988820aa58b1c049b5ee6b8cbedb namespace=k8s.io May 13 00:27:59.813583 containerd[1440]: time="2025-05-13T00:27:59.813442808Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:28:00.162474 kubelet[2479]: E0513 00:28:00.162402 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bgptb" podUID="5e18653e-7d13-4d2f-8b0d-991f11e13bcd" May 13 00:28:00.230539 kubelet[2479]: I0513 00:28:00.230511 2479 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 00:28:00.230794 kubelet[2479]: E0513 00:28:00.230777 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:28:00.232166 kubelet[2479]: E0513 00:28:00.232067 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:28:00.232802 containerd[1440]: time="2025-05-13T00:28:00.232673372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 13 00:28:00.249213 kubelet[2479]: I0513 00:28:00.249167 2479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-58d8c8554f-6xjvl" podStartSLOduration=2.6802449729999998 podStartE2EDuration="4.249154995s" podCreationTimestamp="2025-05-13 00:27:56 +0000 UTC" firstStartedPulling="2025-05-13 00:27:56.898203937 +0000 UTC m=+12.814130469" lastFinishedPulling="2025-05-13 00:27:58.467113959 +0000 UTC m=+14.383040491" observedRunningTime="2025-05-13 00:27:59.259011245 +0000 UTC m=+15.174937777" watchObservedRunningTime="2025-05-13 00:28:00.249154995 +0000 UTC m=+16.165081527" May 13 00:28:02.162966 kubelet[2479]: E0513 00:28:02.162909 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bgptb" podUID="5e18653e-7d13-4d2f-8b0d-991f11e13bcd" May 13 00:28:03.774634 kubelet[2479]: I0513 00:28:03.774588 2479 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 00:28:03.775054 kubelet[2479]: E0513 00:28:03.774940 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:28:04.164019 kubelet[2479]: E0513 00:28:04.163976 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bgptb" podUID="5e18653e-7d13-4d2f-8b0d-991f11e13bcd" May 13 00:28:04.238230 kubelet[2479]: E0513 00:28:04.238198 2479 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:28:04.545200 containerd[1440]: time="2025-05-13T00:28:04.545096757Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:04.545761 containerd[1440]: time="2025-05-13T00:28:04.545730284Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" May 13 00:28:04.546645 containerd[1440]: time="2025-05-13T00:28:04.546611134Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:04.554951 containerd[1440]: time="2025-05-13T00:28:04.554920067Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:04.559300 containerd[1440]: time="2025-05-13T00:28:04.556366883Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 4.323443629s" May 13 00:28:04.559300 containerd[1440]: time="2025-05-13T00:28:04.556403684Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" May 13 00:28:04.563737 containerd[1440]: time="2025-05-13T00:28:04.563704966Z" level=info msg="CreateContainer within sandbox \"7aefe0f9bad83cd53db3f8592d35ed9869340201f894e69567ba1512b0e04273\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 13 00:28:04.579880 containerd[1440]: time="2025-05-13T00:28:04.579775226Z" level=info msg="CreateContainer within sandbox \"7aefe0f9bad83cd53db3f8592d35ed9869340201f894e69567ba1512b0e04273\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"769a9b05ef3577ff80bc36f8d4f8cde12d5367cbacff7a86a50b7bb2e27953e1\"" May 13 00:28:04.581016 containerd[1440]: time="2025-05-13T00:28:04.580367713Z" level=info msg="StartContainer for \"769a9b05ef3577ff80bc36f8d4f8cde12d5367cbacff7a86a50b7bb2e27953e1\"" May 13 00:28:04.614592 systemd[1]: Started cri-containerd-769a9b05ef3577ff80bc36f8d4f8cde12d5367cbacff7a86a50b7bb2e27953e1.scope - libcontainer container 769a9b05ef3577ff80bc36f8d4f8cde12d5367cbacff7a86a50b7bb2e27953e1. May 13 00:28:04.652374 containerd[1440]: time="2025-05-13T00:28:04.652317760Z" level=info msg="StartContainer for \"769a9b05ef3577ff80bc36f8d4f8cde12d5367cbacff7a86a50b7bb2e27953e1\" returns successfully" May 13 00:28:05.216218 systemd[1]: cri-containerd-769a9b05ef3577ff80bc36f8d4f8cde12d5367cbacff7a86a50b7bb2e27953e1.scope: Deactivated successfully. May 13 00:28:05.233333 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-769a9b05ef3577ff80bc36f8d4f8cde12d5367cbacff7a86a50b7bb2e27953e1-rootfs.mount: Deactivated successfully. 
May 13 00:28:05.241533 kubelet[2479]: E0513 00:28:05.241486 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:28:05.277399 kubelet[2479]: I0513 00:28:05.275548 2479 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 13 00:28:05.302441 containerd[1440]: time="2025-05-13T00:28:05.302370306Z" level=info msg="shim disconnected" id=769a9b05ef3577ff80bc36f8d4f8cde12d5367cbacff7a86a50b7bb2e27953e1 namespace=k8s.io May 13 00:28:05.302441 containerd[1440]: time="2025-05-13T00:28:05.302432547Z" level=warning msg="cleaning up after shim disconnected" id=769a9b05ef3577ff80bc36f8d4f8cde12d5367cbacff7a86a50b7bb2e27953e1 namespace=k8s.io May 13 00:28:05.302441 containerd[1440]: time="2025-05-13T00:28:05.302442707Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:28:05.335971 systemd[1]: Created slice kubepods-besteffort-pod66dabaa2_e4d2_4e87_bf56_94c73a7429d1.slice - libcontainer container kubepods-besteffort-pod66dabaa2_e4d2_4e87_bf56_94c73a7429d1.slice. May 13 00:28:05.342057 systemd[1]: Created slice kubepods-burstable-podf3494a81_13f5_44da_afd1_f8752f281b7f.slice - libcontainer container kubepods-burstable-podf3494a81_13f5_44da_afd1_f8752f281b7f.slice. May 13 00:28:05.351634 systemd[1]: Created slice kubepods-besteffort-pod6a95ccfc_05c7_4192_9854_3fc518d2c335.slice - libcontainer container kubepods-besteffort-pod6a95ccfc_05c7_4192_9854_3fc518d2c335.slice. May 13 00:28:05.360950 systemd[1]: Created slice kubepods-besteffort-pod3294737e_8c5e_4292_96da_dff74f2e17e9.slice - libcontainer container kubepods-besteffort-pod3294737e_8c5e_4292_96da_dff74f2e17e9.slice. May 13 00:28:05.368144 systemd[1]: Created slice kubepods-burstable-pod4cfd3c97_e6c2_46da_a620_1303e1ac26b6.slice - libcontainer container kubepods-burstable-pod4cfd3c97_e6c2_46da_a620_1303e1ac26b6.slice. 
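The recurring dns.go "Nameserver limits exceeded" entries come from kubelet applying at most three nameservers from the host's resolv.conf (the classic glibc limit) and dropping the rest; the applied line in the log shows exactly three. A simplified sketch of that truncation; the constant and parsing are stand-ins for kubelet's dns.go, assuming the limit of three:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // kubelet's historical cap, matching glibc MAXNS

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		// The situation the log reports: extras are omitted and only
		// the first three nameservers are applied.
		fmt.Printf("Nameserver limits exceeded, applying: %v\n", servers[:maxNameservers])
	}
}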
May 13 00:28:05.415895 kubelet[2479]: I0513 00:28:05.415786 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66dabaa2-e4d2-4e87-bf56-94c73a7429d1-tigera-ca-bundle\") pod \"calico-kube-controllers-56cdf655d5-cvxnm\" (UID: \"66dabaa2-e4d2-4e87-bf56-94c73a7429d1\") " pod="calico-system/calico-kube-controllers-56cdf655d5-cvxnm" May 13 00:28:05.416098 kubelet[2479]: I0513 00:28:05.415905 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89gfv\" (UniqueName: \"kubernetes.io/projected/6a95ccfc-05c7-4192-9854-3fc518d2c335-kube-api-access-89gfv\") pod \"calico-apiserver-74cfd5766c-2dwxc\" (UID: \"6a95ccfc-05c7-4192-9854-3fc518d2c335\") " pod="calico-apiserver/calico-apiserver-74cfd5766c-2dwxc" May 13 00:28:05.416098 kubelet[2479]: I0513 00:28:05.415930 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbc87\" (UniqueName: \"kubernetes.io/projected/3294737e-8c5e-4292-96da-dff74f2e17e9-kube-api-access-lbc87\") pod \"calico-apiserver-74cfd5766c-hkfcg\" (UID: \"3294737e-8c5e-4292-96da-dff74f2e17e9\") " pod="calico-apiserver/calico-apiserver-74cfd5766c-hkfcg" May 13 00:28:05.416098 kubelet[2479]: I0513 00:28:05.415946 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rwc2\" (UniqueName: \"kubernetes.io/projected/66dabaa2-e4d2-4e87-bf56-94c73a7429d1-kube-api-access-9rwc2\") pod \"calico-kube-controllers-56cdf655d5-cvxnm\" (UID: \"66dabaa2-e4d2-4e87-bf56-94c73a7429d1\") " pod="calico-system/calico-kube-controllers-56cdf655d5-cvxnm" May 13 00:28:05.416098 kubelet[2479]: I0513 00:28:05.415967 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3294737e-8c5e-4292-96da-dff74f2e17e9-calico-apiserver-certs\") pod \"calico-apiserver-74cfd5766c-hkfcg\" (UID: \"3294737e-8c5e-4292-96da-dff74f2e17e9\") " pod="calico-apiserver/calico-apiserver-74cfd5766c-hkfcg" May 13 00:28:05.416098 kubelet[2479]: I0513 00:28:05.415983 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4cfd3c97-e6c2-46da-a620-1303e1ac26b6-config-volume\") pod \"coredns-668d6bf9bc-x58pd\" (UID: \"4cfd3c97-e6c2-46da-a620-1303e1ac26b6\") " pod="kube-system/coredns-668d6bf9bc-x58pd" May 13 00:28:05.416292 kubelet[2479]: I0513 00:28:05.415999 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3494a81-13f5-44da-afd1-f8752f281b7f-config-volume\") pod \"coredns-668d6bf9bc-s4q64\" (UID: \"f3494a81-13f5-44da-afd1-f8752f281b7f\") " pod="kube-system/coredns-668d6bf9bc-s4q64" May 13 00:28:05.416292 kubelet[2479]: I0513 00:28:05.416014 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wt99\" (UniqueName: \"kubernetes.io/projected/f3494a81-13f5-44da-afd1-f8752f281b7f-kube-api-access-6wt99\") pod \"coredns-668d6bf9bc-s4q64\" (UID: \"f3494a81-13f5-44da-afd1-f8752f281b7f\") " pod="kube-system/coredns-668d6bf9bc-s4q64" May 13 00:28:05.416292 kubelet[2479]: I0513 00:28:05.416097 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6a95ccfc-05c7-4192-9854-3fc518d2c335-calico-apiserver-certs\") pod \"calico-apiserver-74cfd5766c-2dwxc\" (UID: \"6a95ccfc-05c7-4192-9854-3fc518d2c335\") " pod="calico-apiserver/calico-apiserver-74cfd5766c-2dwxc" May 13 00:28:05.416627 kubelet[2479]: I0513 00:28:05.416369 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c664s\" (UniqueName: \"kubernetes.io/projected/4cfd3c97-e6c2-46da-a620-1303e1ac26b6-kube-api-access-c664s\") pod \"coredns-668d6bf9bc-x58pd\" (UID: \"4cfd3c97-e6c2-46da-a620-1303e1ac26b6\") " pod="kube-system/coredns-668d6bf9bc-x58pd" May 13 00:28:05.641451 containerd[1440]: time="2025-05-13T00:28:05.641292863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56cdf655d5-cvxnm,Uid:66dabaa2-e4d2-4e87-bf56-94c73a7429d1,Namespace:calico-system,Attempt:0,}" May 13 00:28:05.647634 kubelet[2479]: E0513 00:28:05.647592 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:28:05.648478 containerd[1440]: time="2025-05-13T00:28:05.648172017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s4q64,Uid:f3494a81-13f5-44da-afd1-f8752f281b7f,Namespace:kube-system,Attempt:0,}" May 13 00:28:05.656928 containerd[1440]: time="2025-05-13T00:28:05.656874830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74cfd5766c-2dwxc,Uid:6a95ccfc-05c7-4192-9854-3fc518d2c335,Namespace:calico-apiserver,Attempt:0,}" May 13 00:28:05.665888 containerd[1440]: time="2025-05-13T00:28:05.665845087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74cfd5766c-hkfcg,Uid:3294737e-8c5e-4292-96da-dff74f2e17e9,Namespace:calico-apiserver,Attempt:0,}" May 13 00:28:05.672216 kubelet[2479]: E0513 00:28:05.671705 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:28:05.678391 containerd[1440]: time="2025-05-13T00:28:05.677568492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-x58pd,Uid:4cfd3c97-e6c2-46da-a620-1303e1ac26b6,Namespace:kube-system,Attempt:0,}" May 13 00:28:05.997944 containerd[1440]: time="2025-05-13T00:28:05.997589126Z" level=error msg="Failed to destroy network for sandbox \"9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:05.998131 containerd[1440]: time="2025-05-13T00:28:05.998098532Z" level=error msg="encountered an error cleaning up failed sandbox \"9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:05.998176 containerd[1440]: time="2025-05-13T00:28:05.998161772Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s4q64,Uid:f3494a81-13f5-44da-afd1-f8752f281b7f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:05.998424 kubelet[2479]: E0513 00:28:05.998377 2479 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:05.998922 containerd[1440]: time="2025-05-13T00:28:05.998882100Z" level=error msg="Failed to destroy network for sandbox \"1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:05.999775 containerd[1440]: time="2025-05-13T00:28:05.999738149Z" level=error msg="encountered an error cleaning up failed sandbox \"1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:05.999775 containerd[1440]: time="2025-05-13T00:28:05.999782550Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-x58pd,Uid:4cfd3c97-e6c2-46da-a620-1303e1ac26b6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:06.000918 kubelet[2479]: E0513 00:28:05.999932 2479 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:06.005425 kubelet[2479]: E0513 00:28:06.004665 2479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-x58pd" May 13 00:28:06.005425 kubelet[2479]: E0513 00:28:06.004712 2479 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-x58pd" May 13 00:28:06.005425 kubelet[2479]: E0513 00:28:06.005185 2479 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-x58pd_kube-system(4cfd3c97-e6c2-46da-a620-1303e1ac26b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-x58pd_kube-system(4cfd3c97-e6c2-46da-a620-1303e1ac26b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-x58pd" podUID="4cfd3c97-e6c2-46da-a620-1303e1ac26b6" May 13 00:28:06.005733 kubelet[2479]: E0513 00:28:06.005609 2479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-s4q64" May 13 00:28:06.005733 kubelet[2479]: E0513 00:28:06.005650 2479 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-s4q64" May 13 00:28:06.005733 kubelet[2479]: E0513 00:28:06.005695 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-s4q64_kube-system(f3494a81-13f5-44da-afd1-f8752f281b7f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-s4q64_kube-system(f3494a81-13f5-44da-afd1-f8752f281b7f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-s4q64" podUID="f3494a81-13f5-44da-afd1-f8752f281b7f" May 13 00:28:06.006339 containerd[1440]: time="2025-05-13T00:28:06.006291977Z" level=error msg="Failed to destroy network for sandbox \"c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:06.006718 containerd[1440]: time="2025-05-13T00:28:06.006691741Z" level=error msg="Failed to destroy network for sandbox \"70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:06.007063 containerd[1440]: time="2025-05-13T00:28:06.007024384Z" level=error msg="encountered an error cleaning up failed sandbox \"70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8\", marking sandbox state as SANDBOX_UNKNOWN" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:06.007238 containerd[1440]: time="2025-05-13T00:28:06.007148826Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74cfd5766c-hkfcg,Uid:3294737e-8c5e-4292-96da-dff74f2e17e9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:06.007334 kubelet[2479]: E0513 00:28:06.007309 2479 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:06.007425 kubelet[2479]: E0513 00:28:06.007347 2479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74cfd5766c-hkfcg" May 13 00:28:06.007521 kubelet[2479]: E0513 00:28:06.007428 2479 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74cfd5766c-hkfcg" May 13 00:28:06.007521 kubelet[2479]: E0513 00:28:06.007464 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74cfd5766c-hkfcg_calico-apiserver(3294737e-8c5e-4292-96da-dff74f2e17e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74cfd5766c-hkfcg_calico-apiserver(3294737e-8c5e-4292-96da-dff74f2e17e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74cfd5766c-hkfcg" podUID="3294737e-8c5e-4292-96da-dff74f2e17e9" May 13 00:28:06.007845 containerd[1440]: time="2025-05-13T00:28:06.007795752Z" level=error msg="encountered an error cleaning up failed sandbox \"c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:06.008061 containerd[1440]: 
time="2025-05-13T00:28:06.007937074Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56cdf655d5-cvxnm,Uid:66dabaa2-e4d2-4e87-bf56-94c73a7429d1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:06.008258 kubelet[2479]: E0513 00:28:06.008229 2479 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:06.008335 kubelet[2479]: E0513 00:28:06.008267 2479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-56cdf655d5-cvxnm" May 13 00:28:06.008335 kubelet[2479]: E0513 00:28:06.008283 2479 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-56cdf655d5-cvxnm" May 13 00:28:06.008335 kubelet[2479]: E0513 00:28:06.008311 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-56cdf655d5-cvxnm_calico-system(66dabaa2-e4d2-4e87-bf56-94c73a7429d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-56cdf655d5-cvxnm_calico-system(66dabaa2-e4d2-4e87-bf56-94c73a7429d1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-56cdf655d5-cvxnm" podUID="66dabaa2-e4d2-4e87-bf56-94c73a7429d1" May 13 00:28:06.015642 containerd[1440]: time="2025-05-13T00:28:06.015606233Z" level=error msg="Failed to destroy network for sandbox \"59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:06.015922 containerd[1440]: time="2025-05-13T00:28:06.015897036Z" level=error msg="encountered an error cleaning up failed sandbox \"59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:06.015978 containerd[1440]: time="2025-05-13T00:28:06.015946396Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74cfd5766c-2dwxc,Uid:6a95ccfc-05c7-4192-9854-3fc518d2c335,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:06.016132 kubelet[2479]: E0513 00:28:06.016103 2479 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:06.016189 kubelet[2479]: E0513 00:28:06.016149 2479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74cfd5766c-2dwxc" May 13 00:28:06.016189 kubelet[2479]: E0513 00:28:06.016166 2479 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74cfd5766c-2dwxc" May 13 00:28:06.016287 kubelet[2479]: E0513 00:28:06.016204 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74cfd5766c-2dwxc_calico-apiserver(6a95ccfc-05c7-4192-9854-3fc518d2c335)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74cfd5766c-2dwxc_calico-apiserver(6a95ccfc-05c7-4192-9854-3fc518d2c335)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74cfd5766c-2dwxc" podUID="6a95ccfc-05c7-4192-9854-3fc518d2c335" May 13 00:28:06.169709 systemd[1]: Created slice kubepods-besteffort-pod5e18653e_7d13_4d2f_8b0d_991f11e13bcd.slice - libcontainer container kubepods-besteffort-pod5e18653e_7d13_4d2f_8b0d_991f11e13bcd.slice. 
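The slice name in the entry above encodes the pod's QoS class and UID. A minimal sketch of that naming convention as it appears in this log, assuming the systemd cgroup driver (this is not kubelet's actual source):

```go
// Sketch of the systemd slice naming visible above: the pod UID's dashes are
// escaped to underscores and nested under the QoS-class slice. Hypothetical
// helper for illustration only.
package main

import (
	"fmt"
	"strings"
)

func podSlice(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// UID taken from the csi-node-driver-bgptb entries in this log.
	fmt.Println(podSlice("besteffort", "5e18653e-7d13-4d2f-8b0d-991f11e13bcd"))
	// Output: kubepods-besteffort-pod5e18653e_7d13_4d2f_8b0d_991f11e13bcd.slice
}
```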
May 13 00:28:06.172163 containerd[1440]: time="2025-05-13T00:28:06.172125680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bgptb,Uid:5e18653e-7d13-4d2f-8b0d-991f11e13bcd,Namespace:calico-system,Attempt:0,}" May 13 00:28:06.224758 containerd[1440]: time="2025-05-13T00:28:06.224637299Z" level=error msg="Failed to destroy network for sandbox \"754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:06.225281 containerd[1440]: time="2025-05-13T00:28:06.225108144Z" level=error msg="encountered an error cleaning up failed sandbox \"754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:06.225281 containerd[1440]: time="2025-05-13T00:28:06.225182185Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bgptb,Uid:5e18653e-7d13-4d2f-8b0d-991f11e13bcd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:06.226464 kubelet[2479]: E0513 00:28:06.225496 2479 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:06.226464 kubelet[2479]: E0513 00:28:06.225543 2479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bgptb" May 13 00:28:06.226464 kubelet[2479]: E0513 00:28:06.225563 2479 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bgptb" May 13 00:28:06.226622 kubelet[2479]: E0513 00:28:06.225600 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bgptb_calico-system(5e18653e-7d13-4d2f-8b0d-991f11e13bcd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bgptb_calico-system(5e18653e-7d13-4d2f-8b0d-991f11e13bcd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bgptb" podUID="5e18653e-7d13-4d2f-8b0d-991f11e13bcd" May 13 00:28:06.244497 kubelet[2479]: I0513 00:28:06.244401 2479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86" May 13 00:28:06.245724 containerd[1440]: time="2025-05-13T00:28:06.245149110Z" level=info msg="StopPodSandbox for \"754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86\"" May 13 00:28:06.245724 containerd[1440]: time="2025-05-13T00:28:06.245307631Z" level=info msg="Ensure that sandbox 754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86 in task-service has been cleanup successfully" May 13 00:28:06.248319 kubelet[2479]: I0513 00:28:06.248079 2479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8" May 13 00:28:06.251586 containerd[1440]: time="2025-05-13T00:28:06.250172881Z" level=info msg="StopPodSandbox for \"70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8\"" May 13 00:28:06.251586 containerd[1440]: time="2025-05-13T00:28:06.250337083Z" level=info msg="Ensure that sandbox 70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8 in task-service has been cleanup successfully" May 13 00:28:06.251798 kubelet[2479]: I0513 00:28:06.251771 2479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e" May 13 00:28:06.252445 containerd[1440]: time="2025-05-13T00:28:06.252221382Z" level=info msg="StopPodSandbox for \"59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e\"" May 13 00:28:06.252506 containerd[1440]: time="2025-05-13T00:28:06.252467985Z" level=info msg="Ensure that sandbox 59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e in task-service has been cleanup successfully" May 13 00:28:06.255914 kubelet[2479]: I0513 00:28:06.255886 2479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4" May 13 00:28:06.258047 containerd[1440]: time="2025-05-13T00:28:06.256851590Z" level=info msg="StopPodSandbox for \"9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4\"" May 13 00:28:06.258047 containerd[1440]: time="2025-05-13T00:28:06.257015592Z" level=info msg="Ensure that sandbox 9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4 in task-service has been cleanup successfully" May 13 00:28:06.259688 kubelet[2479]: I0513 00:28:06.259329 2479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345" May 13 00:28:06.261690 containerd[1440]: time="2025-05-13T00:28:06.261572158Z" level=info msg="StopPodSandbox for \"c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345\"" May 13 00:28:06.262302 containerd[1440]: time="2025-05-13T00:28:06.261728840Z" level=info msg="Ensure that sandbox c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345 in task-service has been cleanup successfully" May 13 00:28:06.266716 kubelet[2479]: E0513 00:28:06.265835 2479 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:28:06.267162 containerd[1440]: time="2025-05-13T00:28:06.266796692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 13 00:28:06.269816 kubelet[2479]: I0513 00:28:06.269790 2479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c" May 13 00:28:06.272136 containerd[1440]: time="2025-05-13T00:28:06.271908945Z" level=info msg="StopPodSandbox for \"1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c\"" May 13 00:28:06.272136 containerd[1440]: time="2025-05-13T00:28:06.272117827Z" level=info msg="Ensure that sandbox 1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c in task-service has been cleanup successfully" May 13 00:28:06.299326 containerd[1440]: time="2025-05-13T00:28:06.297809971Z" level=error msg="StopPodSandbox for \"754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86\" failed" error="failed to destroy network for sandbox \"754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:06.299467 kubelet[2479]: E0513 00:28:06.298098 2479 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86" May 13 00:28:06.299467 kubelet[2479]: E0513 00:28:06.298158 2479 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86"} May 13 00:28:06.299467 kubelet[2479]: E0513 00:28:06.298216 2479 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5e18653e-7d13-4d2f-8b0d-991f11e13bcd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 00:28:06.299467 kubelet[2479]: E0513 00:28:06.298275 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5e18653e-7d13-4d2f-8b0d-991f11e13bcd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bgptb" podUID="5e18653e-7d13-4d2f-8b0d-991f11e13bcd" May 13 00:28:06.311590 containerd[1440]: time="2025-05-13T00:28:06.311532752Z" level=error msg="StopPodSandbox for 
\"59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e\" failed" error="failed to destroy network for sandbox \"59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:06.311809 kubelet[2479]: E0513 00:28:06.311765 2479 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e" May 13 00:28:06.311861 kubelet[2479]: E0513 00:28:06.311817 2479 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e"} May 13 00:28:06.311861 kubelet[2479]: E0513 00:28:06.311851 2479 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6a95ccfc-05c7-4192-9854-3fc518d2c335\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 00:28:06.311953 kubelet[2479]: E0513 00:28:06.311872 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6a95ccfc-05c7-4192-9854-3fc518d2c335\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74cfd5766c-2dwxc" podUID="6a95ccfc-05c7-4192-9854-3fc518d2c335" May 13 00:28:06.316415 containerd[1440]: time="2025-05-13T00:28:06.316344681Z" level=error msg="StopPodSandbox for \"70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8\" failed" error="failed to destroy network for sandbox \"70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:06.316694 kubelet[2479]: E0513 00:28:06.316605 2479 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8" May 13 00:28:06.316757 kubelet[2479]: E0513 00:28:06.316699 2479 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8"} May 13 00:28:06.316757 kubelet[2479]: E0513 00:28:06.316733 2479 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3294737e-8c5e-4292-96da-dff74f2e17e9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 00:28:06.316845 kubelet[2479]: E0513 00:28:06.316762 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3294737e-8c5e-4292-96da-dff74f2e17e9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74cfd5766c-hkfcg" podUID="3294737e-8c5e-4292-96da-dff74f2e17e9" May 13 00:28:06.329349 containerd[1440]: time="2025-05-13T00:28:06.329285614Z" level=error msg="StopPodSandbox for \"9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4\" failed" error="failed to destroy network for sandbox \"9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:06.329539 kubelet[2479]: E0513 00:28:06.329505 2479 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4" May 13 00:28:06.329588 kubelet[2479]: E0513 00:28:06.329550 2479 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4"} May 13 00:28:06.329588 kubelet[2479]: E0513 00:28:06.329581 2479 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f3494a81-13f5-44da-afd1-f8752f281b7f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 00:28:06.329711 kubelet[2479]: E0513 00:28:06.329604 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f3494a81-13f5-44da-afd1-f8752f281b7f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-s4q64" podUID="f3494a81-13f5-44da-afd1-f8752f281b7f" May 13 00:28:06.330989 containerd[1440]: time="2025-05-13T00:28:06.330946991Z" level=error msg="StopPodSandbox for \"c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345\" failed" error="failed to destroy network for sandbox \"c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:06.331269 kubelet[2479]: E0513 00:28:06.331109 2479 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345" May 13 00:28:06.331269 kubelet[2479]: E0513 00:28:06.331144 2479 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345"} May 13 00:28:06.331269 kubelet[2479]: E0513 00:28:06.331169 2479 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"66dabaa2-e4d2-4e87-bf56-94c73a7429d1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 00:28:06.331269 kubelet[2479]: E0513 00:28:06.331190 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"66dabaa2-e4d2-4e87-bf56-94c73a7429d1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-56cdf655d5-cvxnm" podUID="66dabaa2-e4d2-4e87-bf56-94c73a7429d1" May 13 00:28:06.332888 containerd[1440]: time="2025-05-13T00:28:06.332844890Z" level=error msg="StopPodSandbox for \"1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c\" failed" error="failed to destroy network for sandbox \"1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:06.333091 kubelet[2479]: E0513 00:28:06.333052 2479 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c" May 13 00:28:06.333091 kubelet[2479]: E0513 00:28:06.333088 2479 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c"} May 13 00:28:06.333164 kubelet[2479]: E0513 00:28:06.333122 2479 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4cfd3c97-e6c2-46da-a620-1303e1ac26b6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 00:28:06.333164 kubelet[2479]: E0513 00:28:06.333139 2479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4cfd3c97-e6c2-46da-a620-1303e1ac26b6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-x58pd" podUID="4cfd3c97-e6c2-46da-a620-1303e1ac26b6" May 13 00:28:06.575217 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4-shm.mount: Deactivated successfully. May 13 00:28:06.575309 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345-shm.mount: Deactivated successfully. May 13 00:28:10.123909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2199797857.mount: Deactivated successfully. 
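Every CNI ADD and DEL in the run above fails on the same missing file. A minimal sketch of the failing check, assuming it mirrors what the Calico CNI plugin does (this is not the plugin's actual source): the calico/node container writes the host's node name to /var/lib/calico/nodename when it starts, and until that file exists every sandbox setup and teardown on the node fails fast with the error repeated throughout this log.

```go
// Sketch: fail fast when the nodename file written by calico/node is absent.
package main

import (
	"fmt"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename"

func nodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if os.IsNotExist(err) {
		// Reproduces the message seen in the log above.
		return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
	}
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := nodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("node:", name)
}
```

The errors stop once the calico-node container below starts and populates the file.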
May 13 00:28:10.396477 containerd[1440]: time="2025-05-13T00:28:10.396348072Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:10.397123 containerd[1440]: time="2025-05-13T00:28:10.397043598Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" May 13 00:28:10.397919 containerd[1440]: time="2025-05-13T00:28:10.397865005Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:10.400212 containerd[1440]: time="2025-05-13T00:28:10.400186945Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:10.400772 containerd[1440]: time="2025-05-13T00:28:10.400747110Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 4.133900297s" May 13 00:28:10.400837 containerd[1440]: time="2025-05-13T00:28:10.400779310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" May 13 00:28:10.409363 containerd[1440]: time="2025-05-13T00:28:10.409331065Z" level=info msg="CreateContainer within sandbox \"7aefe0f9bad83cd53db3f8592d35ed9869340201f894e69567ba1512b0e04273\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 13 00:28:10.424737 containerd[1440]: time="2025-05-13T00:28:10.424688678Z" level=info msg="CreateContainer within sandbox \"7aefe0f9bad83cd53db3f8592d35ed9869340201f894e69567ba1512b0e04273\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3c38585de0051b61ed99669bf4887fca63caf377f1e2c220645e4d29b4c6e51e\"" May 13 00:28:10.425500 containerd[1440]: time="2025-05-13T00:28:10.425462485Z" level=info msg="StartContainer for \"3c38585de0051b61ed99669bf4887fca63caf377f1e2c220645e4d29b4c6e51e\"" May 13 00:28:10.478572 systemd[1]: Started cri-containerd-3c38585de0051b61ed99669bf4887fca63caf377f1e2c220645e4d29b4c6e51e.scope - libcontainer container 3c38585de0051b61ed99669bf4887fca63caf377f1e2c220645e4d29b4c6e51e. May 13 00:28:10.502960 containerd[1440]: time="2025-05-13T00:28:10.502554795Z" level=info msg="StartContainer for \"3c38585de0051b61ed99669bf4887fca63caf377f1e2c220645e4d29b4c6e51e\" returns successfully" May 13 00:28:10.660791 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 13 00:28:10.660907 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
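Quick sanity arithmetic on the pull statistics above; the figures are taken from the log, while the MB/s number is derived here, not logged:

```go
// ~139 MB for ghcr.io/flatcar/calico/node:v3.29.3 in ~4.13 s works out to
// roughly 33-34 MB/s of effective pull bandwidth.
package main

import "fmt"

func main() {
	const bytesRead = 138981893 // "bytes read" reported by containerd above
	const seconds = 4.133900297 // pull duration from the "Pulled image" entry
	fmt.Printf("%.1f MB/s\n", bytesRead/seconds/1e6) // prints: 33.6 MB/s
}
```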
May 13 00:28:11.280454 kubelet[2479]: E0513 00:28:11.280421 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:28:11.295821 kubelet[2479]: I0513 00:28:11.295755 2479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-6kblv" podStartSLOduration=1.874630333 podStartE2EDuration="15.295739274s" podCreationTimestamp="2025-05-13 00:27:56 +0000 UTC" firstStartedPulling="2025-05-13 00:27:56.980400656 +0000 UTC m=+12.896327188" lastFinishedPulling="2025-05-13 00:28:10.401509597 +0000 UTC m=+26.317436129" observedRunningTime="2025-05-13 00:28:11.295464112 +0000 UTC m=+27.211390684" watchObservedRunningTime="2025-05-13 00:28:11.295739274 +0000 UTC m=+27.211665766" May 13 00:28:12.062446 kernel: bpftool[3859]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 13 00:28:12.238554 systemd-networkd[1383]: vxlan.calico: Link UP May 13 00:28:12.238562 systemd-networkd[1383]: vxlan.calico: Gained carrier May 13 00:28:12.282255 kubelet[2479]: E0513 00:28:12.282219 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:28:13.363303 systemd-networkd[1383]: vxlan.calico: Gained IPv6LL May 13 00:28:14.838694 systemd[1]: Started sshd@7-10.0.0.91:22-10.0.0.1:50716.service - OpenSSH per-connection server daemon (10.0.0.1:50716). May 13 00:28:14.889079 sshd[3985]: Accepted publickey for core from 10.0.0.1 port 50716 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:28:14.890612 sshd[3985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:28:14.894411 systemd-logind[1424]: New session 8 of user core. May 13 00:28:14.901566 systemd[1]: Started session-8.scope - Session 8 of User core. May 13 00:28:15.072946 sshd[3985]: pam_unix(sshd:session): session closed for user core May 13 00:28:15.078813 systemd[1]: sshd@7-10.0.0.91:22-10.0.0.1:50716.service: Deactivated successfully. May 13 00:28:15.080702 systemd[1]: session-8.scope: Deactivated successfully. May 13 00:28:15.082824 systemd-logind[1424]: Session 8 logged out. Waiting for processes to exit. May 13 00:28:15.083667 systemd-logind[1424]: Removed session 8. May 13 00:28:17.164576 containerd[1440]: time="2025-05-13T00:28:17.164119040Z" level=info msg="StopPodSandbox for \"9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4\"" May 13 00:28:17.410648 containerd[1440]: 2025-05-13 00:28:17.241 [INFO][4015] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4" May 13 00:28:17.410648 containerd[1440]: 2025-05-13 00:28:17.243 [INFO][4015] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4" iface="eth0" netns="/var/run/netns/cni-27bb875c-dc06-0c50-54a9-01acfd1f12a7" May 13 00:28:17.410648 containerd[1440]: 2025-05-13 00:28:17.244 [INFO][4015] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4" iface="eth0" netns="/var/run/netns/cni-27bb875c-dc06-0c50-54a9-01acfd1f12a7" May 13 00:28:17.410648 containerd[1440]: 2025-05-13 00:28:17.246 [INFO][4015] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4" iface="eth0" netns="/var/run/netns/cni-27bb875c-dc06-0c50-54a9-01acfd1f12a7" May 13 00:28:17.410648 containerd[1440]: 2025-05-13 00:28:17.246 [INFO][4015] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4" May 13 00:28:17.410648 containerd[1440]: 2025-05-13 00:28:17.246 [INFO][4015] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4" May 13 00:28:17.410648 containerd[1440]: 2025-05-13 00:28:17.394 [INFO][4023] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4" HandleID="k8s-pod-network.9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4" Workload="localhost-k8s-coredns--668d6bf9bc--s4q64-eth0" May 13 00:28:17.410648 containerd[1440]: 2025-05-13 00:28:17.395 [INFO][4023] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:28:17.410648 containerd[1440]: 2025-05-13 00:28:17.395 [INFO][4023] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:28:17.410648 containerd[1440]: 2025-05-13 00:28:17.405 [WARNING][4023] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4" HandleID="k8s-pod-network.9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4" Workload="localhost-k8s-coredns--668d6bf9bc--s4q64-eth0" May 13 00:28:17.410648 containerd[1440]: 2025-05-13 00:28:17.405 [INFO][4023] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4" HandleID="k8s-pod-network.9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4" Workload="localhost-k8s-coredns--668d6bf9bc--s4q64-eth0" May 13 00:28:17.410648 containerd[1440]: 2025-05-13 00:28:17.407 [INFO][4023] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:28:17.410648 containerd[1440]: 2025-05-13 00:28:17.409 [INFO][4015] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4" May 13 00:28:17.413016 containerd[1440]: time="2025-05-13T00:28:17.410760465Z" level=info msg="TearDown network for sandbox \"9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4\" successfully" May 13 00:28:17.413016 containerd[1440]: time="2025-05-13T00:28:17.410799266Z" level=info msg="StopPodSandbox for \"9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4\" returns successfully" May 13 00:28:17.413016 containerd[1440]: time="2025-05-13T00:28:17.411802912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s4q64,Uid:f3494a81-13f5-44da-afd1-f8752f281b7f,Namespace:kube-system,Attempt:1,}" May 13 00:28:17.413181 kubelet[2479]: E0513 00:28:17.411154 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:28:17.417972 systemd[1]: run-netns-cni\x2d27bb875c\x2ddc06\x2d0c50\x2d54a9\x2d01acfd1f12a7.mount: Deactivated successfully. 
May 13 00:28:17.561614 systemd-networkd[1383]: cali0bf40bf152d: Link UP May 13 00:28:17.561926 systemd-networkd[1383]: cali0bf40bf152d: Gained carrier May 13 00:28:17.578202 containerd[1440]: 2025-05-13 00:28:17.489 [INFO][4039] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--s4q64-eth0 coredns-668d6bf9bc- kube-system f3494a81-13f5-44da-afd1-f8752f281b7f 790 0 2025-05-13 00:27:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-s4q64 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0bf40bf152d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="bf9fb5e06866aaa71781df4bb0020e51399783f9f441e7061728c25317db6434" Namespace="kube-system" Pod="coredns-668d6bf9bc-s4q64" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s4q64-" May 13 00:28:17.578202 containerd[1440]: 2025-05-13 00:28:17.489 [INFO][4039] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bf9fb5e06866aaa71781df4bb0020e51399783f9f441e7061728c25317db6434" Namespace="kube-system" Pod="coredns-668d6bf9bc-s4q64" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s4q64-eth0" May 13 00:28:17.578202 containerd[1440]: 2025-05-13 00:28:17.520 [INFO][4053] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bf9fb5e06866aaa71781df4bb0020e51399783f9f441e7061728c25317db6434" HandleID="k8s-pod-network.bf9fb5e06866aaa71781df4bb0020e51399783f9f441e7061728c25317db6434" Workload="localhost-k8s-coredns--668d6bf9bc--s4q64-eth0" May 13 00:28:17.578202 containerd[1440]: 2025-05-13 00:28:17.533 [INFO][4053] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bf9fb5e06866aaa71781df4bb0020e51399783f9f441e7061728c25317db6434" HandleID="k8s-pod-network.bf9fb5e06866aaa71781df4bb0020e51399783f9f441e7061728c25317db6434" Workload="localhost-k8s-coredns--668d6bf9bc--s4q64-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000305770), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-s4q64", "timestamp":"2025-05-13 00:28:17.520197404 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:28:17.578202 containerd[1440]: 2025-05-13 00:28:17.533 [INFO][4053] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:28:17.578202 containerd[1440]: 2025-05-13 00:28:17.533 [INFO][4053] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 00:28:17.578202 containerd[1440]: 2025-05-13 00:28:17.533 [INFO][4053] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 00:28:17.578202 containerd[1440]: 2025-05-13 00:28:17.535 [INFO][4053] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bf9fb5e06866aaa71781df4bb0020e51399783f9f441e7061728c25317db6434" host="localhost" May 13 00:28:17.578202 containerd[1440]: 2025-05-13 00:28:17.540 [INFO][4053] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 00:28:17.578202 containerd[1440]: 2025-05-13 00:28:17.544 [INFO][4053] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 00:28:17.578202 containerd[1440]: 2025-05-13 00:28:17.546 [INFO][4053] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 00:28:17.578202 containerd[1440]: 2025-05-13 00:28:17.547 [INFO][4053] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 00:28:17.578202 containerd[1440]: 2025-05-13 00:28:17.547 [INFO][4053] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bf9fb5e06866aaa71781df4bb0020e51399783f9f441e7061728c25317db6434" host="localhost" May 13 00:28:17.578202 containerd[1440]: 2025-05-13 00:28:17.549 [INFO][4053] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bf9fb5e06866aaa71781df4bb0020e51399783f9f441e7061728c25317db6434 May 13 00:28:17.578202 containerd[1440]: 2025-05-13 00:28:17.552 [INFO][4053] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bf9fb5e06866aaa71781df4bb0020e51399783f9f441e7061728c25317db6434" host="localhost" May 13 00:28:17.578202 containerd[1440]: 2025-05-13 00:28:17.557 [INFO][4053] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.bf9fb5e06866aaa71781df4bb0020e51399783f9f441e7061728c25317db6434" host="localhost" May 13 00:28:17.578202 containerd[1440]: 2025-05-13 00:28:17.557 [INFO][4053] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.bf9fb5e06866aaa71781df4bb0020e51399783f9f441e7061728c25317db6434" host="localhost" May 13 00:28:17.578202 containerd[1440]: 2025-05-13 00:28:17.557 [INFO][4053] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
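A worked example of the IPAM trace above using Go's net/netip, with the values taken from the log: the host holds an affine block 192.168.88.128/26 (64 addresses), and the first workload address handed out is 192.168.88.129, since .128 is the block's own network address.

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	first := block.Addr().Next() // skip the block's network address
	fmt.Printf("block=%s first=%s contained=%v\n", block, first, block.Contains(first))
	// Output: block=192.168.88.128/26 first=192.168.88.129 contained=true
}
```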
May 13 00:28:17.578202 containerd[1440]: 2025-05-13 00:28:17.557 [INFO][4053] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="bf9fb5e06866aaa71781df4bb0020e51399783f9f441e7061728c25317db6434" HandleID="k8s-pod-network.bf9fb5e06866aaa71781df4bb0020e51399783f9f441e7061728c25317db6434" Workload="localhost-k8s-coredns--668d6bf9bc--s4q64-eth0" May 13 00:28:17.579518 containerd[1440]: 2025-05-13 00:28:17.559 [INFO][4039] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bf9fb5e06866aaa71781df4bb0020e51399783f9f441e7061728c25317db6434" Namespace="kube-system" Pod="coredns-668d6bf9bc-s4q64" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s4q64-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--s4q64-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f3494a81-13f5-44da-afd1-f8752f281b7f", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 27, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-s4q64", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0bf40bf152d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:28:17.579518 containerd[1440]: 2025-05-13 00:28:17.559 [INFO][4039] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="bf9fb5e06866aaa71781df4bb0020e51399783f9f441e7061728c25317db6434" Namespace="kube-system" Pod="coredns-668d6bf9bc-s4q64" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s4q64-eth0" May 13 00:28:17.579518 containerd[1440]: 2025-05-13 00:28:17.559 [INFO][4039] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0bf40bf152d ContainerID="bf9fb5e06866aaa71781df4bb0020e51399783f9f441e7061728c25317db6434" Namespace="kube-system" Pod="coredns-668d6bf9bc-s4q64" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s4q64-eth0" May 13 00:28:17.579518 containerd[1440]: 2025-05-13 00:28:17.561 [INFO][4039] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bf9fb5e06866aaa71781df4bb0020e51399783f9f441e7061728c25317db6434" Namespace="kube-system" Pod="coredns-668d6bf9bc-s4q64" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s4q64-eth0" May 13 00:28:17.579518 containerd[1440]: 2025-05-13 00:28:17.562
[INFO][4039] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="bf9fb5e06866aaa71781df4bb0020e51399783f9f441e7061728c25317db6434" Namespace="kube-system" Pod="coredns-668d6bf9bc-s4q64" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s4q64-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--s4q64-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f3494a81-13f5-44da-afd1-f8752f281b7f", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 27, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bf9fb5e06866aaa71781df4bb0020e51399783f9f441e7061728c25317db6434", Pod:"coredns-668d6bf9bc-s4q64", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0bf40bf152d", MAC:"86:26:a6:ac:df:a8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:28:17.579518 containerd[1440]: 2025-05-13 00:28:17.573 [INFO][4039] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bf9fb5e06866aaa71781df4bb0020e51399783f9f441e7061728c25317db6434" Namespace="kube-system" Pod="coredns-668d6bf9bc-s4q64" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s4q64-eth0" May 13 00:28:17.614047 containerd[1440]: time="2025-05-13T00:28:17.613910237Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:28:17.614047 containerd[1440]: time="2025-05-13T00:28:17.614015478Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:28:17.614047 containerd[1440]: time="2025-05-13T00:28:17.614028198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:28:17.614245 containerd[1440]: time="2025-05-13T00:28:17.614118039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:28:17.635631 systemd[1]: Started cri-containerd-bf9fb5e06866aaa71781df4bb0020e51399783f9f441e7061728c25317db6434.scope - libcontainer container bf9fb5e06866aaa71781df4bb0020e51399783f9f441e7061728c25317db6434.
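The WorkloadEndpoint dumps above print port numbers in hex. Decoding them (a trivial check added here, not part of the log) confirms they are coredns's usual ports:

```go
// 0x35 is 53 (dns and dns-tcp); 0x23c1 is 9153 (metrics).
package main

import "fmt"

func main() {
	ports := map[string]uint16{"dns": 0x35, "dns-tcp": 0x35, "metrics": 0x23c1}
	for name, p := range ports {
		fmt.Printf("%s -> %d\n", name, p)
	}
}
```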
May 13 00:28:17.646347 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:28:17.663583 containerd[1440]: time="2025-05-13T00:28:17.663538852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s4q64,Uid:f3494a81-13f5-44da-afd1-f8752f281b7f,Namespace:kube-system,Attempt:1,} returns sandbox id \"bf9fb5e06866aaa71781df4bb0020e51399783f9f441e7061728c25317db6434\"" May 13 00:28:17.664567 kubelet[2479]: E0513 00:28:17.664544 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:28:17.668400 containerd[1440]: time="2025-05-13T00:28:17.668201604Z" level=info msg="CreateContainer within sandbox \"bf9fb5e06866aaa71781df4bb0020e51399783f9f441e7061728c25317db6434\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 00:28:17.681751 containerd[1440]: time="2025-05-13T00:28:17.681632415Z" level=info msg="CreateContainer within sandbox \"bf9fb5e06866aaa71781df4bb0020e51399783f9f441e7061728c25317db6434\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4acfe7df309158b657cf990aea4bd43138f3915fca564df15ba0202d05d571f0\"" May 13 00:28:17.683518 containerd[1440]: time="2025-05-13T00:28:17.682214819Z" level=info msg="StartContainer for \"4acfe7df309158b657cf990aea4bd43138f3915fca564df15ba0202d05d571f0\"" May 13 00:28:17.709712 systemd[1]: Started cri-containerd-4acfe7df309158b657cf990aea4bd43138f3915fca564df15ba0202d05d571f0.scope - libcontainer container 4acfe7df309158b657cf990aea4bd43138f3915fca564df15ba0202d05d571f0. May 13 00:28:17.734568 containerd[1440]: time="2025-05-13T00:28:17.734519212Z" level=info msg="StartContainer for \"4acfe7df309158b657cf990aea4bd43138f3915fca564df15ba0202d05d571f0\" returns successfully" May 13 00:28:18.163561 containerd[1440]: time="2025-05-13T00:28:18.163496514Z" level=info msg="StopPodSandbox for \"70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8\"" May 13 00:28:18.163561 containerd[1440]: time="2025-05-13T00:28:18.163552435Z" level=info msg="StopPodSandbox for \"59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e\"" May 13 00:28:18.245454 containerd[1440]: 2025-05-13 00:28:18.209 [INFO][4185] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e" May 13 00:28:18.245454 containerd[1440]: 2025-05-13 00:28:18.209 [INFO][4185] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e" iface="eth0" netns="/var/run/netns/cni-264c4495-5183-e010-b3f9-8956243919c2" May 13 00:28:18.245454 containerd[1440]: 2025-05-13 00:28:18.210 [INFO][4185] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e" iface="eth0" netns="/var/run/netns/cni-264c4495-5183-e010-b3f9-8956243919c2" May 13 00:28:18.245454 containerd[1440]: 2025-05-13 00:28:18.210 [INFO][4185] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e" iface="eth0" netns="/var/run/netns/cni-264c4495-5183-e010-b3f9-8956243919c2" May 13 00:28:18.245454 containerd[1440]: 2025-05-13 00:28:18.210 [INFO][4185] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e" May 13 00:28:18.245454 containerd[1440]: 2025-05-13 00:28:18.210 [INFO][4185] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e" May 13 00:28:18.245454 containerd[1440]: 2025-05-13 00:28:18.229 [INFO][4201] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e" HandleID="k8s-pod-network.59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e" Workload="localhost-k8s-calico--apiserver--74cfd5766c--2dwxc-eth0" May 13 00:28:18.245454 containerd[1440]: 2025-05-13 00:28:18.229 [INFO][4201] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:28:18.245454 containerd[1440]: 2025-05-13 00:28:18.229 [INFO][4201] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:28:18.245454 containerd[1440]: 2025-05-13 00:28:18.238 [WARNING][4201] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e" HandleID="k8s-pod-network.59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e" Workload="localhost-k8s-calico--apiserver--74cfd5766c--2dwxc-eth0" May 13 00:28:18.245454 containerd[1440]: 2025-05-13 00:28:18.238 [INFO][4201] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e" HandleID="k8s-pod-network.59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e" Workload="localhost-k8s-calico--apiserver--74cfd5766c--2dwxc-eth0" May 13 00:28:18.245454 containerd[1440]: 2025-05-13 00:28:18.239 [INFO][4201] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:28:18.245454 containerd[1440]: 2025-05-13 00:28:18.241 [INFO][4185] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e" May 13 00:28:18.246051 containerd[1440]: time="2025-05-13T00:28:18.245830413Z" level=info msg="TearDown network for sandbox \"59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e\" successfully" May 13 00:28:18.246051 containerd[1440]: time="2025-05-13T00:28:18.245865053Z" level=info msg="StopPodSandbox for \"59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e\" returns successfully" May 13 00:28:18.246726 containerd[1440]: time="2025-05-13T00:28:18.246699539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74cfd5766c-2dwxc,Uid:6a95ccfc-05c7-4192-9854-3fc518d2c335,Namespace:calico-apiserver,Attempt:1,}" May 13 00:28:18.252823 containerd[1440]: 2025-05-13 00:28:18.210 [INFO][4184] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8" May 13 00:28:18.252823 containerd[1440]: 2025-05-13 00:28:18.210 [INFO][4184] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8" iface="eth0" netns="/var/run/netns/cni-88662c4d-439f-7266-b9fa-02b5ec82b934" May 13 00:28:18.252823 containerd[1440]: 2025-05-13 00:28:18.210 [INFO][4184] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8" iface="eth0" netns="/var/run/netns/cni-88662c4d-439f-7266-b9fa-02b5ec82b934" May 13 00:28:18.252823 containerd[1440]: 2025-05-13 00:28:18.210 [INFO][4184] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8" iface="eth0" netns="/var/run/netns/cni-88662c4d-439f-7266-b9fa-02b5ec82b934" May 13 00:28:18.252823 containerd[1440]: 2025-05-13 00:28:18.210 [INFO][4184] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8" May 13 00:28:18.252823 containerd[1440]: 2025-05-13 00:28:18.210 [INFO][4184] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8" May 13 00:28:18.252823 containerd[1440]: 2025-05-13 00:28:18.230 [INFO][4203] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8" HandleID="k8s-pod-network.70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8" Workload="localhost-k8s-calico--apiserver--74cfd5766c--hkfcg-eth0" May 13 00:28:18.252823 containerd[1440]: 2025-05-13 00:28:18.230 [INFO][4203] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:28:18.252823 containerd[1440]: 2025-05-13 00:28:18.239 [INFO][4203] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:28:18.252823 containerd[1440]: 2025-05-13 00:28:18.248 [WARNING][4203] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8" HandleID="k8s-pod-network.70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8" Workload="localhost-k8s-calico--apiserver--74cfd5766c--hkfcg-eth0" May 13 00:28:18.252823 containerd[1440]: 2025-05-13 00:28:18.248 [INFO][4203] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8" HandleID="k8s-pod-network.70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8" Workload="localhost-k8s-calico--apiserver--74cfd5766c--hkfcg-eth0" May 13 00:28:18.252823 containerd[1440]: 2025-05-13 00:28:18.249 [INFO][4203] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:28:18.252823 containerd[1440]: 2025-05-13 00:28:18.251 [INFO][4184] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8" May 13 00:28:18.253324 containerd[1440]: time="2025-05-13T00:28:18.253212261Z" level=info msg="TearDown network for sandbox \"70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8\" successfully" May 13 00:28:18.253324 containerd[1440]: time="2025-05-13T00:28:18.253243061Z" level=info msg="StopPodSandbox for \"70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8\" returns successfully" May 13 00:28:18.253836 containerd[1440]: time="2025-05-13T00:28:18.253793905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74cfd5766c-hkfcg,Uid:3294737e-8c5e-4292-96da-dff74f2e17e9,Namespace:calico-apiserver,Attempt:1,}" May 13 00:28:18.299834 kubelet[2479]: E0513 00:28:18.299504 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:28:18.322796 kubelet[2479]: I0513 00:28:18.317842 2479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-s4q64" podStartSLOduration=28.317822604 podStartE2EDuration="28.317822604s" podCreationTimestamp="2025-05-13 00:27:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:28:18.31728152 +0000 UTC m=+34.233208052" watchObservedRunningTime="2025-05-13 00:28:18.317822604 +0000 UTC m=+34.233749136" May 13 00:28:18.415736 systemd[1]: run-netns-cni\x2d88662c4d\x2d439f\x2d7266\x2db9fa\x2d02b5ec82b934.mount: Deactivated successfully. May 13 00:28:18.415946 systemd[1]: run-netns-cni\x2d264c4495\x2d5183\x2de010\x2db3f9\x2d8956243919c2.mount: Deactivated successfully. 
May 13 00:28:18.475513 systemd-networkd[1383]: calic77a10915b6: Link UP May 13 00:28:18.476039 systemd-networkd[1383]: calic77a10915b6: Gained carrier May 13 00:28:18.489579 containerd[1440]: 2025-05-13 00:28:18.303 [INFO][4230] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--74cfd5766c--hkfcg-eth0 calico-apiserver-74cfd5766c- calico-apiserver 3294737e-8c5e-4292-96da-dff74f2e17e9 807 0 2025-05-13 00:27:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:74cfd5766c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-74cfd5766c-hkfcg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic77a10915b6 [] []}} ContainerID="2c3cae66b9fb28d250178242baa3d7a5c2fe9629596b4a27139dd855d24dc404" Namespace="calico-apiserver" Pod="calico-apiserver-74cfd5766c-hkfcg" WorkloadEndpoint="localhost-k8s-calico--apiserver--74cfd5766c--hkfcg-" May 13 00:28:18.489579 containerd[1440]: 2025-05-13 00:28:18.303 [INFO][4230] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2c3cae66b9fb28d250178242baa3d7a5c2fe9629596b4a27139dd855d24dc404" Namespace="calico-apiserver" Pod="calico-apiserver-74cfd5766c-hkfcg" WorkloadEndpoint="localhost-k8s-calico--apiserver--74cfd5766c--hkfcg-eth0" May 13 00:28:18.489579 containerd[1440]: 2025-05-13 00:28:18.336 [INFO][4248] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2c3cae66b9fb28d250178242baa3d7a5c2fe9629596b4a27139dd855d24dc404" HandleID="k8s-pod-network.2c3cae66b9fb28d250178242baa3d7a5c2fe9629596b4a27139dd855d24dc404" Workload="localhost-k8s-calico--apiserver--74cfd5766c--hkfcg-eth0" May 13 00:28:18.489579 containerd[1440]: 2025-05-13 00:28:18.351 [INFO][4248] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2c3cae66b9fb28d250178242baa3d7a5c2fe9629596b4a27139dd855d24dc404" HandleID="k8s-pod-network.2c3cae66b9fb28d250178242baa3d7a5c2fe9629596b4a27139dd855d24dc404" Workload="localhost-k8s-calico--apiserver--74cfd5766c--hkfcg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400011aa90), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-74cfd5766c-hkfcg", "timestamp":"2025-05-13 00:28:18.336418285 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:28:18.489579 containerd[1440]: 2025-05-13 00:28:18.351 [INFO][4248] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:28:18.489579 containerd[1440]: 2025-05-13 00:28:18.351 [INFO][4248] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 00:28:18.489579 containerd[1440]: 2025-05-13 00:28:18.351 [INFO][4248] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 00:28:18.489579 containerd[1440]: 2025-05-13 00:28:18.355 [INFO][4248] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2c3cae66b9fb28d250178242baa3d7a5c2fe9629596b4a27139dd855d24dc404" host="localhost" May 13 00:28:18.489579 containerd[1440]: 2025-05-13 00:28:18.449 [INFO][4248] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 00:28:18.489579 containerd[1440]: 2025-05-13 00:28:18.454 [INFO][4248] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 00:28:18.489579 containerd[1440]: 2025-05-13 00:28:18.456 [INFO][4248] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 00:28:18.489579 containerd[1440]: 2025-05-13 00:28:18.458 [INFO][4248] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 00:28:18.489579 containerd[1440]: 2025-05-13 00:28:18.458 [INFO][4248] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2c3cae66b9fb28d250178242baa3d7a5c2fe9629596b4a27139dd855d24dc404" host="localhost" May 13 00:28:18.489579 containerd[1440]: 2025-05-13 00:28:18.460 [INFO][4248] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2c3cae66b9fb28d250178242baa3d7a5c2fe9629596b4a27139dd855d24dc404 May 13 00:28:18.489579 containerd[1440]: 2025-05-13 00:28:18.464 [INFO][4248] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2c3cae66b9fb28d250178242baa3d7a5c2fe9629596b4a27139dd855d24dc404" host="localhost" May 13 00:28:18.489579 containerd[1440]: 2025-05-13 00:28:18.469 [INFO][4248] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.2c3cae66b9fb28d250178242baa3d7a5c2fe9629596b4a27139dd855d24dc404" host="localhost" May 13 00:28:18.489579 containerd[1440]: 2025-05-13 00:28:18.469 [INFO][4248] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.2c3cae66b9fb28d250178242baa3d7a5c2fe9629596b4a27139dd855d24dc404" host="localhost" May 13 00:28:18.489579 containerd[1440]: 2025-05-13 00:28:18.469 [INFO][4248] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 00:28:18.489579 containerd[1440]: 2025-05-13 00:28:18.470 [INFO][4248] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="2c3cae66b9fb28d250178242baa3d7a5c2fe9629596b4a27139dd855d24dc404" HandleID="k8s-pod-network.2c3cae66b9fb28d250178242baa3d7a5c2fe9629596b4a27139dd855d24dc404" Workload="localhost-k8s-calico--apiserver--74cfd5766c--hkfcg-eth0" May 13 00:28:18.490366 containerd[1440]: 2025-05-13 00:28:18.471 [INFO][4230] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2c3cae66b9fb28d250178242baa3d7a5c2fe9629596b4a27139dd855d24dc404" Namespace="calico-apiserver" Pod="calico-apiserver-74cfd5766c-hkfcg" WorkloadEndpoint="localhost-k8s-calico--apiserver--74cfd5766c--hkfcg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--74cfd5766c--hkfcg-eth0", GenerateName:"calico-apiserver-74cfd5766c-", Namespace:"calico-apiserver", SelfLink:"", UID:"3294737e-8c5e-4292-96da-dff74f2e17e9", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 27, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74cfd5766c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-74cfd5766c-hkfcg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic77a10915b6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:28:18.490366 containerd[1440]: 2025-05-13 00:28:18.471 [INFO][4230] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="2c3cae66b9fb28d250178242baa3d7a5c2fe9629596b4a27139dd855d24dc404" Namespace="calico-apiserver" Pod="calico-apiserver-74cfd5766c-hkfcg" WorkloadEndpoint="localhost-k8s-calico--apiserver--74cfd5766c--hkfcg-eth0" May 13 00:28:18.490366 containerd[1440]: 2025-05-13 00:28:18.471 [INFO][4230] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic77a10915b6 ContainerID="2c3cae66b9fb28d250178242baa3d7a5c2fe9629596b4a27139dd855d24dc404" Namespace="calico-apiserver" Pod="calico-apiserver-74cfd5766c-hkfcg" WorkloadEndpoint="localhost-k8s-calico--apiserver--74cfd5766c--hkfcg-eth0" May 13 00:28:18.490366 containerd[1440]: 2025-05-13 00:28:18.474 [INFO][4230] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2c3cae66b9fb28d250178242baa3d7a5c2fe9629596b4a27139dd855d24dc404" Namespace="calico-apiserver" Pod="calico-apiserver-74cfd5766c-hkfcg" WorkloadEndpoint="localhost-k8s-calico--apiserver--74cfd5766c--hkfcg-eth0" May 13 00:28:18.490366 containerd[1440]: 2025-05-13 00:28:18.474 [INFO][4230] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="2c3cae66b9fb28d250178242baa3d7a5c2fe9629596b4a27139dd855d24dc404" Namespace="calico-apiserver" Pod="calico-apiserver-74cfd5766c-hkfcg" WorkloadEndpoint="localhost-k8s-calico--apiserver--74cfd5766c--hkfcg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--74cfd5766c--hkfcg-eth0", GenerateName:"calico-apiserver-74cfd5766c-", Namespace:"calico-apiserver", SelfLink:"", UID:"3294737e-8c5e-4292-96da-dff74f2e17e9", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 27, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74cfd5766c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2c3cae66b9fb28d250178242baa3d7a5c2fe9629596b4a27139dd855d24dc404", Pod:"calico-apiserver-74cfd5766c-hkfcg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic77a10915b6", MAC:"32:f0:ed:a6:bd:ee", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:28:18.490366 containerd[1440]: 2025-05-13 00:28:18.486 [INFO][4230] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2c3cae66b9fb28d250178242baa3d7a5c2fe9629596b4a27139dd855d24dc404" Namespace="calico-apiserver" Pod="calico-apiserver-74cfd5766c-hkfcg" WorkloadEndpoint="localhost-k8s-calico--apiserver--74cfd5766c--hkfcg-eth0" May 13 00:28:18.507455 containerd[1440]: time="2025-05-13T00:28:18.507346683Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:28:18.507590 containerd[1440]: time="2025-05-13T00:28:18.507448044Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:28:18.507839 containerd[1440]: time="2025-05-13T00:28:18.507793846Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:28:18.507922 containerd[1440]: time="2025-05-13T00:28:18.507892367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:28:18.521259 systemd[1]: run-containerd-runc-k8s.io-2c3cae66b9fb28d250178242baa3d7a5c2fe9629596b4a27139dd855d24dc404-runc.g6ojI0.mount: Deactivated successfully. May 13 00:28:18.539637 systemd[1]: Started cri-containerd-2c3cae66b9fb28d250178242baa3d7a5c2fe9629596b4a27139dd855d24dc404.scope - libcontainer container 2c3cae66b9fb28d250178242baa3d7a5c2fe9629596b4a27139dd855d24dc404. 
May 13 00:28:18.552220 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:28:18.572997 containerd[1440]: time="2025-05-13T00:28:18.572952233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74cfd5766c-hkfcg,Uid:3294737e-8c5e-4292-96da-dff74f2e17e9,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"2c3cae66b9fb28d250178242baa3d7a5c2fe9629596b4a27139dd855d24dc404\"" May 13 00:28:18.574960 containerd[1440]: time="2025-05-13T00:28:18.574924685Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 13 00:28:18.580036 systemd-networkd[1383]: cali0a4ba22c273: Link UP May 13 00:28:18.580207 systemd-networkd[1383]: cali0a4ba22c273: Gained carrier May 13 00:28:18.597818 containerd[1440]: 2025-05-13 00:28:18.301 [INFO][4219] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--74cfd5766c--2dwxc-eth0 calico-apiserver-74cfd5766c- calico-apiserver 6a95ccfc-05c7-4192-9854-3fc518d2c335 806 0 2025-05-13 00:27:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:74cfd5766c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-74cfd5766c-2dwxc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0a4ba22c273 [] []}} ContainerID="17dce839f4c7ead795d1cc4e0881c6a6b35aaa092a6058e9f354df6f09c9a166" Namespace="calico-apiserver" Pod="calico-apiserver-74cfd5766c-2dwxc" WorkloadEndpoint="localhost-k8s-calico--apiserver--74cfd5766c--2dwxc-" May 13 00:28:18.597818 containerd[1440]: 2025-05-13 00:28:18.303 [INFO][4219] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="17dce839f4c7ead795d1cc4e0881c6a6b35aaa092a6058e9f354df6f09c9a166" Namespace="calico-apiserver" Pod="calico-apiserver-74cfd5766c-2dwxc" WorkloadEndpoint="localhost-k8s-calico--apiserver--74cfd5766c--2dwxc-eth0" May 13 00:28:18.597818 containerd[1440]: 2025-05-13 00:28:18.361 [INFO][4254] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="17dce839f4c7ead795d1cc4e0881c6a6b35aaa092a6058e9f354df6f09c9a166" HandleID="k8s-pod-network.17dce839f4c7ead795d1cc4e0881c6a6b35aaa092a6058e9f354df6f09c9a166" Workload="localhost-k8s-calico--apiserver--74cfd5766c--2dwxc-eth0" May 13 00:28:18.597818 containerd[1440]: 2025-05-13 00:28:18.451 [INFO][4254] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="17dce839f4c7ead795d1cc4e0881c6a6b35aaa092a6058e9f354df6f09c9a166" HandleID="k8s-pod-network.17dce839f4c7ead795d1cc4e0881c6a6b35aaa092a6058e9f354df6f09c9a166" Workload="localhost-k8s-calico--apiserver--74cfd5766c--2dwxc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000373e20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-74cfd5766c-2dwxc", "timestamp":"2025-05-13 00:28:18.36163121 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:28:18.597818 containerd[1440]: 2025-05-13 00:28:18.451 [INFO][4254] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 13 00:28:18.597818 containerd[1440]: 2025-05-13 00:28:18.470 [INFO][4254] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:28:18.597818 containerd[1440]: 2025-05-13 00:28:18.470 [INFO][4254] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 00:28:18.597818 containerd[1440]: 2025-05-13 00:28:18.473 [INFO][4254] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.17dce839f4c7ead795d1cc4e0881c6a6b35aaa092a6058e9f354df6f09c9a166" host="localhost" May 13 00:28:18.597818 containerd[1440]: 2025-05-13 00:28:18.551 [INFO][4254] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 00:28:18.597818 containerd[1440]: 2025-05-13 00:28:18.556 [INFO][4254] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 00:28:18.597818 containerd[1440]: 2025-05-13 00:28:18.558 [INFO][4254] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 00:28:18.597818 containerd[1440]: 2025-05-13 00:28:18.560 [INFO][4254] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 00:28:18.597818 containerd[1440]: 2025-05-13 00:28:18.560 [INFO][4254] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.17dce839f4c7ead795d1cc4e0881c6a6b35aaa092a6058e9f354df6f09c9a166" host="localhost" May 13 00:28:18.597818 containerd[1440]: 2025-05-13 00:28:18.563 [INFO][4254] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.17dce839f4c7ead795d1cc4e0881c6a6b35aaa092a6058e9f354df6f09c9a166 May 13 00:28:18.597818 containerd[1440]: 2025-05-13 00:28:18.566 [INFO][4254] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.17dce839f4c7ead795d1cc4e0881c6a6b35aaa092a6058e9f354df6f09c9a166" host="localhost" May 13 00:28:18.597818 containerd[1440]: 2025-05-13 00:28:18.575 [INFO][4254] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.17dce839f4c7ead795d1cc4e0881c6a6b35aaa092a6058e9f354df6f09c9a166" host="localhost" May 13 00:28:18.597818 containerd[1440]: 2025-05-13 00:28:18.575 [INFO][4254] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.17dce839f4c7ead795d1cc4e0881c6a6b35aaa092a6058e9f354df6f09c9a166" host="localhost" May 13 00:28:18.597818 containerd[1440]: 2025-05-13 00:28:18.575 [INFO][4254] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 00:28:18.597818 containerd[1440]: 2025-05-13 00:28:18.575 [INFO][4254] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="17dce839f4c7ead795d1cc4e0881c6a6b35aaa092a6058e9f354df6f09c9a166" HandleID="k8s-pod-network.17dce839f4c7ead795d1cc4e0881c6a6b35aaa092a6058e9f354df6f09c9a166" Workload="localhost-k8s-calico--apiserver--74cfd5766c--2dwxc-eth0" May 13 00:28:18.598425 containerd[1440]: 2025-05-13 00:28:18.577 [INFO][4219] cni-plugin/k8s.go 386: Populated endpoint ContainerID="17dce839f4c7ead795d1cc4e0881c6a6b35aaa092a6058e9f354df6f09c9a166" Namespace="calico-apiserver" Pod="calico-apiserver-74cfd5766c-2dwxc" WorkloadEndpoint="localhost-k8s-calico--apiserver--74cfd5766c--2dwxc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--74cfd5766c--2dwxc-eth0", GenerateName:"calico-apiserver-74cfd5766c-", Namespace:"calico-apiserver", SelfLink:"", UID:"6a95ccfc-05c7-4192-9854-3fc518d2c335", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 27, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74cfd5766c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-74cfd5766c-2dwxc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0a4ba22c273", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:28:18.598425 containerd[1440]: 2025-05-13 00:28:18.578 [INFO][4219] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="17dce839f4c7ead795d1cc4e0881c6a6b35aaa092a6058e9f354df6f09c9a166" Namespace="calico-apiserver" Pod="calico-apiserver-74cfd5766c-2dwxc" WorkloadEndpoint="localhost-k8s-calico--apiserver--74cfd5766c--2dwxc-eth0" May 13 00:28:18.598425 containerd[1440]: 2025-05-13 00:28:18.578 [INFO][4219] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0a4ba22c273 ContainerID="17dce839f4c7ead795d1cc4e0881c6a6b35aaa092a6058e9f354df6f09c9a166" Namespace="calico-apiserver" Pod="calico-apiserver-74cfd5766c-2dwxc" WorkloadEndpoint="localhost-k8s-calico--apiserver--74cfd5766c--2dwxc-eth0" May 13 00:28:18.598425 containerd[1440]: 2025-05-13 00:28:18.580 [INFO][4219] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="17dce839f4c7ead795d1cc4e0881c6a6b35aaa092a6058e9f354df6f09c9a166" Namespace="calico-apiserver" Pod="calico-apiserver-74cfd5766c-2dwxc" WorkloadEndpoint="localhost-k8s-calico--apiserver--74cfd5766c--2dwxc-eth0" May 13 00:28:18.598425 containerd[1440]: 2025-05-13 00:28:18.580 [INFO][4219] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="17dce839f4c7ead795d1cc4e0881c6a6b35aaa092a6058e9f354df6f09c9a166" Namespace="calico-apiserver" Pod="calico-apiserver-74cfd5766c-2dwxc" WorkloadEndpoint="localhost-k8s-calico--apiserver--74cfd5766c--2dwxc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--74cfd5766c--2dwxc-eth0", GenerateName:"calico-apiserver-74cfd5766c-", Namespace:"calico-apiserver", SelfLink:"", UID:"6a95ccfc-05c7-4192-9854-3fc518d2c335", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 27, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74cfd5766c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"17dce839f4c7ead795d1cc4e0881c6a6b35aaa092a6058e9f354df6f09c9a166", Pod:"calico-apiserver-74cfd5766c-2dwxc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0a4ba22c273", MAC:"ee:da:ec:d3:2b:4f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:28:18.598425 containerd[1440]: 2025-05-13 00:28:18.595 [INFO][4219] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="17dce839f4c7ead795d1cc4e0881c6a6b35aaa092a6058e9f354df6f09c9a166" Namespace="calico-apiserver" Pod="calico-apiserver-74cfd5766c-2dwxc" WorkloadEndpoint="localhost-k8s-calico--apiserver--74cfd5766c--2dwxc-eth0" May 13 00:28:18.615114 containerd[1440]: time="2025-05-13T00:28:18.615021868Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:28:18.615114 containerd[1440]: time="2025-05-13T00:28:18.615088108Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:28:18.615114 containerd[1440]: time="2025-05-13T00:28:18.615102468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:28:18.615359 containerd[1440]: time="2025-05-13T00:28:18.615181669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:28:18.635568 systemd[1]: Started cri-containerd-17dce839f4c7ead795d1cc4e0881c6a6b35aaa092a6058e9f354df6f09c9a166.scope - libcontainer container 17dce839f4c7ead795d1cc4e0881c6a6b35aaa092a6058e9f354df6f09c9a166. 
May 13 00:28:18.647400 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:28:18.664757 containerd[1440]: time="2025-05-13T00:28:18.664645232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74cfd5766c-2dwxc,Uid:6a95ccfc-05c7-4192-9854-3fc518d2c335,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"17dce839f4c7ead795d1cc4e0881c6a6b35aaa092a6058e9f354df6f09c9a166\"" May 13 00:28:19.304920 kubelet[2479]: E0513 00:28:19.304809 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:28:19.436641 systemd-networkd[1383]: cali0bf40bf152d: Gained IPv6LL May 13 00:28:19.885509 systemd-networkd[1383]: calic77a10915b6: Gained IPv6LL May 13 00:28:20.087114 systemd[1]: Started sshd@8-10.0.0.91:22-10.0.0.1:50728.service - OpenSSH per-connection server daemon (10.0.0.1:50728). May 13 00:28:20.153725 sshd[4381]: Accepted publickey for core from 10.0.0.1 port 50728 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:28:20.155656 sshd[4381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:28:20.160482 systemd-logind[1424]: New session 9 of user core. May 13 00:28:20.165532 containerd[1440]: time="2025-05-13T00:28:20.165369506Z" level=info msg="StopPodSandbox for \"754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86\"" May 13 00:28:20.167205 containerd[1440]: time="2025-05-13T00:28:20.165389426Z" level=info msg="StopPodSandbox for \"1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c\"" May 13 00:28:20.167141 systemd[1]: Started session-9.scope - Session 9 of User core. 
May 13 00:28:20.268589 systemd-networkd[1383]: cali0a4ba22c273: Gained IPv6LL May 13 00:28:20.271439 containerd[1440]: time="2025-05-13T00:28:20.270858515Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:20.271685 containerd[1440]: time="2025-05-13T00:28:20.271648560Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" May 13 00:28:20.272347 containerd[1440]: time="2025-05-13T00:28:20.272278844Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:20.274761 containerd[1440]: time="2025-05-13T00:28:20.274535018Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:20.276541 containerd[1440]: time="2025-05-13T00:28:20.276432429Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 1.701471143s" May 13 00:28:20.276541 containerd[1440]: time="2025-05-13T00:28:20.276476070Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 13 00:28:20.279367 containerd[1440]: time="2025-05-13T00:28:20.279215286Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 13 00:28:20.282578 containerd[1440]: time="2025-05-13T00:28:20.282538907Z" level=info msg="CreateContainer within sandbox \"2c3cae66b9fb28d250178242baa3d7a5c2fe9629596b4a27139dd855d24dc404\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 00:28:20.293273 containerd[1440]: 2025-05-13 00:28:20.236 [INFO][4416] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c" May 13 00:28:20.293273 containerd[1440]: 2025-05-13 00:28:20.236 [INFO][4416] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c" iface="eth0" netns="/var/run/netns/cni-463f0d4c-1261-29ae-a596-e3003ff9cfdb" May 13 00:28:20.293273 containerd[1440]: 2025-05-13 00:28:20.237 [INFO][4416] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c" iface="eth0" netns="/var/run/netns/cni-463f0d4c-1261-29ae-a596-e3003ff9cfdb" May 13 00:28:20.293273 containerd[1440]: 2025-05-13 00:28:20.237 [INFO][4416] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c" iface="eth0" netns="/var/run/netns/cni-463f0d4c-1261-29ae-a596-e3003ff9cfdb" May 13 00:28:20.293273 containerd[1440]: 2025-05-13 00:28:20.237 [INFO][4416] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c" May 13 00:28:20.293273 containerd[1440]: 2025-05-13 00:28:20.237 [INFO][4416] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c" May 13 00:28:20.293273 containerd[1440]: 2025-05-13 00:28:20.266 [INFO][4438] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c" HandleID="k8s-pod-network.1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c" Workload="localhost-k8s-coredns--668d6bf9bc--x58pd-eth0" May 13 00:28:20.293273 containerd[1440]: 2025-05-13 00:28:20.266 [INFO][4438] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:28:20.293273 containerd[1440]: 2025-05-13 00:28:20.266 [INFO][4438] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:28:20.293273 containerd[1440]: 2025-05-13 00:28:20.278 [WARNING][4438] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c" HandleID="k8s-pod-network.1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c" Workload="localhost-k8s-coredns--668d6bf9bc--x58pd-eth0" May 13 00:28:20.293273 containerd[1440]: 2025-05-13 00:28:20.278 [INFO][4438] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c" HandleID="k8s-pod-network.1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c" Workload="localhost-k8s-coredns--668d6bf9bc--x58pd-eth0" May 13 00:28:20.293273 containerd[1440]: 2025-05-13 00:28:20.282 [INFO][4438] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:28:20.293273 containerd[1440]: 2025-05-13 00:28:20.287 [INFO][4416] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c" May 13 00:28:20.294037 containerd[1440]: time="2025-05-13T00:28:20.293722856Z" level=info msg="TearDown network for sandbox \"1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c\" successfully" May 13 00:28:20.294037 containerd[1440]: time="2025-05-13T00:28:20.293751536Z" level=info msg="StopPodSandbox for \"1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c\" returns successfully" May 13 00:28:20.295864 kubelet[2479]: E0513 00:28:20.294809 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:28:20.297486 systemd[1]: run-netns-cni\x2d463f0d4c\x2d1261\x2d29ae\x2da596\x2de3003ff9cfdb.mount: Deactivated successfully. 
May 13 00:28:20.298714 containerd[1440]: time="2025-05-13T00:28:20.298485085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-x58pd,Uid:4cfd3c97-e6c2-46da-a620-1303e1ac26b6,Namespace:kube-system,Attempt:1,}" May 13 00:28:20.307603 containerd[1440]: time="2025-05-13T00:28:20.307560061Z" level=info msg="CreateContainer within sandbox \"2c3cae66b9fb28d250178242baa3d7a5c2fe9629596b4a27139dd855d24dc404\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8a0dd0dc581e1eb7480c724642d53ae85a861929243a1403157ff5146e328e40\"" May 13 00:28:20.307956 containerd[1440]: time="2025-05-13T00:28:20.307932863Z" level=info msg="StartContainer for \"8a0dd0dc581e1eb7480c724642d53ae85a861929243a1403157ff5146e328e40\"" May 13 00:28:20.308026 kubelet[2479]: E0513 00:28:20.307949 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:28:20.310528 containerd[1440]: 2025-05-13 00:28:20.229 [INFO][4417] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86" May 13 00:28:20.310528 containerd[1440]: 2025-05-13 00:28:20.229 [INFO][4417] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86" iface="eth0" netns="/var/run/netns/cni-2191d002-06fb-614a-c5f9-fd180eec10e8" May 13 00:28:20.310528 containerd[1440]: 2025-05-13 00:28:20.230 [INFO][4417] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86" iface="eth0" netns="/var/run/netns/cni-2191d002-06fb-614a-c5f9-fd180eec10e8" May 13 00:28:20.310528 containerd[1440]: 2025-05-13 00:28:20.230 [INFO][4417] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86" iface="eth0" netns="/var/run/netns/cni-2191d002-06fb-614a-c5f9-fd180eec10e8" May 13 00:28:20.310528 containerd[1440]: 2025-05-13 00:28:20.230 [INFO][4417] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86" May 13 00:28:20.310528 containerd[1440]: 2025-05-13 00:28:20.230 [INFO][4417] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86" May 13 00:28:20.310528 containerd[1440]: 2025-05-13 00:28:20.279 [INFO][4436] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86" HandleID="k8s-pod-network.754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86" Workload="localhost-k8s-csi--node--driver--bgptb-eth0" May 13 00:28:20.310528 containerd[1440]: 2025-05-13 00:28:20.279 [INFO][4436] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:28:20.310528 containerd[1440]: 2025-05-13 00:28:20.281 [INFO][4436] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:28:20.310528 containerd[1440]: 2025-05-13 00:28:20.294 [WARNING][4436] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86" HandleID="k8s-pod-network.754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86" Workload="localhost-k8s-csi--node--driver--bgptb-eth0" May 13 00:28:20.310528 containerd[1440]: 2025-05-13 00:28:20.294 [INFO][4436] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86" HandleID="k8s-pod-network.754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86" Workload="localhost-k8s-csi--node--driver--bgptb-eth0" May 13 00:28:20.310528 containerd[1440]: 2025-05-13 00:28:20.301 [INFO][4436] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:28:20.310528 containerd[1440]: 2025-05-13 00:28:20.304 [INFO][4417] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86" May 13 00:28:20.310528 containerd[1440]: time="2025-05-13T00:28:20.310387918Z" level=info msg="TearDown network for sandbox \"754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86\" successfully" May 13 00:28:20.310528 containerd[1440]: time="2025-05-13T00:28:20.310421838Z" level=info msg="StopPodSandbox for \"754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86\" returns successfully" May 13 00:28:20.312463 containerd[1440]: time="2025-05-13T00:28:20.311099403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bgptb,Uid:5e18653e-7d13-4d2f-8b0d-991f11e13bcd,Namespace:calico-system,Attempt:1,}" May 13 00:28:20.313893 systemd[1]: run-netns-cni\x2d2191d002\x2d06fb\x2d614a\x2dc5f9\x2dfd180eec10e8.mount: Deactivated successfully. May 13 00:28:20.367608 systemd[1]: Started cri-containerd-8a0dd0dc581e1eb7480c724642d53ae85a861929243a1403157ff5146e328e40.scope - libcontainer container 8a0dd0dc581e1eb7480c724642d53ae85a861929243a1403157ff5146e328e40. May 13 00:28:20.448300 containerd[1440]: time="2025-05-13T00:28:20.448068926Z" level=info msg="StartContainer for \"8a0dd0dc581e1eb7480c724642d53ae85a861929243a1403157ff5146e328e40\" returns successfully" May 13 00:28:20.476559 sshd[4381]: pam_unix(sshd:session): session closed for user core May 13 00:28:20.482565 systemd-logind[1424]: Session 9 logged out. Waiting for processes to exit. May 13 00:28:20.484640 systemd[1]: sshd@8-10.0.0.91:22-10.0.0.1:50728.service: Deactivated successfully. May 13 00:28:20.489043 systemd[1]: session-9.scope: Deactivated successfully. May 13 00:28:20.490383 systemd-logind[1424]: Removed session 9. 
May 13 00:28:20.500133 containerd[1440]: time="2025-05-13T00:28:20.500083886Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:20.500735 containerd[1440]: time="2025-05-13T00:28:20.500686810Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 13 00:28:20.502860 containerd[1440]: time="2025-05-13T00:28:20.502821663Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 223.568696ms" May 13 00:28:20.502928 containerd[1440]: time="2025-05-13T00:28:20.502858503Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 13 00:28:20.506354 containerd[1440]: time="2025-05-13T00:28:20.506308204Z" level=info msg="CreateContainer within sandbox \"17dce839f4c7ead795d1cc4e0881c6a6b35aaa092a6058e9f354df6f09c9a166\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 00:28:20.517343 containerd[1440]: time="2025-05-13T00:28:20.517300512Z" level=info msg="CreateContainer within sandbox \"17dce839f4c7ead795d1cc4e0881c6a6b35aaa092a6058e9f354df6f09c9a166\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"bc893b014874f559d612d10595478773d26562e67dbe81a3df3d8290db239e1e\"" May 13 00:28:20.518004 containerd[1440]: time="2025-05-13T00:28:20.517943236Z" level=info msg="StartContainer for \"bc893b014874f559d612d10595478773d26562e67dbe81a3df3d8290db239e1e\"" May 13 00:28:20.558597 systemd[1]: Started cri-containerd-bc893b014874f559d612d10595478773d26562e67dbe81a3df3d8290db239e1e.scope - libcontainer container bc893b014874f559d612d10595478773d26562e67dbe81a3df3d8290db239e1e. 
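Note the pull timings: the first apiserver image pull (logged earlier) took 1.701471143s, while this one returns the same image ID in 223.568696ms with only 77 bytes read, because every blob is already in containerd's content store and only the manifest is re-resolved. The same pull can be reproduced against containerd directly; a minimal sketch using the containerd Go client, assuming the stock socket path and kubelet's "k8s.io" namespace:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

// Hedged sketch of the PullImage requests logged above, driven through
// the containerd client instead of kubelet's CRI path. A second pull of
// an image whose blobs already sit in the content store only re-resolves
// the manifest, which is why the log shows 1.7s then ~224ms.
func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io") // the namespace kubelet uses
	start := time.Now()
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/apiserver:v3.29.3", containerd.WithPullUnpack)
	if err != nil {
		panic(err)
	}
	fmt.Println(img.Name(), "pulled in", time.Since(start))
}
```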
May 13 00:28:20.579260 systemd-networkd[1383]: cali804dc27fbc7: Link UP May 13 00:28:20.580032 systemd-networkd[1383]: cali804dc27fbc7: Gained carrier May 13 00:28:20.601846 containerd[1440]: 2025-05-13 00:28:20.379 [INFO][4483] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--bgptb-eth0 csi-node-driver- calico-system 5e18653e-7d13-4d2f-8b0d-991f11e13bcd 844 0 2025-05-13 00:27:56 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5b5cc68cd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-bgptb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali804dc27fbc7 [] []}} ContainerID="ff419a29b054aa4a565b46c8c9ac387f946ee244b963e0f429571fd985b9efa1" Namespace="calico-system" Pod="csi-node-driver-bgptb" WorkloadEndpoint="localhost-k8s-csi--node--driver--bgptb-" May 13 00:28:20.601846 containerd[1440]: 2025-05-13 00:28:20.379 [INFO][4483] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ff419a29b054aa4a565b46c8c9ac387f946ee244b963e0f429571fd985b9efa1" Namespace="calico-system" Pod="csi-node-driver-bgptb" WorkloadEndpoint="localhost-k8s-csi--node--driver--bgptb-eth0" May 13 00:28:20.601846 containerd[1440]: 2025-05-13 00:28:20.426 [INFO][4523] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ff419a29b054aa4a565b46c8c9ac387f946ee244b963e0f429571fd985b9efa1" HandleID="k8s-pod-network.ff419a29b054aa4a565b46c8c9ac387f946ee244b963e0f429571fd985b9efa1" Workload="localhost-k8s-csi--node--driver--bgptb-eth0" May 13 00:28:20.601846 containerd[1440]: 2025-05-13 00:28:20.449 [INFO][4523] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ff419a29b054aa4a565b46c8c9ac387f946ee244b963e0f429571fd985b9efa1" HandleID="k8s-pod-network.ff419a29b054aa4a565b46c8c9ac387f946ee244b963e0f429571fd985b9efa1" Workload="localhost-k8s-csi--node--driver--bgptb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400026df50), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-bgptb", "timestamp":"2025-05-13 00:28:20.426635754 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:28:20.601846 containerd[1440]: 2025-05-13 00:28:20.450 [INFO][4523] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:28:20.601846 containerd[1440]: 2025-05-13 00:28:20.450 [INFO][4523] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 00:28:20.601846 containerd[1440]: 2025-05-13 00:28:20.450 [INFO][4523] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 00:28:20.601846 containerd[1440]: 2025-05-13 00:28:20.455 [INFO][4523] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ff419a29b054aa4a565b46c8c9ac387f946ee244b963e0f429571fd985b9efa1" host="localhost" May 13 00:28:20.601846 containerd[1440]: 2025-05-13 00:28:20.542 [INFO][4523] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 00:28:20.601846 containerd[1440]: 2025-05-13 00:28:20.552 [INFO][4523] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 00:28:20.601846 containerd[1440]: 2025-05-13 00:28:20.554 [INFO][4523] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 00:28:20.601846 containerd[1440]: 2025-05-13 00:28:20.557 [INFO][4523] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 00:28:20.601846 containerd[1440]: 2025-05-13 00:28:20.557 [INFO][4523] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ff419a29b054aa4a565b46c8c9ac387f946ee244b963e0f429571fd985b9efa1" host="localhost" May 13 00:28:20.601846 containerd[1440]: 2025-05-13 00:28:20.558 [INFO][4523] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ff419a29b054aa4a565b46c8c9ac387f946ee244b963e0f429571fd985b9efa1 May 13 00:28:20.601846 containerd[1440]: 2025-05-13 00:28:20.563 [INFO][4523] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ff419a29b054aa4a565b46c8c9ac387f946ee244b963e0f429571fd985b9efa1" host="localhost" May 13 00:28:20.601846 containerd[1440]: 2025-05-13 00:28:20.570 [INFO][4523] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.ff419a29b054aa4a565b46c8c9ac387f946ee244b963e0f429571fd985b9efa1" host="localhost" May 13 00:28:20.601846 containerd[1440]: 2025-05-13 00:28:20.570 [INFO][4523] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.ff419a29b054aa4a565b46c8c9ac387f946ee244b963e0f429571fd985b9efa1" host="localhost" May 13 00:28:20.601846 containerd[1440]: 2025-05-13 00:28:20.570 [INFO][4523] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 00:28:20.601846 containerd[1440]: 2025-05-13 00:28:20.570 [INFO][4523] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="ff419a29b054aa4a565b46c8c9ac387f946ee244b963e0f429571fd985b9efa1" HandleID="k8s-pod-network.ff419a29b054aa4a565b46c8c9ac387f946ee244b963e0f429571fd985b9efa1" Workload="localhost-k8s-csi--node--driver--bgptb-eth0" May 13 00:28:20.602804 containerd[1440]: 2025-05-13 00:28:20.574 [INFO][4483] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ff419a29b054aa4a565b46c8c9ac387f946ee244b963e0f429571fd985b9efa1" Namespace="calico-system" Pod="csi-node-driver-bgptb" WorkloadEndpoint="localhost-k8s-csi--node--driver--bgptb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bgptb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5e18653e-7d13-4d2f-8b0d-991f11e13bcd", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 27, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-bgptb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali804dc27fbc7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:28:20.602804 containerd[1440]: 2025-05-13 00:28:20.574 [INFO][4483] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="ff419a29b054aa4a565b46c8c9ac387f946ee244b963e0f429571fd985b9efa1" Namespace="calico-system" Pod="csi-node-driver-bgptb" WorkloadEndpoint="localhost-k8s-csi--node--driver--bgptb-eth0" May 13 00:28:20.602804 containerd[1440]: 2025-05-13 00:28:20.574 [INFO][4483] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali804dc27fbc7 ContainerID="ff419a29b054aa4a565b46c8c9ac387f946ee244b963e0f429571fd985b9efa1" Namespace="calico-system" Pod="csi-node-driver-bgptb" WorkloadEndpoint="localhost-k8s-csi--node--driver--bgptb-eth0" May 13 00:28:20.602804 containerd[1440]: 2025-05-13 00:28:20.580 [INFO][4483] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ff419a29b054aa4a565b46c8c9ac387f946ee244b963e0f429571fd985b9efa1" Namespace="calico-system" Pod="csi-node-driver-bgptb" WorkloadEndpoint="localhost-k8s-csi--node--driver--bgptb-eth0" May 13 00:28:20.602804 containerd[1440]: 2025-05-13 00:28:20.580 [INFO][4483] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ff419a29b054aa4a565b46c8c9ac387f946ee244b963e0f429571fd985b9efa1" Namespace="calico-system" Pod="csi-node-driver-bgptb" WorkloadEndpoint="localhost-k8s-csi--node--driver--bgptb-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bgptb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5e18653e-7d13-4d2f-8b0d-991f11e13bcd", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 27, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ff419a29b054aa4a565b46c8c9ac387f946ee244b963e0f429571fd985b9efa1", Pod:"csi-node-driver-bgptb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali804dc27fbc7", MAC:"2e:df:fd:f2:a5:49", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:28:20.602804 containerd[1440]: 2025-05-13 00:28:20.598 [INFO][4483] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ff419a29b054aa4a565b46c8c9ac387f946ee244b963e0f429571fd985b9efa1" Namespace="calico-system" Pod="csi-node-driver-bgptb" WorkloadEndpoint="localhost-k8s-csi--node--driver--bgptb-eth0" May 13 00:28:20.615308 containerd[1440]: time="2025-05-13T00:28:20.615250115Z" level=info msg="StartContainer for \"bc893b014874f559d612d10595478773d26562e67dbe81a3df3d8290db239e1e\" returns successfully" May 13 00:28:20.629599 containerd[1440]: time="2025-05-13T00:28:20.629476762Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:28:20.629599 containerd[1440]: time="2025-05-13T00:28:20.629546243Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:28:20.629599 containerd[1440]: time="2025-05-13T00:28:20.629557923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:28:20.629963 containerd[1440]: time="2025-05-13T00:28:20.629636443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:28:20.650670 systemd[1]: Started cri-containerd-ff419a29b054aa4a565b46c8c9ac387f946ee244b963e0f429571fd985b9efa1.scope - libcontainer container ff419a29b054aa4a565b46c8c9ac387f946ee244b963e0f429571fd985b9efa1. 
May 13 00:28:20.672436 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:28:20.680023 systemd-networkd[1383]: cali517e1838e3f: Link UP May 13 00:28:20.680241 systemd-networkd[1383]: cali517e1838e3f: Gained carrier May 13 00:28:20.690170 containerd[1440]: time="2025-05-13T00:28:20.690119895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bgptb,Uid:5e18653e-7d13-4d2f-8b0d-991f11e13bcd,Namespace:calico-system,Attempt:1,} returns sandbox id \"ff419a29b054aa4a565b46c8c9ac387f946ee244b963e0f429571fd985b9efa1\"" May 13 00:28:20.694107 containerd[1440]: time="2025-05-13T00:28:20.693526196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 13 00:28:20.703671 containerd[1440]: 2025-05-13 00:28:20.370 [INFO][4466] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--x58pd-eth0 coredns-668d6bf9bc- kube-system 4cfd3c97-e6c2-46da-a620-1303e1ac26b6 845 0 2025-05-13 00:27:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-x58pd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali517e1838e3f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="25a81a7afd956c584b00c6232e697d3ebadb457a0a8fb7205c87a8e0c7ed00ff" Namespace="kube-system" Pod="coredns-668d6bf9bc-x58pd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--x58pd-" May 13 00:28:20.703671 containerd[1440]: 2025-05-13 00:28:20.370 [INFO][4466] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="25a81a7afd956c584b00c6232e697d3ebadb457a0a8fb7205c87a8e0c7ed00ff" Namespace="kube-system" Pod="coredns-668d6bf9bc-x58pd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--x58pd-eth0" May 13 00:28:20.703671 containerd[1440]: 2025-05-13 00:28:20.455 [INFO][4517] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="25a81a7afd956c584b00c6232e697d3ebadb457a0a8fb7205c87a8e0c7ed00ff" HandleID="k8s-pod-network.25a81a7afd956c584b00c6232e697d3ebadb457a0a8fb7205c87a8e0c7ed00ff" Workload="localhost-k8s-coredns--668d6bf9bc--x58pd-eth0" May 13 00:28:20.703671 containerd[1440]: 2025-05-13 00:28:20.546 [INFO][4517] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="25a81a7afd956c584b00c6232e697d3ebadb457a0a8fb7205c87a8e0c7ed00ff" HandleID="k8s-pod-network.25a81a7afd956c584b00c6232e697d3ebadb457a0a8fb7205c87a8e0c7ed00ff" Workload="localhost-k8s-coredns--668d6bf9bc--x58pd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f4080), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-x58pd", "timestamp":"2025-05-13 00:28:20.455850094 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:28:20.703671 containerd[1440]: 2025-05-13 00:28:20.546 [INFO][4517] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:28:20.703671 containerd[1440]: 2025-05-13 00:28:20.570 [INFO][4517] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 00:28:20.703671 containerd[1440]: 2025-05-13 00:28:20.571 [INFO][4517] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 00:28:20.703671 containerd[1440]: 2025-05-13 00:28:20.573 [INFO][4517] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.25a81a7afd956c584b00c6232e697d3ebadb457a0a8fb7205c87a8e0c7ed00ff" host="localhost" May 13 00:28:20.703671 containerd[1440]: 2025-05-13 00:28:20.644 [INFO][4517] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 00:28:20.703671 containerd[1440]: 2025-05-13 00:28:20.650 [INFO][4517] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 00:28:20.703671 containerd[1440]: 2025-05-13 00:28:20.652 [INFO][4517] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 00:28:20.703671 containerd[1440]: 2025-05-13 00:28:20.656 [INFO][4517] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 00:28:20.703671 containerd[1440]: 2025-05-13 00:28:20.656 [INFO][4517] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.25a81a7afd956c584b00c6232e697d3ebadb457a0a8fb7205c87a8e0c7ed00ff" host="localhost" May 13 00:28:20.703671 containerd[1440]: 2025-05-13 00:28:20.658 [INFO][4517] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.25a81a7afd956c584b00c6232e697d3ebadb457a0a8fb7205c87a8e0c7ed00ff May 13 00:28:20.703671 containerd[1440]: 2025-05-13 00:28:20.664 [INFO][4517] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.25a81a7afd956c584b00c6232e697d3ebadb457a0a8fb7205c87a8e0c7ed00ff" host="localhost" May 13 00:28:20.703671 containerd[1440]: 2025-05-13 00:28:20.671 [INFO][4517] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.25a81a7afd956c584b00c6232e697d3ebadb457a0a8fb7205c87a8e0c7ed00ff" host="localhost" May 13 00:28:20.703671 containerd[1440]: 2025-05-13 00:28:20.671 [INFO][4517] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.25a81a7afd956c584b00c6232e697d3ebadb457a0a8fb7205c87a8e0c7ed00ff" host="localhost" May 13 00:28:20.703671 containerd[1440]: 2025-05-13 00:28:20.671 [INFO][4517] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 00:28:20.703671 containerd[1440]: 2025-05-13 00:28:20.671 [INFO][4517] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="25a81a7afd956c584b00c6232e697d3ebadb457a0a8fb7205c87a8e0c7ed00ff" HandleID="k8s-pod-network.25a81a7afd956c584b00c6232e697d3ebadb457a0a8fb7205c87a8e0c7ed00ff" Workload="localhost-k8s-coredns--668d6bf9bc--x58pd-eth0" May 13 00:28:20.704215 containerd[1440]: 2025-05-13 00:28:20.676 [INFO][4466] cni-plugin/k8s.go 386: Populated endpoint ContainerID="25a81a7afd956c584b00c6232e697d3ebadb457a0a8fb7205c87a8e0c7ed00ff" Namespace="kube-system" Pod="coredns-668d6bf9bc-x58pd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--x58pd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--x58pd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4cfd3c97-e6c2-46da-a620-1303e1ac26b6", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 27, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-x58pd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali517e1838e3f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:28:20.704215 containerd[1440]: 2025-05-13 00:28:20.677 [INFO][4466] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="25a81a7afd956c584b00c6232e697d3ebadb457a0a8fb7205c87a8e0c7ed00ff" Namespace="kube-system" Pod="coredns-668d6bf9bc-x58pd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--x58pd-eth0" May 13 00:28:20.704215 containerd[1440]: 2025-05-13 00:28:20.677 [INFO][4466] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali517e1838e3f ContainerID="25a81a7afd956c584b00c6232e697d3ebadb457a0a8fb7205c87a8e0c7ed00ff" Namespace="kube-system" Pod="coredns-668d6bf9bc-x58pd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--x58pd-eth0" May 13 00:28:20.704215 containerd[1440]: 2025-05-13 00:28:20.679 [INFO][4466] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="25a81a7afd956c584b00c6232e697d3ebadb457a0a8fb7205c87a8e0c7ed00ff" Namespace="kube-system" Pod="coredns-668d6bf9bc-x58pd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--x58pd-eth0" May 13 00:28:20.704215 containerd[1440]: 2025-05-13 00:28:20.681 
[INFO][4466] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="25a81a7afd956c584b00c6232e697d3ebadb457a0a8fb7205c87a8e0c7ed00ff" Namespace="kube-system" Pod="coredns-668d6bf9bc-x58pd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--x58pd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--x58pd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4cfd3c97-e6c2-46da-a620-1303e1ac26b6", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 27, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"25a81a7afd956c584b00c6232e697d3ebadb457a0a8fb7205c87a8e0c7ed00ff", Pod:"coredns-668d6bf9bc-x58pd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali517e1838e3f", MAC:"e6:7d:63:e6:cb:6d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:28:20.704215 containerd[1440]: 2025-05-13 00:28:20.701 [INFO][4466] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="25a81a7afd956c584b00c6232e697d3ebadb457a0a8fb7205c87a8e0c7ed00ff" Namespace="kube-system" Pod="coredns-668d6bf9bc-x58pd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--x58pd-eth0" May 13 00:28:20.726933 containerd[1440]: time="2025-05-13T00:28:20.726833601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:28:20.726933 containerd[1440]: time="2025-05-13T00:28:20.726898482Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:28:20.726933 containerd[1440]: time="2025-05-13T00:28:20.726915722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:28:20.727165 containerd[1440]: time="2025-05-13T00:28:20.727023883Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:28:20.746589 systemd[1]: Started cri-containerd-25a81a7afd956c584b00c6232e697d3ebadb457a0a8fb7205c87a8e0c7ed00ff.scope - libcontainer container 25a81a7afd956c584b00c6232e697d3ebadb457a0a8fb7205c87a8e0c7ed00ff. 
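One detail of the endpoint dumps above is easy to misread: the WorkloadEndpointPort structs print their ports as Go hex literals, so coredns shows Port:0x35 for both dns entries and Port:0x23c1 for metrics. Decoded, those are exactly the expected coredns ports:

    package main

    import "fmt"

    func main() {
        // Values as printed inside the WorkloadEndpointPort structs above.
        fmt.Println(0x35)   // 53   -> dns and dns-tcp
        fmt.Println(0x23c1) // 9153 -> coredns Prometheus metrics port
    }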
May 13 00:28:20.759811 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:28:20.782210 containerd[1440]: time="2025-05-13T00:28:20.782167542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-x58pd,Uid:4cfd3c97-e6c2-46da-a620-1303e1ac26b6,Namespace:kube-system,Attempt:1,} returns sandbox id \"25a81a7afd956c584b00c6232e697d3ebadb457a0a8fb7205c87a8e0c7ed00ff\"" May 13 00:28:20.782986 kubelet[2479]: E0513 00:28:20.782962 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:28:20.784845 containerd[1440]: time="2025-05-13T00:28:20.784800078Z" level=info msg="CreateContainer within sandbox \"25a81a7afd956c584b00c6232e697d3ebadb457a0a8fb7205c87a8e0c7ed00ff\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 00:28:20.796472 containerd[1440]: time="2025-05-13T00:28:20.796401350Z" level=info msg="CreateContainer within sandbox \"25a81a7afd956c584b00c6232e697d3ebadb457a0a8fb7205c87a8e0c7ed00ff\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4948df18dbaf38ff90f88e0247e9be33646eb4008f646242624da206fbe7bdb0\"" May 13 00:28:20.797439 containerd[1440]: time="2025-05-13T00:28:20.796937073Z" level=info msg="StartContainer for \"4948df18dbaf38ff90f88e0247e9be33646eb4008f646242624da206fbe7bdb0\"" May 13 00:28:20.823594 systemd[1]: Started cri-containerd-4948df18dbaf38ff90f88e0247e9be33646eb4008f646242624da206fbe7bdb0.scope - libcontainer container 4948df18dbaf38ff90f88e0247e9be33646eb4008f646242624da206fbe7bdb0. May 13 00:28:20.871155 containerd[1440]: time="2025-05-13T00:28:20.871110849Z" level=info msg="StartContainer for \"4948df18dbaf38ff90f88e0247e9be33646eb4008f646242624da206fbe7bdb0\" returns successfully" May 13 00:28:21.163419 containerd[1440]: time="2025-05-13T00:28:21.163360580Z" level=info msg="StopPodSandbox for \"c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345\"" May 13 00:28:21.252322 containerd[1440]: 2025-05-13 00:28:21.210 [INFO][4754] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345" May 13 00:28:21.252322 containerd[1440]: 2025-05-13 00:28:21.210 [INFO][4754] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345" iface="eth0" netns="/var/run/netns/cni-e8b686b9-51ce-b524-9b73-74f7406fd7da" May 13 00:28:21.252322 containerd[1440]: 2025-05-13 00:28:21.211 [INFO][4754] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345" iface="eth0" netns="/var/run/netns/cni-e8b686b9-51ce-b524-9b73-74f7406fd7da" May 13 00:28:21.252322 containerd[1440]: 2025-05-13 00:28:21.211 [INFO][4754] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345" iface="eth0" netns="/var/run/netns/cni-e8b686b9-51ce-b524-9b73-74f7406fd7da" May 13 00:28:21.252322 containerd[1440]: 2025-05-13 00:28:21.211 [INFO][4754] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345" May 13 00:28:21.252322 containerd[1440]: 2025-05-13 00:28:21.211 [INFO][4754] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345" May 13 00:28:21.252322 containerd[1440]: 2025-05-13 00:28:21.237 [INFO][4763] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345" HandleID="k8s-pod-network.c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345" Workload="localhost-k8s-calico--kube--controllers--56cdf655d5--cvxnm-eth0" May 13 00:28:21.252322 containerd[1440]: 2025-05-13 00:28:21.237 [INFO][4763] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:28:21.252322 containerd[1440]: 2025-05-13 00:28:21.237 [INFO][4763] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:28:21.252322 containerd[1440]: 2025-05-13 00:28:21.247 [WARNING][4763] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345" HandleID="k8s-pod-network.c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345" Workload="localhost-k8s-calico--kube--controllers--56cdf655d5--cvxnm-eth0" May 13 00:28:21.252322 containerd[1440]: 2025-05-13 00:28:21.247 [INFO][4763] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345" HandleID="k8s-pod-network.c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345" Workload="localhost-k8s-calico--kube--controllers--56cdf655d5--cvxnm-eth0" May 13 00:28:21.252322 containerd[1440]: 2025-05-13 00:28:21.248 [INFO][4763] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:28:21.252322 containerd[1440]: 2025-05-13 00:28:21.250 [INFO][4754] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345" May 13 00:28:21.252900 containerd[1440]: time="2025-05-13T00:28:21.252482433Z" level=info msg="TearDown network for sandbox \"c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345\" successfully" May 13 00:28:21.252900 containerd[1440]: time="2025-05-13T00:28:21.252508353Z" level=info msg="StopPodSandbox for \"c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345\" returns successfully" May 13 00:28:21.253188 containerd[1440]: time="2025-05-13T00:28:21.253146357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56cdf655d5-cvxnm,Uid:66dabaa2-e4d2-4e87-bf56-94c73a7429d1,Namespace:calico-system,Attempt:1,}" May 13 00:28:21.319151 kubelet[2479]: E0513 00:28:21.318317 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:28:21.358136 kubelet[2479]: I0513 00:28:21.357578 2479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-74cfd5766c-2dwxc" podStartSLOduration=23.518732667 podStartE2EDuration="25.357561021s" podCreationTimestamp="2025-05-13 00:27:56 +0000 UTC" firstStartedPulling="2025-05-13 00:28:18.665968761 +0000 UTC m=+34.581895293" lastFinishedPulling="2025-05-13 00:28:20.504797115 +0000 UTC m=+36.420723647" observedRunningTime="2025-05-13 00:28:21.35742662 +0000 UTC m=+37.273353152" watchObservedRunningTime="2025-05-13 00:28:21.357561021 +0000 UTC m=+37.273487553" May 13 00:28:21.358136 kubelet[2479]: I0513 00:28:21.357844 2479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-x58pd" podStartSLOduration=31.357839143 podStartE2EDuration="31.357839143s" podCreationTimestamp="2025-05-13 00:27:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:28:21.33560681 +0000 UTC m=+37.251533342" watchObservedRunningTime="2025-05-13 00:28:21.357839143 +0000 UTC m=+37.273765675" May 13 00:28:21.386105 kubelet[2479]: I0513 00:28:21.386034 2479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-74cfd5766c-hkfcg" podStartSLOduration=23.681550669 podStartE2EDuration="25.386015791s" podCreationTimestamp="2025-05-13 00:27:56 +0000 UTC" firstStartedPulling="2025-05-13 00:28:18.574565523 +0000 UTC m=+34.490492055" lastFinishedPulling="2025-05-13 00:28:20.279030645 +0000 UTC m=+36.194957177" observedRunningTime="2025-05-13 00:28:21.368851848 +0000 UTC m=+37.284778380" watchObservedRunningTime="2025-05-13 00:28:21.386015791 +0000 UTC m=+37.301942323" May 13 00:28:21.419403 systemd[1]: run-netns-cni\x2de8b686b9\x2d51ce\x2db524\x2d9b73\x2d74f7406fd7da.mount: Deactivated successfully. 
May 13 00:28:21.477372 systemd-networkd[1383]: calif4ade593a67: Link UP May 13 00:28:21.478091 systemd-networkd[1383]: calif4ade593a67: Gained carrier May 13 00:28:21.499898 containerd[1440]: 2025-05-13 00:28:21.296 [INFO][4771] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--56cdf655d5--cvxnm-eth0 calico-kube-controllers-56cdf655d5- calico-system 66dabaa2-e4d2-4e87-bf56-94c73a7429d1 872 0 2025-05-13 00:27:56 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:56cdf655d5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-56cdf655d5-cvxnm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif4ade593a67 [] []}} ContainerID="5b0e4277b05660462de010091c6571092375451c24055544bcd31f4b95fd8d20" Namespace="calico-system" Pod="calico-kube-controllers-56cdf655d5-cvxnm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56cdf655d5--cvxnm-" May 13 00:28:21.499898 containerd[1440]: 2025-05-13 00:28:21.296 [INFO][4771] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5b0e4277b05660462de010091c6571092375451c24055544bcd31f4b95fd8d20" Namespace="calico-system" Pod="calico-kube-controllers-56cdf655d5-cvxnm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56cdf655d5--cvxnm-eth0" May 13 00:28:21.499898 containerd[1440]: 2025-05-13 00:28:21.333 [INFO][4784] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5b0e4277b05660462de010091c6571092375451c24055544bcd31f4b95fd8d20" HandleID="k8s-pod-network.5b0e4277b05660462de010091c6571092375451c24055544bcd31f4b95fd8d20" Workload="localhost-k8s-calico--kube--controllers--56cdf655d5--cvxnm-eth0" May 13 00:28:21.499898 containerd[1440]: 2025-05-13 00:28:21.350 [INFO][4784] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5b0e4277b05660462de010091c6571092375451c24055544bcd31f4b95fd8d20" HandleID="k8s-pod-network.5b0e4277b05660462de010091c6571092375451c24055544bcd31f4b95fd8d20" Workload="localhost-k8s-calico--kube--controllers--56cdf655d5--cvxnm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003e0b60), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-56cdf655d5-cvxnm", "timestamp":"2025-05-13 00:28:21.333870599 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:28:21.499898 containerd[1440]: 2025-05-13 00:28:21.350 [INFO][4784] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:28:21.499898 containerd[1440]: 2025-05-13 00:28:21.350 [INFO][4784] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 00:28:21.499898 containerd[1440]: 2025-05-13 00:28:21.350 [INFO][4784] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 00:28:21.499898 containerd[1440]: 2025-05-13 00:28:21.353 [INFO][4784] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5b0e4277b05660462de010091c6571092375451c24055544bcd31f4b95fd8d20" host="localhost" May 13 00:28:21.499898 containerd[1440]: 2025-05-13 00:28:21.450 [INFO][4784] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 00:28:21.499898 containerd[1440]: 2025-05-13 00:28:21.455 [INFO][4784] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 00:28:21.499898 containerd[1440]: 2025-05-13 00:28:21.457 [INFO][4784] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 00:28:21.499898 containerd[1440]: 2025-05-13 00:28:21.459 [INFO][4784] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 00:28:21.499898 containerd[1440]: 2025-05-13 00:28:21.459 [INFO][4784] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5b0e4277b05660462de010091c6571092375451c24055544bcd31f4b95fd8d20" host="localhost" May 13 00:28:21.499898 containerd[1440]: 2025-05-13 00:28:21.461 [INFO][4784] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5b0e4277b05660462de010091c6571092375451c24055544bcd31f4b95fd8d20 May 13 00:28:21.499898 containerd[1440]: 2025-05-13 00:28:21.465 [INFO][4784] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5b0e4277b05660462de010091c6571092375451c24055544bcd31f4b95fd8d20" host="localhost" May 13 00:28:21.499898 containerd[1440]: 2025-05-13 00:28:21.473 [INFO][4784] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.5b0e4277b05660462de010091c6571092375451c24055544bcd31f4b95fd8d20" host="localhost" May 13 00:28:21.499898 containerd[1440]: 2025-05-13 00:28:21.473 [INFO][4784] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.5b0e4277b05660462de010091c6571092375451c24055544bcd31f4b95fd8d20" host="localhost" May 13 00:28:21.499898 containerd[1440]: 2025-05-13 00:28:21.473 [INFO][4784] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
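With the third claim above, the node has handed out 192.168.88.132, .133 and .134 from the one block it holds an affinity for, 192.168.88.128/26. The block bounds are worth sanity-checking once, since the log only ever prints the CIDR; nothing beyond net/netip is needed:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.88.128/26")

        // A /26 covers 2^(32-26) = 64 addresses: .128 through .191.
        first := block.Masked().Addr()
        last := first
        for i := 0; i < 63; i++ {
            last = last.Next()
        }
        fmt.Printf("%s spans %s-%s (64 addresses)\n", block, first, last)

        // The three addresses claimed in this log all fall inside it.
        for _, s := range []string{"192.168.88.132", "192.168.88.133", "192.168.88.134"} {
            a := netip.MustParseAddr(s)
            fmt.Printf("%-16s in block: %v\n", a, block.Contains(a))
        }
    }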
May 13 00:28:21.499898 containerd[1440]: 2025-05-13 00:28:21.473 [INFO][4784] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="5b0e4277b05660462de010091c6571092375451c24055544bcd31f4b95fd8d20" HandleID="k8s-pod-network.5b0e4277b05660462de010091c6571092375451c24055544bcd31f4b95fd8d20" Workload="localhost-k8s-calico--kube--controllers--56cdf655d5--cvxnm-eth0" May 13 00:28:21.500456 containerd[1440]: 2025-05-13 00:28:21.475 [INFO][4771] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5b0e4277b05660462de010091c6571092375451c24055544bcd31f4b95fd8d20" Namespace="calico-system" Pod="calico-kube-controllers-56cdf655d5-cvxnm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56cdf655d5--cvxnm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--56cdf655d5--cvxnm-eth0", GenerateName:"calico-kube-controllers-56cdf655d5-", Namespace:"calico-system", SelfLink:"", UID:"66dabaa2-e4d2-4e87-bf56-94c73a7429d1", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 27, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56cdf655d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-56cdf655d5-cvxnm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif4ade593a67", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:28:21.500456 containerd[1440]: 2025-05-13 00:28:21.475 [INFO][4771] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="5b0e4277b05660462de010091c6571092375451c24055544bcd31f4b95fd8d20" Namespace="calico-system" Pod="calico-kube-controllers-56cdf655d5-cvxnm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56cdf655d5--cvxnm-eth0" May 13 00:28:21.500456 containerd[1440]: 2025-05-13 00:28:21.475 [INFO][4771] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif4ade593a67 ContainerID="5b0e4277b05660462de010091c6571092375451c24055544bcd31f4b95fd8d20" Namespace="calico-system" Pod="calico-kube-controllers-56cdf655d5-cvxnm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56cdf655d5--cvxnm-eth0" May 13 00:28:21.500456 containerd[1440]: 2025-05-13 00:28:21.478 [INFO][4771] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5b0e4277b05660462de010091c6571092375451c24055544bcd31f4b95fd8d20" Namespace="calico-system" Pod="calico-kube-controllers-56cdf655d5-cvxnm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56cdf655d5--cvxnm-eth0" May 13 00:28:21.500456 containerd[1440]: 2025-05-13 00:28:21.479 [INFO][4771] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="5b0e4277b05660462de010091c6571092375451c24055544bcd31f4b95fd8d20" Namespace="calico-system" Pod="calico-kube-controllers-56cdf655d5-cvxnm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56cdf655d5--cvxnm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--56cdf655d5--cvxnm-eth0", GenerateName:"calico-kube-controllers-56cdf655d5-", Namespace:"calico-system", SelfLink:"", UID:"66dabaa2-e4d2-4e87-bf56-94c73a7429d1", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 27, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56cdf655d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5b0e4277b05660462de010091c6571092375451c24055544bcd31f4b95fd8d20", Pod:"calico-kube-controllers-56cdf655d5-cvxnm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif4ade593a67", MAC:"42:ca:55:fe:1c:5b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:28:21.500456 containerd[1440]: 2025-05-13 00:28:21.492 [INFO][4771] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5b0e4277b05660462de010091c6571092375451c24055544bcd31f4b95fd8d20" Namespace="calico-system" Pod="calico-kube-controllers-56cdf655d5-cvxnm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56cdf655d5--cvxnm-eth0" May 13 00:28:21.519604 containerd[1440]: time="2025-05-13T00:28:21.519267268Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:28:21.523419 containerd[1440]: time="2025-05-13T00:28:21.519598350Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:28:21.523419 containerd[1440]: time="2025-05-13T00:28:21.519617630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:28:21.523419 containerd[1440]: time="2025-05-13T00:28:21.519719751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:28:21.550604 systemd[1]: Started cri-containerd-5b0e4277b05660462de010091c6571092375451c24055544bcd31f4b95fd8d20.scope - libcontainer container 5b0e4277b05660462de010091c6571092375451c24055544bcd31f4b95fd8d20. 
May 13 00:28:21.562925 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:28:21.588757 containerd[1440]: time="2025-05-13T00:28:21.588710923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56cdf655d5-cvxnm,Uid:66dabaa2-e4d2-4e87-bf56-94c73a7429d1,Namespace:calico-system,Attempt:1,} returns sandbox id \"5b0e4277b05660462de010091c6571092375451c24055544bcd31f4b95fd8d20\"" May 13 00:28:21.869808 containerd[1440]: time="2025-05-13T00:28:21.869752404Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:21.871175 containerd[1440]: time="2025-05-13T00:28:21.871137652Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" May 13 00:28:21.871343 containerd[1440]: time="2025-05-13T00:28:21.871321533Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:21.874893 containerd[1440]: time="2025-05-13T00:28:21.874853794Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:21.875373 containerd[1440]: time="2025-05-13T00:28:21.875342557Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 1.18177604s" May 13 00:28:21.875425 containerd[1440]: time="2025-05-13T00:28:21.875379117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" May 13 00:28:21.884384 containerd[1440]: time="2025-05-13T00:28:21.884299691Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 13 00:28:21.885045 containerd[1440]: time="2025-05-13T00:28:21.884999655Z" level=info msg="CreateContainer within sandbox \"ff419a29b054aa4a565b46c8c9ac387f946ee244b963e0f429571fd985b9efa1\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 13 00:28:21.933658 containerd[1440]: time="2025-05-13T00:28:21.933608785Z" level=info msg="CreateContainer within sandbox \"ff419a29b054aa4a565b46c8c9ac387f946ee244b963e0f429571fd985b9efa1\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c6a7900e81af10e6498854d3aae5422047486a9582a1fd8ec5c91fb1d5379d6f\"" May 13 00:28:21.934352 containerd[1440]: time="2025-05-13T00:28:21.934322310Z" level=info msg="StartContainer for \"c6a7900e81af10e6498854d3aae5422047486a9582a1fd8ec5c91fb1d5379d6f\"" May 13 00:28:21.961570 systemd[1]: Started cri-containerd-c6a7900e81af10e6498854d3aae5422047486a9582a1fd8ec5c91fb1d5379d6f.scope - libcontainer container c6a7900e81af10e6498854d3aae5422047486a9582a1fd8ec5c91fb1d5379d6f. 
May 13 00:28:21.990852 containerd[1440]: time="2025-05-13T00:28:21.990807048Z" level=info msg="StartContainer for \"c6a7900e81af10e6498854d3aae5422047486a9582a1fd8ec5c91fb1d5379d6f\" returns successfully" May 13 00:28:22.316871 systemd-networkd[1383]: cali804dc27fbc7: Gained IPv6LL May 13 00:28:22.348221 kubelet[2479]: E0513 00:28:22.348168 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:28:22.355721 kubelet[2479]: I0513 00:28:22.355672 2479 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 00:28:22.513916 systemd-networkd[1383]: calif4ade593a67: Gained IPv6LL May 13 00:28:22.572581 systemd-networkd[1383]: cali517e1838e3f: Gained IPv6LL May 13 00:28:23.350034 kubelet[2479]: E0513 00:28:23.349678 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:28:23.360284 containerd[1440]: time="2025-05-13T00:28:23.359501354Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:23.361395 containerd[1440]: time="2025-05-13T00:28:23.361334644Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116" May 13 00:28:23.362413 containerd[1440]: time="2025-05-13T00:28:23.362376250Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:23.364896 containerd[1440]: time="2025-05-13T00:28:23.364862744Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:23.365555 containerd[1440]: time="2025-05-13T00:28:23.365450548Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 1.481066977s" May 13 00:28:23.365657 containerd[1440]: time="2025-05-13T00:28:23.365640909Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" May 13 00:28:23.366765 containerd[1440]: time="2025-05-13T00:28:23.366543994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 13 00:28:23.372640 containerd[1440]: time="2025-05-13T00:28:23.372610148Z" level=info msg="CreateContainer within sandbox \"5b0e4277b05660462de010091c6571092375451c24055544bcd31f4b95fd8d20\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 13 00:28:23.386121 containerd[1440]: time="2025-05-13T00:28:23.386067424Z" level=info msg="CreateContainer within sandbox \"5b0e4277b05660462de010091c6571092375451c24055544bcd31f4b95fd8d20\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"73ec99fffc43c7cb7bab64609d4c5081ec1b80bc134ae7791851fff85ff362b2\"" May 
13 00:28:23.386888 containerd[1440]: time="2025-05-13T00:28:23.386858589Z" level=info msg="StartContainer for \"73ec99fffc43c7cb7bab64609d4c5081ec1b80bc134ae7791851fff85ff362b2\"" May 13 00:28:23.416546 systemd[1]: Started cri-containerd-73ec99fffc43c7cb7bab64609d4c5081ec1b80bc134ae7791851fff85ff362b2.scope - libcontainer container 73ec99fffc43c7cb7bab64609d4c5081ec1b80bc134ae7791851fff85ff362b2. May 13 00:28:23.454228 containerd[1440]: time="2025-05-13T00:28:23.453654687Z" level=info msg="StartContainer for \"73ec99fffc43c7cb7bab64609d4c5081ec1b80bc134ae7791851fff85ff362b2\" returns successfully" May 13 00:28:24.366995 kubelet[2479]: I0513 00:28:24.366834 2479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-56cdf655d5-cvxnm" podStartSLOduration=26.590874464 podStartE2EDuration="28.366814484s" podCreationTimestamp="2025-05-13 00:27:56 +0000 UTC" firstStartedPulling="2025-05-13 00:28:21.590380533 +0000 UTC m=+37.506307065" lastFinishedPulling="2025-05-13 00:28:23.366320553 +0000 UTC m=+39.282247085" observedRunningTime="2025-05-13 00:28:24.365126235 +0000 UTC m=+40.281052767" watchObservedRunningTime="2025-05-13 00:28:24.366814484 +0000 UTC m=+40.282741016" May 13 00:28:24.730738 containerd[1440]: time="2025-05-13T00:28:24.730624331Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:24.731680 containerd[1440]: time="2025-05-13T00:28:24.731361655Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" May 13 00:28:24.732441 containerd[1440]: time="2025-05-13T00:28:24.732391981Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:24.735233 containerd[1440]: time="2025-05-13T00:28:24.735196997Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:24.735864 containerd[1440]: time="2025-05-13T00:28:24.735824480Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 1.369184726s" May 13 00:28:24.735920 containerd[1440]: time="2025-05-13T00:28:24.735862360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" May 13 00:28:24.738825 containerd[1440]: time="2025-05-13T00:28:24.738794496Z" level=info msg="CreateContainer within sandbox \"ff419a29b054aa4a565b46c8c9ac387f946ee244b963e0f429571fd985b9efa1\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 13 00:28:24.763162 containerd[1440]: time="2025-05-13T00:28:24.763110631Z" level=info msg="CreateContainer within sandbox \"ff419a29b054aa4a565b46c8c9ac387f946ee244b963e0f429571fd985b9efa1\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id 
\"f28fb2ea6e1d5ac51b75a72735cb09821b6ca441a1b3a3e24d32762ec796aa4b\"" May 13 00:28:24.763992 containerd[1440]: time="2025-05-13T00:28:24.763656354Z" level=info msg="StartContainer for \"f28fb2ea6e1d5ac51b75a72735cb09821b6ca441a1b3a3e24d32762ec796aa4b\"" May 13 00:28:24.793570 systemd[1]: Started cri-containerd-f28fb2ea6e1d5ac51b75a72735cb09821b6ca441a1b3a3e24d32762ec796aa4b.scope - libcontainer container f28fb2ea6e1d5ac51b75a72735cb09821b6ca441a1b3a3e24d32762ec796aa4b. May 13 00:28:24.818550 containerd[1440]: time="2025-05-13T00:28:24.818509016Z" level=info msg="StartContainer for \"f28fb2ea6e1d5ac51b75a72735cb09821b6ca441a1b3a3e24d32762ec796aa4b\" returns successfully" May 13 00:28:25.254765 kubelet[2479]: I0513 00:28:25.254720 2479 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 13 00:28:25.254765 kubelet[2479]: I0513 00:28:25.254770 2479 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 13 00:28:25.373856 kubelet[2479]: I0513 00:28:25.373770 2479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-bgptb" podStartSLOduration=25.330022857 podStartE2EDuration="29.373753149s" podCreationTimestamp="2025-05-13 00:27:56 +0000 UTC" firstStartedPulling="2025-05-13 00:28:20.693024673 +0000 UTC m=+36.608951205" lastFinishedPulling="2025-05-13 00:28:24.736755005 +0000 UTC m=+40.652681497" observedRunningTime="2025-05-13 00:28:25.373263027 +0000 UTC m=+41.289189559" watchObservedRunningTime="2025-05-13 00:28:25.373753149 +0000 UTC m=+41.289679641" May 13 00:28:25.488577 systemd[1]: Started sshd@9-10.0.0.91:22-10.0.0.1:39508.service - OpenSSH per-connection server daemon (10.0.0.1:39508). May 13 00:28:25.574750 sshd[5013]: Accepted publickey for core from 10.0.0.1 port 39508 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:28:25.576356 sshd[5013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:28:25.580450 systemd-logind[1424]: New session 10 of user core. May 13 00:28:25.590613 systemd[1]: Started session-10.scope - Session 10 of User core. May 13 00:28:25.818177 sshd[5013]: pam_unix(sshd:session): session closed for user core May 13 00:28:25.829193 systemd[1]: sshd@9-10.0.0.91:22-10.0.0.1:39508.service: Deactivated successfully. May 13 00:28:25.831672 systemd[1]: session-10.scope: Deactivated successfully. May 13 00:28:25.834247 systemd-logind[1424]: Session 10 logged out. Waiting for processes to exit. May 13 00:28:25.838769 systemd[1]: Started sshd@10-10.0.0.91:22-10.0.0.1:39524.service - OpenSSH per-connection server daemon (10.0.0.1:39524). May 13 00:28:25.840482 systemd-logind[1424]: Removed session 10. May 13 00:28:25.873838 sshd[5029]: Accepted publickey for core from 10.0.0.1 port 39524 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:28:25.875004 sshd[5029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:28:25.879470 systemd-logind[1424]: New session 11 of user core. May 13 00:28:25.886571 systemd[1]: Started session-11.scope - Session 11 of User core. May 13 00:28:26.082062 sshd[5029]: pam_unix(sshd:session): session closed for user core May 13 00:28:26.092969 systemd[1]: sshd@10-10.0.0.91:22-10.0.0.1:39524.service: Deactivated successfully. 
May 13 00:28:26.095510 systemd[1]: session-11.scope: Deactivated successfully. May 13 00:28:26.097552 systemd-logind[1424]: Session 11 logged out. Waiting for processes to exit. May 13 00:28:26.105770 systemd[1]: Started sshd@11-10.0.0.91:22-10.0.0.1:39532.service - OpenSSH per-connection server daemon (10.0.0.1:39532). May 13 00:28:26.106944 systemd-logind[1424]: Removed session 11. May 13 00:28:26.148955 sshd[5041]: Accepted publickey for core from 10.0.0.1 port 39532 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:28:26.150454 sshd[5041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:28:26.154441 systemd-logind[1424]: New session 12 of user core. May 13 00:28:26.160593 systemd[1]: Started session-12.scope - Session 12 of User core. May 13 00:28:26.234034 kubelet[2479]: I0513 00:28:26.233993 2479 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 00:28:26.317116 sshd[5041]: pam_unix(sshd:session): session closed for user core May 13 00:28:26.321301 systemd[1]: sshd@11-10.0.0.91:22-10.0.0.1:39532.service: Deactivated successfully. May 13 00:28:26.323151 systemd[1]: session-12.scope: Deactivated successfully. May 13 00:28:26.323780 systemd-logind[1424]: Session 12 logged out. Waiting for processes to exit. May 13 00:28:26.324990 systemd-logind[1424]: Removed session 12. May 13 00:28:31.340698 systemd[1]: Started sshd@12-10.0.0.91:22-10.0.0.1:39546.service - OpenSSH per-connection server daemon (10.0.0.1:39546). May 13 00:28:31.377258 sshd[5063]: Accepted publickey for core from 10.0.0.1 port 39546 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:28:31.379014 sshd[5063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:28:31.382661 systemd-logind[1424]: New session 13 of user core. May 13 00:28:31.391608 systemd[1]: Started session-13.scope - Session 13 of User core. May 13 00:28:31.603532 sshd[5063]: pam_unix(sshd:session): session closed for user core May 13 00:28:31.617296 systemd[1]: sshd@12-10.0.0.91:22-10.0.0.1:39546.service: Deactivated successfully. May 13 00:28:31.619034 systemd[1]: session-13.scope: Deactivated successfully. May 13 00:28:31.620561 systemd-logind[1424]: Session 13 logged out. Waiting for processes to exit. May 13 00:28:31.627791 systemd[1]: Started sshd@13-10.0.0.91:22-10.0.0.1:39548.service - OpenSSH per-connection server daemon (10.0.0.1:39548). May 13 00:28:31.628713 systemd-logind[1424]: Removed session 13. May 13 00:28:31.665130 sshd[5079]: Accepted publickey for core from 10.0.0.1 port 39548 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:28:31.666569 sshd[5079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:28:31.670559 systemd-logind[1424]: New session 14 of user core. May 13 00:28:31.678601 systemd[1]: Started session-14.scope - Session 14 of User core. May 13 00:28:31.955033 sshd[5079]: pam_unix(sshd:session): session closed for user core May 13 00:28:31.969253 systemd[1]: sshd@13-10.0.0.91:22-10.0.0.1:39548.service: Deactivated successfully. May 13 00:28:31.970886 systemd[1]: session-14.scope: Deactivated successfully. May 13 00:28:31.972700 systemd-logind[1424]: Session 14 logged out. Waiting for processes to exit. May 13 00:28:31.981943 systemd[1]: Started sshd@14-10.0.0.91:22-10.0.0.1:39560.service - OpenSSH per-connection server daemon (10.0.0.1:39560). May 13 00:28:31.983012 systemd-logind[1424]: Removed session 14. 
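From here the log settles into the systemd/sshd session lifecycle: Accepted publickey, pam_unix session opened, New session N of user core, then the mirror image on close, ending with Removed session N. When auditing a longer journal for sessions that never closed, a small stdlib filter over these exact phrasings is enough:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    var (
        openRe   = regexp.MustCompile(`New session (\d+) of user (\w+)`)
        removeRe = regexp.MustCompile(`Removed session (\d+)\.`)
    )

    // Reads journal text on stdin and reports sessions opened but never removed.
    func main() {
        open := map[string]string{} // session number -> user
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
        for sc.Scan() {
            line := sc.Text()
            if m := openRe.FindStringSubmatch(line); m != nil {
                open[m[1]] = m[2]
            } else if m := removeRe.FindStringSubmatch(line); m != nil {
                delete(open, m[1])
            }
        }
        for id, user := range open {
            fmt.Printf("session %s (user %s) was never removed\n", id, user)
        }
    }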
May 13 00:28:32.017809 sshd[5091]: Accepted publickey for core from 10.0.0.1 port 39560 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU
May 13 00:28:32.019137 sshd[5091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:28:32.023010 systemd-logind[1424]: New session 15 of user core.
May 13 00:28:32.029580 systemd[1]: Started session-15.scope - Session 15 of User core.
May 13 00:28:32.783488 sshd[5091]: pam_unix(sshd:session): session closed for user core
May 13 00:28:32.792600 systemd[1]: sshd@14-10.0.0.91:22-10.0.0.1:39560.service: Deactivated successfully.
May 13 00:28:32.794941 systemd[1]: session-15.scope: Deactivated successfully.
May 13 00:28:32.797707 systemd-logind[1424]: Session 15 logged out. Waiting for processes to exit.
May 13 00:28:32.810744 systemd[1]: Started sshd@15-10.0.0.91:22-10.0.0.1:38176.service - OpenSSH per-connection server daemon (10.0.0.1:38176).
May 13 00:28:32.812674 systemd-logind[1424]: Removed session 15.
May 13 00:28:32.849104 sshd[5117]: Accepted publickey for core from 10.0.0.1 port 38176 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU
May 13 00:28:32.850695 sshd[5117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:28:32.854786 systemd-logind[1424]: New session 16 of user core.
May 13 00:28:32.864582 systemd[1]: Started session-16.scope - Session 16 of User core.
May 13 00:28:33.192179 sshd[5117]: pam_unix(sshd:session): session closed for user core
May 13 00:28:33.201210 systemd[1]: sshd@15-10.0.0.91:22-10.0.0.1:38176.service: Deactivated successfully.
May 13 00:28:33.202963 systemd[1]: session-16.scope: Deactivated successfully.
May 13 00:28:33.206733 systemd-logind[1424]: Session 16 logged out. Waiting for processes to exit.
May 13 00:28:33.216788 systemd[1]: Started sshd@16-10.0.0.91:22-10.0.0.1:38182.service - OpenSSH per-connection server daemon (10.0.0.1:38182).
May 13 00:28:33.217841 systemd-logind[1424]: Removed session 16.
May 13 00:28:33.251192 sshd[5130]: Accepted publickey for core from 10.0.0.1 port 38182 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU
May 13 00:28:33.252739 sshd[5130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:28:33.259584 systemd-logind[1424]: New session 17 of user core.
May 13 00:28:33.270615 systemd[1]: Started session-17.scope - Session 17 of User core.
May 13 00:28:33.426879 sshd[5130]: pam_unix(sshd:session): session closed for user core
May 13 00:28:33.429527 systemd[1]: sshd@16-10.0.0.91:22-10.0.0.1:38182.service: Deactivated successfully.
May 13 00:28:33.431401 systemd[1]: session-17.scope: Deactivated successfully.
May 13 00:28:33.433003 systemd-logind[1424]: Session 17 logged out. Waiting for processes to exit.
May 13 00:28:33.434011 systemd-logind[1424]: Removed session 17.
May 13 00:28:38.443595 systemd[1]: Started sshd@17-10.0.0.91:22-10.0.0.1:38190.service - OpenSSH per-connection server daemon (10.0.0.1:38190).
May 13 00:28:38.485379 sshd[5144]: Accepted publickey for core from 10.0.0.1 port 38190 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU
May 13 00:28:38.486803 sshd[5144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:28:38.490610 systemd-logind[1424]: New session 18 of user core.
May 13 00:28:38.501699 systemd[1]: Started session-18.scope - Session 18 of User core.
May 13 00:28:38.627905 sshd[5144]: pam_unix(sshd:session): session closed for user core
May 13 00:28:38.632756 systemd[1]: sshd@17-10.0.0.91:22-10.0.0.1:38190.service: Deactivated successfully.
May 13 00:28:38.634392 systemd[1]: session-18.scope: Deactivated successfully.
May 13 00:28:38.635211 systemd-logind[1424]: Session 18 logged out. Waiting for processes to exit.
May 13 00:28:38.636000 systemd-logind[1424]: Removed session 18.
May 13 00:28:42.353672 kubelet[2479]: E0513 00:28:42.353593 2479 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:28:43.642425 systemd[1]: Started sshd@18-10.0.0.91:22-10.0.0.1:42442.service - OpenSSH per-connection server daemon (10.0.0.1:42442).
May 13 00:28:43.685717 sshd[5207]: Accepted publickey for core from 10.0.0.1 port 42442 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU
May 13 00:28:43.688451 sshd[5207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:28:43.692108 systemd-logind[1424]: New session 19 of user core.
May 13 00:28:43.707560 systemd[1]: Started session-19.scope - Session 19 of User core.
May 13 00:28:43.852637 sshd[5207]: pam_unix(sshd:session): session closed for user core
May 13 00:28:43.856545 systemd[1]: sshd@18-10.0.0.91:22-10.0.0.1:42442.service: Deactivated successfully.
May 13 00:28:43.860170 systemd[1]: session-19.scope: Deactivated successfully.
May 13 00:28:43.861940 systemd-logind[1424]: Session 19 logged out. Waiting for processes to exit.
May 13 00:28:43.863036 systemd-logind[1424]: Removed session 19.
May 13 00:28:44.160582 containerd[1440]: time="2025-05-13T00:28:44.160543858Z" level=info msg="StopPodSandbox for \"1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c\""
May 13 00:28:44.249277 containerd[1440]: 2025-05-13 00:28:44.204 [WARNING][5236] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--x58pd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4cfd3c97-e6c2-46da-a620-1303e1ac26b6", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 27, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"25a81a7afd956c584b00c6232e697d3ebadb457a0a8fb7205c87a8e0c7ed00ff", Pod:"coredns-668d6bf9bc-x58pd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali517e1838e3f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
May 13 00:28:44.249277 containerd[1440]: 2025-05-13 00:28:44.204 [INFO][5236] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c"
May 13 00:28:44.249277 containerd[1440]: 2025-05-13 00:28:44.204 [INFO][5236] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c" iface="eth0" netns=""
May 13 00:28:44.249277 containerd[1440]: 2025-05-13 00:28:44.204 [INFO][5236] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c"
May 13 00:28:44.249277 containerd[1440]: 2025-05-13 00:28:44.204 [INFO][5236] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c"
May 13 00:28:44.249277 containerd[1440]: 2025-05-13 00:28:44.233 [INFO][5245] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c" HandleID="k8s-pod-network.1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c" Workload="localhost-k8s-coredns--668d6bf9bc--x58pd-eth0"
May 13 00:28:44.249277 containerd[1440]: 2025-05-13 00:28:44.233 [INFO][5245] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 13 00:28:44.249277 containerd[1440]: 2025-05-13 00:28:44.233 [INFO][5245] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 13 00:28:44.249277 containerd[1440]: 2025-05-13 00:28:44.242 [WARNING][5245] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c" HandleID="k8s-pod-network.1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c" Workload="localhost-k8s-coredns--668d6bf9bc--x58pd-eth0"
May 13 00:28:44.249277 containerd[1440]: 2025-05-13 00:28:44.242 [INFO][5245] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c" HandleID="k8s-pod-network.1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c" Workload="localhost-k8s-coredns--668d6bf9bc--x58pd-eth0"
May 13 00:28:44.249277 containerd[1440]: 2025-05-13 00:28:44.244 [INFO][5245] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 13 00:28:44.249277 containerd[1440]: 2025-05-13 00:28:44.247 [INFO][5236] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c"
May 13 00:28:44.249785 containerd[1440]: time="2025-05-13T00:28:44.249282768Z" level=info msg="TearDown network for sandbox \"1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c\" successfully"
May 13 00:28:44.249785 containerd[1440]: time="2025-05-13T00:28:44.249307808Z" level=info msg="StopPodSandbox for \"1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c\" returns successfully"
May 13 00:28:44.249905 containerd[1440]: time="2025-05-13T00:28:44.249824370Z" level=info msg="RemovePodSandbox for \"1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c\""
May 13 00:28:44.260141 containerd[1440]: time="2025-05-13T00:28:44.260093851Z" level=info msg="Forcibly stopping sandbox \"1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c\""
May 13 00:28:44.330156 containerd[1440]: 2025-05-13 00:28:44.294 [WARNING][5267] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--x58pd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4cfd3c97-e6c2-46da-a620-1303e1ac26b6", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 27, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"25a81a7afd956c584b00c6232e697d3ebadb457a0a8fb7205c87a8e0c7ed00ff", Pod:"coredns-668d6bf9bc-x58pd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali517e1838e3f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
May 13 00:28:44.330156 containerd[1440]: 2025-05-13 00:28:44.295 [INFO][5267] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c"
May 13 00:28:44.330156 containerd[1440]: 2025-05-13 00:28:44.295 [INFO][5267] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c" iface="eth0" netns=""
May 13 00:28:44.330156 containerd[1440]: 2025-05-13 00:28:44.295 [INFO][5267] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c"
May 13 00:28:44.330156 containerd[1440]: 2025-05-13 00:28:44.295 [INFO][5267] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c"
May 13 00:28:44.330156 containerd[1440]: 2025-05-13 00:28:44.315 [INFO][5276] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c" HandleID="k8s-pod-network.1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c" Workload="localhost-k8s-coredns--668d6bf9bc--x58pd-eth0"
May 13 00:28:44.330156 containerd[1440]: 2025-05-13 00:28:44.315 [INFO][5276] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 13 00:28:44.330156 containerd[1440]: 2025-05-13 00:28:44.315 [INFO][5276] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 13 00:28:44.330156 containerd[1440]: 2025-05-13 00:28:44.325 [WARNING][5276] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c" HandleID="k8s-pod-network.1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c" Workload="localhost-k8s-coredns--668d6bf9bc--x58pd-eth0"
May 13 00:28:44.330156 containerd[1440]: 2025-05-13 00:28:44.325 [INFO][5276] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c" HandleID="k8s-pod-network.1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c" Workload="localhost-k8s-coredns--668d6bf9bc--x58pd-eth0"
May 13 00:28:44.330156 containerd[1440]: 2025-05-13 00:28:44.326 [INFO][5276] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 13 00:28:44.330156 containerd[1440]: 2025-05-13 00:28:44.328 [INFO][5267] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c"
May 13 00:28:44.330616 containerd[1440]: time="2025-05-13T00:28:44.330193888Z" level=info msg="TearDown network for sandbox \"1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c\" successfully"
May 13 00:28:44.345081 containerd[1440]: time="2025-05-13T00:28:44.345033826Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 13 00:28:44.345219 containerd[1440]: time="2025-05-13T00:28:44.345128667Z" level=info msg="RemovePodSandbox \"1f0e64201ad9e2a1d45f6d722f861f54987cd696cbbfd10654a95273f439963c\" returns successfully"
May 13 00:28:44.345787 containerd[1440]: time="2025-05-13T00:28:44.345525428Z" level=info msg="StopPodSandbox for \"9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4\""
May 13 00:28:44.423876 containerd[1440]: 2025-05-13 00:28:44.386 [WARNING][5298] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--s4q64-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f3494a81-13f5-44da-afd1-f8752f281b7f", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 27, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bf9fb5e06866aaa71781df4bb0020e51399783f9f441e7061728c25317db6434", Pod:"coredns-668d6bf9bc-s4q64", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0bf40bf152d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
May 13 00:28:44.423876 containerd[1440]: 2025-05-13 00:28:44.387 [INFO][5298] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4"
May 13 00:28:44.423876 containerd[1440]: 2025-05-13 00:28:44.387 [INFO][5298] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4" iface="eth0" netns=""
May 13 00:28:44.423876 containerd[1440]: 2025-05-13 00:28:44.387 [INFO][5298] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4"
May 13 00:28:44.423876 containerd[1440]: 2025-05-13 00:28:44.387 [INFO][5298] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4"
May 13 00:28:44.423876 containerd[1440]: 2025-05-13 00:28:44.406 [INFO][5306] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4" HandleID="k8s-pod-network.9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4" Workload="localhost-k8s-coredns--668d6bf9bc--s4q64-eth0"
May 13 00:28:44.423876 containerd[1440]: 2025-05-13 00:28:44.406 [INFO][5306] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 13 00:28:44.423876 containerd[1440]: 2025-05-13 00:28:44.407 [INFO][5306] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 13 00:28:44.423876 containerd[1440]: 2025-05-13 00:28:44.418 [WARNING][5306] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4" HandleID="k8s-pod-network.9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4" Workload="localhost-k8s-coredns--668d6bf9bc--s4q64-eth0"
May 13 00:28:44.423876 containerd[1440]: 2025-05-13 00:28:44.418 [INFO][5306] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4" HandleID="k8s-pod-network.9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4" Workload="localhost-k8s-coredns--668d6bf9bc--s4q64-eth0"
May 13 00:28:44.423876 containerd[1440]: 2025-05-13 00:28:44.420 [INFO][5306] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 13 00:28:44.423876 containerd[1440]: 2025-05-13 00:28:44.422 [INFO][5298] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4"
May 13 00:28:44.423876 containerd[1440]: time="2025-05-13T00:28:44.423819137Z" level=info msg="TearDown network for sandbox \"9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4\" successfully"
May 13 00:28:44.423876 containerd[1440]: time="2025-05-13T00:28:44.423842817Z" level=info msg="StopPodSandbox for \"9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4\" returns successfully"
May 13 00:28:44.424357 containerd[1440]: time="2025-05-13T00:28:44.424256619Z" level=info msg="RemovePodSandbox for \"9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4\""
May 13 00:28:44.424357 containerd[1440]: time="2025-05-13T00:28:44.424285099Z" level=info msg="Forcibly stopping sandbox \"9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4\""
May 13 00:28:44.494640 containerd[1440]: 2025-05-13 00:28:44.461 [WARNING][5329] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--s4q64-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f3494a81-13f5-44da-afd1-f8752f281b7f", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 27, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bf9fb5e06866aaa71781df4bb0020e51399783f9f441e7061728c25317db6434", Pod:"coredns-668d6bf9bc-s4q64", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0bf40bf152d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
May 13 00:28:44.494640 containerd[1440]: 2025-05-13 00:28:44.461 [INFO][5329] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4"
May 13 00:28:44.494640 containerd[1440]: 2025-05-13 00:28:44.461 [INFO][5329] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4" iface="eth0" netns=""
May 13 00:28:44.494640 containerd[1440]: 2025-05-13 00:28:44.461 [INFO][5329] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4"
May 13 00:28:44.494640 containerd[1440]: 2025-05-13 00:28:44.461 [INFO][5329] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4"
May 13 00:28:44.494640 containerd[1440]: 2025-05-13 00:28:44.480 [INFO][5337] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4" HandleID="k8s-pod-network.9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4" Workload="localhost-k8s-coredns--668d6bf9bc--s4q64-eth0"
May 13 00:28:44.494640 containerd[1440]: 2025-05-13 00:28:44.480 [INFO][5337] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 13 00:28:44.494640 containerd[1440]: 2025-05-13 00:28:44.480 [INFO][5337] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 13 00:28:44.494640 containerd[1440]: 2025-05-13 00:28:44.489 [WARNING][5337] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4" HandleID="k8s-pod-network.9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4" Workload="localhost-k8s-coredns--668d6bf9bc--s4q64-eth0"
May 13 00:28:44.494640 containerd[1440]: 2025-05-13 00:28:44.489 [INFO][5337] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4" HandleID="k8s-pod-network.9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4" Workload="localhost-k8s-coredns--668d6bf9bc--s4q64-eth0"
May 13 00:28:44.494640 containerd[1440]: 2025-05-13 00:28:44.490 [INFO][5337] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 13 00:28:44.494640 containerd[1440]: 2025-05-13 00:28:44.492 [INFO][5329] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4"
May 13 00:28:44.495162 containerd[1440]: time="2025-05-13T00:28:44.494669977Z" level=info msg="TearDown network for sandbox \"9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4\" successfully"
May 13 00:28:44.497343 containerd[1440]: time="2025-05-13T00:28:44.497311587Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 13 00:28:44.497391 containerd[1440]: time="2025-05-13T00:28:44.497369748Z" level=info msg="RemovePodSandbox \"9e3822d669c14840011bfd6169a1ab907b8f55efa85f063ee037f82a863520a4\" returns successfully"
May 13 00:28:44.497866 containerd[1440]: time="2025-05-13T00:28:44.497842229Z" level=info msg="StopPodSandbox for \"c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345\""
May 13 00:28:44.579152 containerd[1440]: 2025-05-13 00:28:44.534 [WARNING][5360] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--56cdf655d5--cvxnm-eth0", GenerateName:"calico-kube-controllers-56cdf655d5-", Namespace:"calico-system", SelfLink:"", UID:"66dabaa2-e4d2-4e87-bf56-94c73a7429d1", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 27, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56cdf655d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5b0e4277b05660462de010091c6571092375451c24055544bcd31f4b95fd8d20", Pod:"calico-kube-controllers-56cdf655d5-cvxnm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif4ade593a67", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 13 00:28:44.579152 containerd[1440]: 2025-05-13 00:28:44.534 [INFO][5360] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345"
May 13 00:28:44.579152 containerd[1440]: 2025-05-13 00:28:44.534 [INFO][5360] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345" iface="eth0" netns=""
May 13 00:28:44.579152 containerd[1440]: 2025-05-13 00:28:44.534 [INFO][5360] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345"
May 13 00:28:44.579152 containerd[1440]: 2025-05-13 00:28:44.534 [INFO][5360] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345"
May 13 00:28:44.579152 containerd[1440]: 2025-05-13 00:28:44.564 [INFO][5368] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345" HandleID="k8s-pod-network.c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345" Workload="localhost-k8s-calico--kube--controllers--56cdf655d5--cvxnm-eth0"
May 13 00:28:44.579152 containerd[1440]: 2025-05-13 00:28:44.564 [INFO][5368] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 13 00:28:44.579152 containerd[1440]: 2025-05-13 00:28:44.564 [INFO][5368] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 13 00:28:44.579152 containerd[1440]: 2025-05-13 00:28:44.573 [WARNING][5368] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345" HandleID="k8s-pod-network.c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345" Workload="localhost-k8s-calico--kube--controllers--56cdf655d5--cvxnm-eth0"
May 13 00:28:44.579152 containerd[1440]: 2025-05-13 00:28:44.573 [INFO][5368] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345" HandleID="k8s-pod-network.c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345" Workload="localhost-k8s-calico--kube--controllers--56cdf655d5--cvxnm-eth0"
May 13 00:28:44.579152 containerd[1440]: 2025-05-13 00:28:44.575 [INFO][5368] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 13 00:28:44.579152 containerd[1440]: 2025-05-13 00:28:44.577 [INFO][5360] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345"
May 13 00:28:44.579152 containerd[1440]: time="2025-05-13T00:28:44.579044230Z" level=info msg="TearDown network for sandbox \"c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345\" successfully"
May 13 00:28:44.579152 containerd[1440]: time="2025-05-13T00:28:44.579068390Z" level=info msg="StopPodSandbox for \"c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345\" returns successfully"
May 13 00:28:44.579688 containerd[1440]: time="2025-05-13T00:28:44.579602792Z" level=info msg="RemovePodSandbox for \"c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345\""
May 13 00:28:44.579688 containerd[1440]: time="2025-05-13T00:28:44.579643272Z" level=info msg="Forcibly stopping sandbox \"c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345\""
May 13 00:28:44.648735 containerd[1440]: 2025-05-13 00:28:44.615 [WARNING][5391] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--56cdf655d5--cvxnm-eth0", GenerateName:"calico-kube-controllers-56cdf655d5-", Namespace:"calico-system", SelfLink:"", UID:"66dabaa2-e4d2-4e87-bf56-94c73a7429d1", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 27, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56cdf655d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5b0e4277b05660462de010091c6571092375451c24055544bcd31f4b95fd8d20", Pod:"calico-kube-controllers-56cdf655d5-cvxnm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif4ade593a67", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 13 00:28:44.648735 containerd[1440]: 2025-05-13 00:28:44.615 [INFO][5391] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345"
May 13 00:28:44.648735 containerd[1440]: 2025-05-13 00:28:44.615 [INFO][5391] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345" iface="eth0" netns=""
May 13 00:28:44.648735 containerd[1440]: 2025-05-13 00:28:44.615 [INFO][5391] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345"
May 13 00:28:44.648735 containerd[1440]: 2025-05-13 00:28:44.615 [INFO][5391] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345"
May 13 00:28:44.648735 containerd[1440]: 2025-05-13 00:28:44.635 [INFO][5400] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345" HandleID="k8s-pod-network.c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345" Workload="localhost-k8s-calico--kube--controllers--56cdf655d5--cvxnm-eth0"
May 13 00:28:44.648735 containerd[1440]: 2025-05-13 00:28:44.635 [INFO][5400] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 13 00:28:44.648735 containerd[1440]: 2025-05-13 00:28:44.635 [INFO][5400] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 13 00:28:44.648735 containerd[1440]: 2025-05-13 00:28:44.643 [WARNING][5400] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345" HandleID="k8s-pod-network.c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345" Workload="localhost-k8s-calico--kube--controllers--56cdf655d5--cvxnm-eth0"
May 13 00:28:44.648735 containerd[1440]: 2025-05-13 00:28:44.643 [INFO][5400] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345" HandleID="k8s-pod-network.c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345" Workload="localhost-k8s-calico--kube--controllers--56cdf655d5--cvxnm-eth0"
May 13 00:28:44.648735 containerd[1440]: 2025-05-13 00:28:44.645 [INFO][5400] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 13 00:28:44.648735 containerd[1440]: 2025-05-13 00:28:44.646 [INFO][5391] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345"
May 13 00:28:44.649197 containerd[1440]: time="2025-05-13T00:28:44.648771145Z" level=info msg="TearDown network for sandbox \"c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345\" successfully"
May 13 00:28:44.652328 containerd[1440]: time="2025-05-13T00:28:44.652285519Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 13 00:28:44.652439 containerd[1440]: time="2025-05-13T00:28:44.652346199Z" level=info msg="RemovePodSandbox \"c372738721cbd010f27120abce9c5d8e5f8a54b18da3b4cfbff7bf4253565345\" returns successfully"
May 13 00:28:44.652862 containerd[1440]: time="2025-05-13T00:28:44.652833921Z" level=info msg="StopPodSandbox for \"59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e\""
May 13 00:28:44.726742 containerd[1440]: 2025-05-13 00:28:44.690 [WARNING][5422] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--74cfd5766c--2dwxc-eth0", GenerateName:"calico-apiserver-74cfd5766c-", Namespace:"calico-apiserver", SelfLink:"", UID:"6a95ccfc-05c7-4192-9854-3fc518d2c335", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 27, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74cfd5766c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"17dce839f4c7ead795d1cc4e0881c6a6b35aaa092a6058e9f354df6f09c9a166", Pod:"calico-apiserver-74cfd5766c-2dwxc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0a4ba22c273", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 13 00:28:44.726742 containerd[1440]: 2025-05-13 00:28:44.690 [INFO][5422] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e"
May 13 00:28:44.726742 containerd[1440]: 2025-05-13 00:28:44.690 [INFO][5422] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e" iface="eth0" netns=""
May 13 00:28:44.726742 containerd[1440]: 2025-05-13 00:28:44.690 [INFO][5422] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e"
May 13 00:28:44.726742 containerd[1440]: 2025-05-13 00:28:44.690 [INFO][5422] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e"
May 13 00:28:44.726742 containerd[1440]: 2025-05-13 00:28:44.711 [INFO][5430] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e" HandleID="k8s-pod-network.59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e" Workload="localhost-k8s-calico--apiserver--74cfd5766c--2dwxc-eth0"
May 13 00:28:44.726742 containerd[1440]: 2025-05-13 00:28:44.711 [INFO][5430] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 13 00:28:44.726742 containerd[1440]: 2025-05-13 00:28:44.711 [INFO][5430] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 13 00:28:44.726742 containerd[1440]: 2025-05-13 00:28:44.720 [WARNING][5430] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e" HandleID="k8s-pod-network.59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e" Workload="localhost-k8s-calico--apiserver--74cfd5766c--2dwxc-eth0"
May 13 00:28:44.726742 containerd[1440]: 2025-05-13 00:28:44.720 [INFO][5430] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e" HandleID="k8s-pod-network.59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e" Workload="localhost-k8s-calico--apiserver--74cfd5766c--2dwxc-eth0"
May 13 00:28:44.726742 containerd[1440]: 2025-05-13 00:28:44.722 [INFO][5430] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 13 00:28:44.726742 containerd[1440]: 2025-05-13 00:28:44.724 [INFO][5422] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e"
May 13 00:28:44.726742 containerd[1440]: time="2025-05-13T00:28:44.726715453Z" level=info msg="TearDown network for sandbox \"59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e\" successfully"
May 13 00:28:44.726742 containerd[1440]: time="2025-05-13T00:28:44.726740933Z" level=info msg="StopPodSandbox for \"59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e\" returns successfully"
May 13 00:28:44.727465 containerd[1440]: time="2025-05-13T00:28:44.727233575Z" level=info msg="RemovePodSandbox for \"59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e\""
May 13 00:28:44.727465 containerd[1440]: time="2025-05-13T00:28:44.727262295Z" level=info msg="Forcibly stopping sandbox \"59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e\""
May 13 00:28:44.800625 containerd[1440]: 2025-05-13 00:28:44.767 [WARNING][5452] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--74cfd5766c--2dwxc-eth0", GenerateName:"calico-apiserver-74cfd5766c-", Namespace:"calico-apiserver", SelfLink:"", UID:"6a95ccfc-05c7-4192-9854-3fc518d2c335", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 27, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74cfd5766c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"17dce839f4c7ead795d1cc4e0881c6a6b35aaa092a6058e9f354df6f09c9a166", Pod:"calico-apiserver-74cfd5766c-2dwxc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0a4ba22c273", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 13 00:28:44.800625 containerd[1440]: 2025-05-13 00:28:44.767 [INFO][5452] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e"
May 13 00:28:44.800625 containerd[1440]: 2025-05-13 00:28:44.767 [INFO][5452] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e" iface="eth0" netns=""
May 13 00:28:44.800625 containerd[1440]: 2025-05-13 00:28:44.767 [INFO][5452] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e"
May 13 00:28:44.800625 containerd[1440]: 2025-05-13 00:28:44.767 [INFO][5452] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e"
May 13 00:28:44.800625 containerd[1440]: 2025-05-13 00:28:44.786 [INFO][5461] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e" HandleID="k8s-pod-network.59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e" Workload="localhost-k8s-calico--apiserver--74cfd5766c--2dwxc-eth0"
May 13 00:28:44.800625 containerd[1440]: 2025-05-13 00:28:44.787 [INFO][5461] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 13 00:28:44.800625 containerd[1440]: 2025-05-13 00:28:44.787 [INFO][5461] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 13 00:28:44.800625 containerd[1440]: 2025-05-13 00:28:44.795 [WARNING][5461] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e" HandleID="k8s-pod-network.59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e" Workload="localhost-k8s-calico--apiserver--74cfd5766c--2dwxc-eth0"
May 13 00:28:44.800625 containerd[1440]: 2025-05-13 00:28:44.795 [INFO][5461] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e" HandleID="k8s-pod-network.59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e" Workload="localhost-k8s-calico--apiserver--74cfd5766c--2dwxc-eth0"
May 13 00:28:44.800625 containerd[1440]: 2025-05-13 00:28:44.797 [INFO][5461] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 13 00:28:44.800625 containerd[1440]: 2025-05-13 00:28:44.798 [INFO][5452] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e"
May 13 00:28:44.800625 containerd[1440]: time="2025-05-13T00:28:44.800533305Z" level=info msg="TearDown network for sandbox \"59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e\" successfully"
May 13 00:28:44.813196 containerd[1440]: time="2025-05-13T00:28:44.813148314Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 13 00:28:44.813591 containerd[1440]: time="2025-05-13T00:28:44.813223475Z" level=info msg="RemovePodSandbox \"59b39242ff9ade1cd6271c52f7745229fb31d057f5ebe1fa40889b9071b64f2e\" returns successfully"
May 13 00:28:44.813787 containerd[1440]: time="2025-05-13T00:28:44.813723117Z" level=info msg="StopPodSandbox for \"754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86\""
May 13 00:28:44.883236 containerd[1440]: 2025-05-13 00:28:44.850 [WARNING][5484] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bgptb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5e18653e-7d13-4d2f-8b0d-991f11e13bcd", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 27, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ff419a29b054aa4a565b46c8c9ac387f946ee244b963e0f429571fd985b9efa1", Pod:"csi-node-driver-bgptb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali804dc27fbc7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 13 00:28:44.883236 containerd[1440]: 2025-05-13 00:28:44.850 [INFO][5484] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86"
May 13 00:28:44.883236 containerd[1440]: 2025-05-13 00:28:44.850 [INFO][5484] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86" iface="eth0" netns=""
May 13 00:28:44.883236 containerd[1440]: 2025-05-13 00:28:44.850 [INFO][5484] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86"
May 13 00:28:44.883236 containerd[1440]: 2025-05-13 00:28:44.850 [INFO][5484] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86"
May 13 00:28:44.883236 containerd[1440]: 2025-05-13 00:28:44.869 [INFO][5492] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86" HandleID="k8s-pod-network.754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86" Workload="localhost-k8s-csi--node--driver--bgptb-eth0"
May 13 00:28:44.883236 containerd[1440]: 2025-05-13 00:28:44.869 [INFO][5492] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 13 00:28:44.883236 containerd[1440]: 2025-05-13 00:28:44.869 [INFO][5492] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 13 00:28:44.883236 containerd[1440]: 2025-05-13 00:28:44.878 [WARNING][5492] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86" HandleID="k8s-pod-network.754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86" Workload="localhost-k8s-csi--node--driver--bgptb-eth0"
May 13 00:28:44.883236 containerd[1440]: 2025-05-13 00:28:44.878 [INFO][5492] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86" HandleID="k8s-pod-network.754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86" Workload="localhost-k8s-csi--node--driver--bgptb-eth0"
May 13 00:28:44.883236 containerd[1440]: 2025-05-13 00:28:44.879 [INFO][5492] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 13 00:28:44.883236 containerd[1440]: 2025-05-13 00:28:44.881 [INFO][5484] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86"
May 13 00:28:44.883909 containerd[1440]: time="2025-05-13T00:28:44.883272231Z" level=info msg="TearDown network for sandbox \"754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86\" successfully"
May 13 00:28:44.883909 containerd[1440]: time="2025-05-13T00:28:44.883295751Z" level=info msg="StopPodSandbox for \"754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86\" returns successfully"
May 13 00:28:44.883909 containerd[1440]: time="2025-05-13T00:28:44.883759753Z" level=info msg="RemovePodSandbox for \"754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86\""
May 13 00:28:44.883909 containerd[1440]: time="2025-05-13T00:28:44.883790673Z" level=info msg="Forcibly stopping sandbox \"754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86\""
May 13 00:28:44.954170 containerd[1440]: 2025-05-13 00:28:44.921 [WARNING][5515] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bgptb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5e18653e-7d13-4d2f-8b0d-991f11e13bcd", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 27, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ff419a29b054aa4a565b46c8c9ac387f946ee244b963e0f429571fd985b9efa1", Pod:"csi-node-driver-bgptb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali804dc27fbc7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 13 00:28:44.954170 containerd[1440]: 2025-05-13 00:28:44.921 [INFO][5515] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86"
May 13 00:28:44.954170 containerd[1440]: 2025-05-13 00:28:44.921 [INFO][5515] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86" iface="eth0" netns=""
May 13 00:28:44.954170 containerd[1440]: 2025-05-13 00:28:44.921 [INFO][5515] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86"
May 13 00:28:44.954170 containerd[1440]: 2025-05-13 00:28:44.921 [INFO][5515] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86"
May 13 00:28:44.954170 containerd[1440]: 2025-05-13 00:28:44.940 [INFO][5523] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86" HandleID="k8s-pod-network.754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86" Workload="localhost-k8s-csi--node--driver--bgptb-eth0"
May 13 00:28:44.954170 containerd[1440]: 2025-05-13 00:28:44.940 [INFO][5523] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 13 00:28:44.954170 containerd[1440]: 2025-05-13 00:28:44.940 [INFO][5523] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 13 00:28:44.954170 containerd[1440]: 2025-05-13 00:28:44.948 [WARNING][5523] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86" HandleID="k8s-pod-network.754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86" Workload="localhost-k8s-csi--node--driver--bgptb-eth0"
May 13 00:28:44.954170 containerd[1440]: 2025-05-13 00:28:44.948 [INFO][5523] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86" HandleID="k8s-pod-network.754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86" Workload="localhost-k8s-csi--node--driver--bgptb-eth0"
May 13 00:28:44.954170 containerd[1440]: 2025-05-13 00:28:44.950 [INFO][5523] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 13 00:28:44.954170 containerd[1440]: 2025-05-13 00:28:44.952 [INFO][5515] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86"
May 13 00:28:44.954609 containerd[1440]: time="2025-05-13T00:28:44.954203831Z" level=info msg="TearDown network for sandbox \"754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86\" successfully"
May 13 00:28:44.956785 containerd[1440]: time="2025-05-13T00:28:44.956755641Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 13 00:28:44.956857 containerd[1440]: time="2025-05-13T00:28:44.956811562Z" level=info msg="RemovePodSandbox \"754657b2534859abb846aa87175ccb5245099e2069a4f7380fefbaa6e94a0b86\" returns successfully"
May 13 00:28:44.957302 containerd[1440]: time="2025-05-13T00:28:44.957274163Z" level=info msg="StopPodSandbox for \"70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8\""
May 13 00:28:45.029291 containerd[1440]: 2025-05-13 00:28:44.994 [WARNING][5546] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--74cfd5766c--hkfcg-eth0", GenerateName:"calico-apiserver-74cfd5766c-", Namespace:"calico-apiserver", SelfLink:"", UID:"3294737e-8c5e-4292-96da-dff74f2e17e9", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 27, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74cfd5766c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2c3cae66b9fb28d250178242baa3d7a5c2fe9629596b4a27139dd855d24dc404", Pod:"calico-apiserver-74cfd5766c-hkfcg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic77a10915b6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 13 00:28:45.029291 containerd[1440]: 2025-05-13 00:28:44.994 [INFO][5546] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8"
May 13 00:28:45.029291 containerd[1440]: 2025-05-13 00:28:44.994 [INFO][5546] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8" iface="eth0" netns=""
May 13 00:28:45.029291 containerd[1440]: 2025-05-13 00:28:44.994 [INFO][5546] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8"
May 13 00:28:45.029291 containerd[1440]: 2025-05-13 00:28:44.994 [INFO][5546] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8"
May 13 00:28:45.029291 containerd[1440]: 2025-05-13 00:28:45.013 [INFO][5554] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8" HandleID="k8s-pod-network.70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8" Workload="localhost-k8s-calico--apiserver--74cfd5766c--hkfcg-eth0"
May 13 00:28:45.029291 containerd[1440]: 2025-05-13 00:28:45.013 [INFO][5554] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 13 00:28:45.029291 containerd[1440]: 2025-05-13 00:28:45.013 [INFO][5554] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 13 00:28:45.029291 containerd[1440]: 2025-05-13 00:28:45.023 [WARNING][5554] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist.
Ignoring ContainerID="70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8" HandleID="k8s-pod-network.70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8" Workload="localhost-k8s-calico--apiserver--74cfd5766c--hkfcg-eth0" May 13 00:28:45.029291 containerd[1440]: 2025-05-13 00:28:45.023 [INFO][5554] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8" HandleID="k8s-pod-network.70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8" Workload="localhost-k8s-calico--apiserver--74cfd5766c--hkfcg-eth0" May 13 00:28:45.029291 containerd[1440]: 2025-05-13 00:28:45.025 [INFO][5554] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:28:45.029291 containerd[1440]: 2025-05-13 00:28:45.027 [INFO][5546] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8" May 13 00:28:45.029291 containerd[1440]: time="2025-05-13T00:28:45.029270527Z" level=info msg="TearDown network for sandbox \"70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8\" successfully" May 13 00:28:45.029291 containerd[1440]: time="2025-05-13T00:28:45.029296207Z" level=info msg="StopPodSandbox for \"70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8\" returns successfully" May 13 00:28:45.030396 containerd[1440]: time="2025-05-13T00:28:45.029772528Z" level=info msg="RemovePodSandbox for \"70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8\"" May 13 00:28:45.030396 containerd[1440]: time="2025-05-13T00:28:45.029804489Z" level=info msg="Forcibly stopping sandbox \"70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8\"" May 13 00:28:45.098702 containerd[1440]: 2025-05-13 00:28:45.066 [WARNING][5577] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--74cfd5766c--hkfcg-eth0", GenerateName:"calico-apiserver-74cfd5766c-", Namespace:"calico-apiserver", SelfLink:"", UID:"3294737e-8c5e-4292-96da-dff74f2e17e9", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 27, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74cfd5766c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2c3cae66b9fb28d250178242baa3d7a5c2fe9629596b4a27139dd855d24dc404", Pod:"calico-apiserver-74cfd5766c-hkfcg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic77a10915b6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:28:45.098702 containerd[1440]: 2025-05-13 00:28:45.066 [INFO][5577] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8" May 13 00:28:45.098702 containerd[1440]: 2025-05-13 00:28:45.066 [INFO][5577] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8" iface="eth0" netns="" May 13 00:28:45.098702 containerd[1440]: 2025-05-13 00:28:45.066 [INFO][5577] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8" May 13 00:28:45.098702 containerd[1440]: 2025-05-13 00:28:45.066 [INFO][5577] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8" May 13 00:28:45.098702 containerd[1440]: 2025-05-13 00:28:45.085 [INFO][5585] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8" HandleID="k8s-pod-network.70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8" Workload="localhost-k8s-calico--apiserver--74cfd5766c--hkfcg-eth0" May 13 00:28:45.098702 containerd[1440]: 2025-05-13 00:28:45.085 [INFO][5585] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:28:45.098702 containerd[1440]: 2025-05-13 00:28:45.085 [INFO][5585] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:28:45.098702 containerd[1440]: 2025-05-13 00:28:45.093 [WARNING][5585] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8" HandleID="k8s-pod-network.70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8" Workload="localhost-k8s-calico--apiserver--74cfd5766c--hkfcg-eth0" May 13 00:28:45.098702 containerd[1440]: 2025-05-13 00:28:45.093 [INFO][5585] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8" HandleID="k8s-pod-network.70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8" Workload="localhost-k8s-calico--apiserver--74cfd5766c--hkfcg-eth0" May 13 00:28:45.098702 containerd[1440]: 2025-05-13 00:28:45.095 [INFO][5585] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:28:45.098702 containerd[1440]: 2025-05-13 00:28:45.096 [INFO][5577] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8" May 13 00:28:45.099128 containerd[1440]: time="2025-05-13T00:28:45.098838719Z" level=info msg="TearDown network for sandbox \"70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8\" successfully" May 13 00:28:45.105573 containerd[1440]: time="2025-05-13T00:28:45.105533025Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 13 00:28:45.105665 containerd[1440]: time="2025-05-13T00:28:45.105600105Z" level=info msg="RemovePodSandbox \"70409843bb899d004e8d9c52f3838eac653a6e93a9c0175d950bec06bdcf52c8\" returns successfully" May 13 00:28:48.863711 systemd[1]: Started sshd@19-10.0.0.91:22-10.0.0.1:42446.service - OpenSSH per-connection server daemon (10.0.0.1:42446). May 13 00:28:48.909510 sshd[5594]: Accepted publickey for core from 10.0.0.1 port 42446 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:28:48.910899 sshd[5594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:28:48.915192 systemd-logind[1424]: New session 20 of user core. May 13 00:28:48.924577 systemd[1]: Started session-20.scope - Session 20 of User core. May 13 00:28:49.063759 sshd[5594]: pam_unix(sshd:session): session closed for user core May 13 00:28:49.068744 systemd[1]: sshd@19-10.0.0.91:22-10.0.0.1:42446.service: Deactivated successfully. May 13 00:28:49.070519 systemd[1]: session-20.scope: Deactivated successfully. May 13 00:28:49.073053 systemd-logind[1424]: Session 20 logged out. Waiting for processes to exit. May 13 00:28:49.074635 systemd-logind[1424]: Removed session 20. May 13 00:28:54.077306 systemd[1]: Started sshd@20-10.0.0.91:22-10.0.0.1:43032.service - OpenSSH per-connection server daemon (10.0.0.1:43032). May 13 00:28:54.119872 sshd[5616]: Accepted publickey for core from 10.0.0.1 port 43032 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:28:54.121270 sshd[5616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:28:54.126167 systemd-logind[1424]: New session 21 of user core. May 13 00:28:54.138894 systemd[1]: Started session-21.scope - Session 21 of User core. May 13 00:28:54.282993 sshd[5616]: pam_unix(sshd:session): session closed for user core May 13 00:28:54.285908 systemd[1]: sshd@20-10.0.0.91:22-10.0.0.1:43032.service: Deactivated successfully. May 13 00:28:54.287561 systemd[1]: session-21.scope: Deactivated successfully. 
May 13 00:28:54.291241 systemd-logind[1424]: Session 21 logged out. Waiting for processes to exit.
May 13 00:28:54.292766 systemd-logind[1424]: Removed session 21.
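
Editor's note: the Calico ipam_plugin.go entries above repeat one teardown pattern per sandbox: acquire the host-wide IPAM lock, try to release the address by handle ID, and, when no allocation is recorded there ("Asked to release address but it doesn't exist. Ignoring..."), fall back to releasing by workload ID before dropping the lock. The following is a minimal self-contained Go sketch of that fallback flow; the allocator type, its maps, and all identifiers are hypothetical stand-ins for illustration, not Calico's actual ipam_plugin.go API.

package main

import (
	"fmt"
	"sync"
)

// allocator is a toy stand-in for the host-wide IPAM state seen in the log.
type allocator struct {
	mu         sync.Mutex        // the "host-wide IPAM lock" in the entries above
	byHandle   map[string]string // handle ID -> allocated address
	byWorkload map[string]string // workload ID -> allocated address
}

// releaseForContainer mirrors the fallback the log traces: release by
// handle ID first; if nothing is recorded under that handle, ignore the
// miss and retry by workload ID, all under the host-wide lock.
func (a *allocator) releaseForContainer(handleID, workloadID string) {
	a.mu.Lock()         // "About to acquire host-wide IPAM lock." / "Acquired host-wide IPAM lock."
	defer a.mu.Unlock() // "Released host-wide IPAM lock."

	if addr, ok := a.byHandle[handleID]; ok {
		delete(a.byHandle, handleID)
		fmt.Printf("released %s via handle %s\n", addr, handleID)
		return
	}
	fmt.Printf("no allocation for handle %s; falling back to workload ID\n", handleID)
	if addr, ok := a.byWorkload[workloadID]; ok {
		delete(a.byWorkload, workloadID)
		fmt.Printf("released %s via workload %s\n", addr, workloadID)
	}
}

func main() {
	a := &allocator{
		byHandle:   map[string]string{},
		byWorkload: map[string]string{"csi-node-driver-bgptb": "192.168.88.132"},
	}
	// A repeated DEL for an already-torn-down sandbox: the handle lookup
	// misses, the workload lookup cleans up the stale entry, and teardown
	// still completes successfully, matching the sequence in the log.
	a.releaseForContainer("k8s-pod-network.example", "csi-node-driver-bgptb")
}

This also explains why the "Asked to release address but it doesn't exist" lines are only warnings: a second forcible StopPodSandbox for a sandbox whose allocation is already gone is expected to succeed rather than fail.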