May 8 00:43:38.905091 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 8 00:43:38.905112 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed May 7 22:57:52 -00 2025
May 8 00:43:38.905121 kernel: KASLR enabled
May 8 00:43:38.905127 kernel: efi: EFI v2.7 by EDK II
May 8 00:43:38.905132 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
May 8 00:43:38.905138 kernel: random: crng init done
May 8 00:43:38.905145 kernel: ACPI: Early table checksum verification disabled
May 8 00:43:38.905151 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
May 8 00:43:38.905157 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
May 8 00:43:38.905164 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:43:38.905170 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:43:38.905176 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:43:38.905182 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:43:38.905188 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:43:38.905196 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:43:38.905203 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:43:38.905210 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:43:38.905216 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:43:38.905222 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 8 00:43:38.905228 kernel: NUMA: Failed to initialise from firmware
May 8 00:43:38.905235 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 8 00:43:38.905241 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
May 8 00:43:38.905247 kernel: Zone ranges:
May 8 00:43:38.905253 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 8 00:43:38.905259 kernel: DMA32 empty
May 8 00:43:38.905266 kernel: Normal empty
May 8 00:43:38.905273 kernel: Movable zone start for each node
May 8 00:43:38.905279 kernel: Early memory node ranges
May 8 00:43:38.905285 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
May 8 00:43:38.905291 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
May 8 00:43:38.905298 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
May 8 00:43:38.905304 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 8 00:43:38.905310 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 8 00:43:38.905316 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 8 00:43:38.905322 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 8 00:43:38.905329 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 8 00:43:38.905335 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 8 00:43:38.905342 kernel: psci: probing for conduit method from ACPI.
May 8 00:43:38.905348 kernel: psci: PSCIv1.1 detected in firmware.
May 8 00:43:38.905355 kernel: psci: Using standard PSCI v0.2 function IDs
May 8 00:43:38.905363 kernel: psci: Trusted OS migration not required
May 8 00:43:38.905370 kernel: psci: SMC Calling Convention v1.1
May 8 00:43:38.905377 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 8 00:43:38.905385 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
May 8 00:43:38.905392 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
May 8 00:43:38.905398 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 8 00:43:38.905405 kernel: Detected PIPT I-cache on CPU0
May 8 00:43:38.905412 kernel: CPU features: detected: GIC system register CPU interface
May 8 00:43:38.905418 kernel: CPU features: detected: Hardware dirty bit management
May 8 00:43:38.905425 kernel: CPU features: detected: Spectre-v4
May 8 00:43:38.905431 kernel: CPU features: detected: Spectre-BHB
May 8 00:43:38.905438 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 8 00:43:38.905445 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 8 00:43:38.905472 kernel: CPU features: detected: ARM erratum 1418040
May 8 00:43:38.905480 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 8 00:43:38.905486 kernel: alternatives: applying boot alternatives
May 8 00:43:38.905494 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=ed66668e4cab2597a697b6f83cdcbc6a64a98dbc7e2125304191704297c07daf
May 8 00:43:38.905501 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 8 00:43:38.905508 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 8 00:43:38.905515 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 8 00:43:38.905521 kernel: Fallback order for Node 0: 0
May 8 00:43:38.905528 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 8 00:43:38.905535 kernel: Policy zone: DMA
May 8 00:43:38.905541 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 8 00:43:38.905550 kernel: software IO TLB: area num 4.
May 8 00:43:38.905557 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
May 8 00:43:38.905564 kernel: Memory: 2386468K/2572288K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 185820K reserved, 0K cma-reserved)
May 8 00:43:38.905570 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 8 00:43:38.905577 kernel: rcu: Preemptible hierarchical RCU implementation.
May 8 00:43:38.905584 kernel: rcu: RCU event tracing is enabled.
May 8 00:43:38.905591 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 8 00:43:38.905598 kernel: Trampoline variant of Tasks RCU enabled.
May 8 00:43:38.905605 kernel: Tracing variant of Tasks RCU enabled.
May 8 00:43:38.905611 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 8 00:43:38.905618 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 8 00:43:38.905625 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 8 00:43:38.905632 kernel: GICv3: 256 SPIs implemented
May 8 00:43:38.905639 kernel: GICv3: 0 Extended SPIs implemented
May 8 00:43:38.905646 kernel: Root IRQ handler: gic_handle_irq
May 8 00:43:38.905652 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 8 00:43:38.905659 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 8 00:43:38.905666 kernel: ITS [mem 0x08080000-0x0809ffff]
May 8 00:43:38.905672 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
May 8 00:43:38.905679 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
May 8 00:43:38.905686 kernel: GICv3: using LPI property table @0x00000000400f0000
May 8 00:43:38.905692 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
May 8 00:43:38.905706 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 8 00:43:38.905714 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:43:38.905721 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 8 00:43:38.905728 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 8 00:43:38.905735 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 8 00:43:38.905742 kernel: arm-pv: using stolen time PV
May 8 00:43:38.905749 kernel: Console: colour dummy device 80x25
May 8 00:43:38.905755 kernel: ACPI: Core revision 20230628
May 8 00:43:38.905763 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 8 00:43:38.905769 kernel: pid_max: default: 32768 minimum: 301
May 8 00:43:38.905776 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 8 00:43:38.905784 kernel: landlock: Up and running.
May 8 00:43:38.905791 kernel: SELinux: Initializing.
May 8 00:43:38.905798 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 8 00:43:38.905805 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 8 00:43:38.905812 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 8 00:43:38.905818 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 8 00:43:38.905825 kernel: rcu: Hierarchical SRCU implementation.
May 8 00:43:38.905832 kernel: rcu: Max phase no-delay instances is 400.
May 8 00:43:38.905839 kernel: Platform MSI: ITS@0x8080000 domain created
May 8 00:43:38.905847 kernel: PCI/MSI: ITS@0x8080000 domain created
May 8 00:43:38.905853 kernel: Remapping and enabling EFI services.
May 8 00:43:38.905860 kernel: smp: Bringing up secondary CPUs ...
May 8 00:43:38.905867 kernel: Detected PIPT I-cache on CPU1
May 8 00:43:38.905874 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 8 00:43:38.905881 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
May 8 00:43:38.905887 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:43:38.905894 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 8 00:43:38.905901 kernel: Detected PIPT I-cache on CPU2
May 8 00:43:38.905908 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 8 00:43:38.905916 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
May 8 00:43:38.905923 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:43:38.905934 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 8 00:43:38.905942 kernel: Detected PIPT I-cache on CPU3
May 8 00:43:38.905949 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 8 00:43:38.905957 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
May 8 00:43:38.905964 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:43:38.905971 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 8 00:43:38.905978 kernel: smp: Brought up 1 node, 4 CPUs
May 8 00:43:38.905986 kernel: SMP: Total of 4 processors activated.
May 8 00:43:38.905993 kernel: CPU features: detected: 32-bit EL0 Support
May 8 00:43:38.906000 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 8 00:43:38.906008 kernel: CPU features: detected: Common not Private translations
May 8 00:43:38.906015 kernel: CPU features: detected: CRC32 instructions
May 8 00:43:38.906022 kernel: CPU features: detected: Enhanced Virtualization Traps
May 8 00:43:38.906029 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 8 00:43:38.906036 kernel: CPU features: detected: LSE atomic instructions
May 8 00:43:38.906044 kernel: CPU features: detected: Privileged Access Never
May 8 00:43:38.906051 kernel: CPU features: detected: RAS Extension Support
May 8 00:43:38.906058 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 8 00:43:38.906066 kernel: CPU: All CPU(s) started at EL1
May 8 00:43:38.906073 kernel: alternatives: applying system-wide alternatives
May 8 00:43:38.906080 kernel: devtmpfs: initialized
May 8 00:43:38.906087 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 8 00:43:38.906094 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 8 00:43:38.906102 kernel: pinctrl core: initialized pinctrl subsystem
May 8 00:43:38.906110 kernel: SMBIOS 3.0.0 present.
May 8 00:43:38.906117 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
May 8 00:43:38.906125 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 8 00:43:38.906132 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 8 00:43:38.906139 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 8 00:43:38.906146 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 8 00:43:38.906154 kernel: audit: initializing netlink subsys (disabled)
May 8 00:43:38.906161 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
May 8 00:43:38.906168 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 8 00:43:38.906177 kernel: cpuidle: using governor menu
May 8 00:43:38.906184 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 8 00:43:38.906191 kernel: ASID allocator initialised with 32768 entries
May 8 00:43:38.906198 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 8 00:43:38.906205 kernel: Serial: AMBA PL011 UART driver
May 8 00:43:38.906213 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 8 00:43:38.906220 kernel: Modules: 0 pages in range for non-PLT usage
May 8 00:43:38.906227 kernel: Modules: 509024 pages in range for PLT usage
May 8 00:43:38.906234 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 8 00:43:38.906242 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 8 00:43:38.906250 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 8 00:43:38.906257 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 8 00:43:38.906264 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 8 00:43:38.906271 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 8 00:43:38.906278 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 8 00:43:38.906286 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 8 00:43:38.906293 kernel: ACPI: Added _OSI(Module Device)
May 8 00:43:38.906300 kernel: ACPI: Added _OSI(Processor Device)
May 8 00:43:38.906309 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 8 00:43:38.906316 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 8 00:43:38.906323 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 8 00:43:38.906330 kernel: ACPI: Interpreter enabled
May 8 00:43:38.906337 kernel: ACPI: Using GIC for interrupt routing
May 8 00:43:38.906344 kernel: ACPI: MCFG table detected, 1 entries
May 8 00:43:38.906352 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 8 00:43:38.906359 kernel: printk: console [ttyAMA0] enabled
May 8 00:43:38.906366 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 8 00:43:38.906499 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 8 00:43:38.906574 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 8 00:43:38.906641 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 8 00:43:38.906715 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 8 00:43:38.906781 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 8 00:43:38.906791 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 8 00:43:38.906798 kernel: PCI host bridge to bus 0000:00
May 8 00:43:38.906877 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 8 00:43:38.906938 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 8 00:43:38.906997 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 8 00:43:38.907055 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 8 00:43:38.907132 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 8 00:43:38.907208 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 8 00:43:38.907296 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 8 00:43:38.907366 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 8 00:43:38.907433 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 8 00:43:38.907589 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 8 00:43:38.907658 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 8 00:43:38.907733 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 8 00:43:38.907810 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 8 00:43:38.907874 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 8 00:43:38.907933 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 8 00:43:38.907943 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 8 00:43:38.907950 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 8 00:43:38.907957 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 8 00:43:38.907965 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 8 00:43:38.907972 kernel: iommu: Default domain type: Translated
May 8 00:43:38.907979 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 8 00:43:38.907989 kernel: efivars: Registered efivars operations
May 8 00:43:38.907996 kernel: vgaarb: loaded
May 8 00:43:38.908003 kernel: clocksource: Switched to clocksource arch_sys_counter
May 8 00:43:38.908010 kernel: VFS: Disk quotas dquot_6.6.0
May 8 00:43:38.908018 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 8 00:43:38.908025 kernel: pnp: PnP ACPI init
May 8 00:43:38.908093 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 8 00:43:38.908104 kernel: pnp: PnP ACPI: found 1 devices
May 8 00:43:38.908111 kernel: NET: Registered PF_INET protocol family
May 8 00:43:38.908120 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 8 00:43:38.908128 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 8 00:43:38.908135 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 8 00:43:38.908143 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 8 00:43:38.908150 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 8 00:43:38.908158 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 8 00:43:38.908165 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 8 00:43:38.908172 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 8 00:43:38.908181 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 8 00:43:38.908188 kernel: PCI: CLS 0 bytes, default 64
May 8 00:43:38.908195 kernel: kvm [1]: HYP mode not available
May 8 00:43:38.908202 kernel: Initialise system trusted keyrings
May 8 00:43:38.908209 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 8 00:43:38.908217 kernel: Key type asymmetric registered
May 8 00:43:38.908224 kernel: Asymmetric key parser 'x509' registered
May 8 00:43:38.908231 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 8 00:43:38.908238 kernel: io scheduler mq-deadline registered
May 8 00:43:38.908245 kernel: io scheduler kyber registered
May 8 00:43:38.908254 kernel: io scheduler bfq registered
May 8 00:43:38.908261 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 8 00:43:38.908269 kernel: ACPI: button: Power Button [PWRB]
May 8 00:43:38.908276 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 8 00:43:38.908340 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 8 00:43:38.908350 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 8 00:43:38.908358 kernel: thunder_xcv, ver 1.0
May 8 00:43:38.908365 kernel: thunder_bgx, ver 1.0
May 8 00:43:38.908372 kernel: nicpf, ver 1.0
May 8 00:43:38.908381 kernel: nicvf, ver 1.0
May 8 00:43:38.908469 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 8 00:43:38.908540 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-08T00:43:38 UTC (1746665018)
May 8 00:43:38.908549 kernel: hid: raw HID events driver (C) Jiri Kosina
May 8 00:43:38.908557 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 8 00:43:38.908564 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 8 00:43:38.908571 kernel: watchdog: Hard watchdog permanently disabled
May 8 00:43:38.908579 kernel: NET: Registered PF_INET6 protocol family
May 8 00:43:38.908589 kernel: Segment Routing with IPv6
May 8 00:43:38.908596 kernel: In-situ OAM (IOAM) with IPv6
May 8 00:43:38.908603 kernel: NET: Registered PF_PACKET protocol family
May 8 00:43:38.908610 kernel: Key type dns_resolver registered
May 8 00:43:38.908618 kernel: registered taskstats version 1
May 8 00:43:38.908625 kernel: Loading compiled-in X.509 certificates
May 8 00:43:38.908632 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: e350a514a19a92525be490be8fe368f9972240ea'
May 8 00:43:38.908639 kernel: Key type .fscrypt registered
May 8 00:43:38.908646 kernel: Key type fscrypt-provisioning registered
May 8 00:43:38.908654 kernel: ima: No TPM chip found, activating TPM-bypass!
May 8 00:43:38.908662 kernel: ima: Allocated hash algorithm: sha1
May 8 00:43:38.908669 kernel: ima: No architecture policies found
May 8 00:43:38.908677 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 8 00:43:38.908684 kernel: clk: Disabling unused clocks
May 8 00:43:38.908691 kernel: Freeing unused kernel memory: 39424K
May 8 00:43:38.908704 kernel: Run /init as init process
May 8 00:43:38.908711 kernel: with arguments:
May 8 00:43:38.908718 kernel: /init
May 8 00:43:38.908727 kernel: with environment:
May 8 00:43:38.908734 kernel: HOME=/
May 8 00:43:38.908740 kernel: TERM=linux
May 8 00:43:38.908748 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 8 00:43:38.908875 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 8 00:43:38.908886 systemd[1]: Detected virtualization kvm.
May 8 00:43:38.908894 systemd[1]: Detected architecture arm64.
May 8 00:43:38.908905 systemd[1]: Running in initrd.
May 8 00:43:38.908913 systemd[1]: No hostname configured, using default hostname.
May 8 00:43:38.908921 systemd[1]: Hostname set to <localhost>.
May 8 00:43:38.908929 systemd[1]: Initializing machine ID from VM UUID.
May 8 00:43:38.908937 systemd[1]: Queued start job for default target initrd.target.
May 8 00:43:38.908944 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 00:43:38.908952 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 00:43:38.908961 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 8 00:43:38.908970 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 8 00:43:38.908979 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 8 00:43:38.908987 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 8 00:43:38.908996 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 8 00:43:38.909005 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 8 00:43:38.909012 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 00:43:38.909020 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 8 00:43:38.909029 systemd[1]: Reached target paths.target - Path Units.
May 8 00:43:38.909037 systemd[1]: Reached target slices.target - Slice Units.
May 8 00:43:38.909045 systemd[1]: Reached target swap.target - Swaps.
May 8 00:43:38.909053 systemd[1]: Reached target timers.target - Timer Units.
May 8 00:43:38.909061 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 8 00:43:38.909068 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 8 00:43:38.909076 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 8 00:43:38.909084 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 8 00:43:38.909092 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 8 00:43:38.909101 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 8 00:43:38.909109 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 00:43:38.909117 systemd[1]: Reached target sockets.target - Socket Units.
May 8 00:43:38.909124 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 8 00:43:38.909133 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 8 00:43:38.909141 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 8 00:43:38.909149 systemd[1]: Starting systemd-fsck-usr.service...
May 8 00:43:38.909156 systemd[1]: Starting systemd-journald.service - Journal Service...
May 8 00:43:38.909165 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 8 00:43:38.909173 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:43:38.909181 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 8 00:43:38.909189 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 00:43:38.909197 systemd[1]: Finished systemd-fsck-usr.service.
May 8 00:43:38.909205 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 8 00:43:38.909238 systemd-journald[239]: Collecting audit messages is disabled.
May 8 00:43:38.909257 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:43:38.909266 systemd-journald[239]: Journal started
May 8 00:43:38.909287 systemd-journald[239]: Runtime Journal (/run/log/journal/f9cf6a637d59444a8958bc6c538c82f1) is 5.9M, max 47.3M, 41.4M free.
May 8 00:43:38.899846 systemd-modules-load[240]: Inserted module 'overlay'
May 8 00:43:38.911993 systemd[1]: Started systemd-journald.service - Journal Service.
May 8 00:43:38.912395 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 8 00:43:38.918478 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 8 00:43:38.919631 systemd-modules-load[240]: Inserted module 'br_netfilter'
May 8 00:43:38.920519 kernel: Bridge firewalling registered
May 8 00:43:38.926571 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:43:38.928189 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 8 00:43:38.930510 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 8 00:43:38.933304 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 8 00:43:38.944584 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 8 00:43:38.947472 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 00:43:38.948851 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:43:38.951070 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 00:43:38.953355 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 8 00:43:38.974657 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 8 00:43:38.976987 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 8 00:43:38.983741 dracut-cmdline[277]: dracut-dracut-053
May 8 00:43:38.986136 dracut-cmdline[277]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=ed66668e4cab2597a697b6f83cdcbc6a64a98dbc7e2125304191704297c07daf
May 8 00:43:39.004524 systemd-resolved[279]: Positive Trust Anchors:
May 8 00:43:39.004540 systemd-resolved[279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 8 00:43:39.004571 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 8 00:43:39.009239 systemd-resolved[279]: Defaulting to hostname 'linux'.
May 8 00:43:39.010490 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 8 00:43:39.013714 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 8 00:43:39.054483 kernel: SCSI subsystem initialized
May 8 00:43:39.059465 kernel: Loading iSCSI transport class v2.0-870.
May 8 00:43:39.066478 kernel: iscsi: registered transport (tcp)
May 8 00:43:39.079480 kernel: iscsi: registered transport (qla4xxx)
May 8 00:43:39.079494 kernel: QLogic iSCSI HBA Driver
May 8 00:43:39.119389 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 8 00:43:39.138581 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 8 00:43:39.156533 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 8 00:43:39.156571 kernel: device-mapper: uevent: version 1.0.3
May 8 00:43:39.157563 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 8 00:43:39.206515 kernel: raid6: neonx8 gen() 15739 MB/s
May 8 00:43:39.223489 kernel: raid6: neonx4 gen() 15555 MB/s
May 8 00:43:39.240487 kernel: raid6: neonx2 gen() 13114 MB/s
May 8 00:43:39.257536 kernel: raid6: neonx1 gen() 10439 MB/s
May 8 00:43:39.274487 kernel: raid6: int64x8 gen() 6833 MB/s
May 8 00:43:39.291475 kernel: raid6: int64x4 gen() 7297 MB/s
May 8 00:43:39.308488 kernel: raid6: int64x2 gen() 6052 MB/s
May 8 00:43:39.325583 kernel: raid6: int64x1 gen() 5025 MB/s
May 8 00:43:39.325609 kernel: raid6: using algorithm neonx8 gen() 15739 MB/s
May 8 00:43:39.343572 kernel: raid6: .... xor() 11896 MB/s, rmw enabled
May 8 00:43:39.343585 kernel: raid6: using neon recovery algorithm
May 8 00:43:39.348980 kernel: xor: measuring software checksum speed
May 8 00:43:39.348996 kernel: 8regs : 19807 MB/sec
May 8 00:43:39.349680 kernel: 32regs : 19208 MB/sec
May 8 00:43:39.350949 kernel: arm64_neon : 24976 MB/sec
May 8 00:43:39.350965 kernel: xor: using function: arm64_neon (24976 MB/sec)
May 8 00:43:39.401471 kernel: Btrfs loaded, zoned=no, fsverity=no
May 8 00:43:39.412218 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 8 00:43:39.423606 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 00:43:39.434533 systemd-udevd[460]: Using default interface naming scheme 'v255'.
May 8 00:43:39.437586 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 00:43:39.451731 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 8 00:43:39.463784 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation
May 8 00:43:39.488467 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 8 00:43:39.511595 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 8 00:43:39.549163 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 00:43:39.555608 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 8 00:43:39.567943 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 8 00:43:39.570082 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 8 00:43:39.571674 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 00:43:39.574150 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 8 00:43:39.582649 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 8 00:43:39.591032 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 8 00:43:39.600228 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 8 00:43:39.615729 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 8 00:43:39.615825 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 8 00:43:39.615836 kernel: GPT:9289727 != 19775487
May 8 00:43:39.615845 kernel: GPT:Alternate GPT header not at the end of the disk.
May 8 00:43:39.615859 kernel: GPT:9289727 != 19775487
May 8 00:43:39.615868 kernel: GPT: Use GNU Parted to correct GPT errors.
May 8 00:43:39.615877 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:43:39.605249 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 8 00:43:39.605361 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:43:39.608362 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:43:39.611846 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 00:43:39.611976 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:43:39.614584 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:43:39.621742 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:43:39.633474 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (516)
May 8 00:43:39.635935 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:43:39.640581 kernel: BTRFS: device fsid 0be52225-f929-4b89-9354-df54a643ece0 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (514)
May 8 00:43:39.641481 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 8 00:43:39.652264 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 8 00:43:39.657152 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 8 00:43:39.661310 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 8 00:43:39.662520 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 8 00:43:39.677587 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 8 00:43:39.679311 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:43:39.685096 disk-uuid[550]: Primary Header is updated.
May 8 00:43:39.685096 disk-uuid[550]: Secondary Entries is updated.
May 8 00:43:39.685096 disk-uuid[550]: Secondary Header is updated.
May 8 00:43:39.690471 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:43:39.703545 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:43:40.698471 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:43:40.699969 disk-uuid[551]: The operation has completed successfully.
May 8 00:43:40.727669 systemd[1]: disk-uuid.service: Deactivated successfully.
May 8 00:43:40.727774 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 8 00:43:40.746636 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 8 00:43:40.749700 sh[575]: Success
May 8 00:43:40.764482 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 8 00:43:40.797348 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 8 00:43:40.809724 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 8 00:43:40.811118 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 8 00:43:40.821652 kernel: BTRFS info (device dm-0): first mount of filesystem 0be52225-f929-4b89-9354-df54a643ece0
May 8 00:43:40.821690 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 8 00:43:40.821704 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 8 00:43:40.823477 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 8 00:43:40.823494 kernel: BTRFS info (device dm-0): using free space tree
May 8 00:43:40.827273 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 8 00:43:40.828593 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 8 00:43:40.838625 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 8 00:43:40.840388 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 8 00:43:40.848584 kernel: BTRFS info (device vda6): first mount of filesystem a4a0b304-74d7-4600-bc4f-fa8751ae54a8
May 8 00:43:40.848621 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 8 00:43:40.848638 kernel: BTRFS info (device vda6): using free space tree
May 8 00:43:40.851475 kernel: BTRFS info (device vda6): auto enabling async discard
May 8 00:43:40.861672 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 8 00:43:40.863582 kernel: BTRFS info (device vda6): last unmount of filesystem a4a0b304-74d7-4600-bc4f-fa8751ae54a8
May 8 00:43:40.868255 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 8 00:43:40.875640 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 8 00:43:40.943587 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 8 00:43:40.955637 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 8 00:43:40.982385 systemd-networkd[768]: lo: Link UP
May 8 00:43:40.982395 systemd-networkd[768]: lo: Gained carrier
May 8 00:43:40.983151 systemd-networkd[768]: Enumeration completed
May 8 00:43:40.983261 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 8 00:43:40.985153 systemd[1]: Reached target network.target - Network.
May 8 00:43:40.986141 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 00:43:40.986144 systemd-networkd[768]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 8 00:43:40.986932 systemd-networkd[768]: eth0: Link UP
May 8 00:43:40.993948 ignition[669]: Ignition 2.19.0
May 8 00:43:40.986935 systemd-networkd[768]: eth0: Gained carrier
May 8 00:43:40.993954 ignition[669]: Stage: fetch-offline
May 8 00:43:40.986942 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 00:43:40.993986 ignition[669]: no configs at "/usr/lib/ignition/base.d"
May 8 00:43:40.993994 ignition[669]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:43:40.994204 ignition[669]: parsed url from cmdline: ""
May 8 00:43:40.994207 ignition[669]: no config URL provided
May 8 00:43:40.994212 ignition[669]: reading system config file "/usr/lib/ignition/user.ign"
May 8 00:43:40.994218 ignition[669]: no config at "/usr/lib/ignition/user.ign"
May 8 00:43:40.994242 ignition[669]: op(1): [started] loading QEMU firmware config module
May 8 00:43:40.994247 ignition[669]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 8 00:43:41.007129 ignition[669]: op(1): [finished] loading QEMU firmware config module
May 8 00:43:41.009511 systemd-networkd[768]: eth0: DHCPv4 address 10.0.0.155/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 8 00:43:41.048682 ignition[669]: parsing config with SHA512: ca9078e237d5e2b64506cfcfd8a9220b46d991dbba7befbc824e8dd76fa20a22cb5e0866a8ca1d71619aa6471e705d97fc20ee08c29a6f4332b6ef235e40a103
May 8 00:43:41.053134 unknown[669]: fetched base config from "system"
May 8 00:43:41.053144 unknown[669]: fetched user config from "qemu"
May 8 00:43:41.053659 ignition[669]: fetch-offline: fetch-offline passed
May 8 00:43:41.053739 ignition[669]: Ignition finished successfully
May 8 00:43:41.055970 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 8 00:43:41.057884 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 8 00:43:41.067641 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 8 00:43:41.077717 ignition[774]: Ignition 2.19.0
May 8 00:43:41.077726 ignition[774]: Stage: kargs
May 8 00:43:41.077877 ignition[774]: no configs at "/usr/lib/ignition/base.d"
May 8 00:43:41.077886 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:43:41.078801 ignition[774]: kargs: kargs passed
May 8 00:43:41.078845 ignition[774]: Ignition finished successfully
May 8 00:43:41.082515 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 8 00:43:41.084568 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 8 00:43:41.097611 ignition[782]: Ignition 2.19.0
May 8 00:43:41.097621 ignition[782]: Stage: disks
May 8 00:43:41.097801 ignition[782]: no configs at "/usr/lib/ignition/base.d"
May 8 00:43:41.100672 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 8 00:43:41.097810 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:43:41.098676 ignition[782]: disks: disks passed
May 8 00:43:41.103280 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 8 00:43:41.098729 ignition[782]: Ignition finished successfully
May 8 00:43:41.105266 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 8 00:43:41.107056 systemd[1]: Reached target local-fs.target - Local File Systems.
May 8 00:43:41.109090 systemd[1]: Reached target sysinit.target - System Initialization.
May 8 00:43:41.110816 systemd[1]: Reached target basic.target - Basic System.
May 8 00:43:41.123637 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 8 00:43:41.133224 systemd-fsck[792]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 8 00:43:41.141487 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 8 00:43:41.153606 systemd[1]: Mounting sysroot.mount - /sysroot...
May 8 00:43:41.197472 kernel: EXT4-fs (vda9): mounted filesystem f1546e2a-34df-485a-a644-37e10cd925e0 r/w with ordered data mode. Quota mode: none.
May 8 00:43:41.198200 systemd[1]: Mounted sysroot.mount - /sysroot.
May 8 00:43:41.199530 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 8 00:43:41.220590 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 8 00:43:41.223153 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 8 00:43:41.224245 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 8 00:43:41.224292 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 8 00:43:41.224315 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 8 00:43:41.235031 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (800)
May 8 00:43:41.235055 kernel: BTRFS info (device vda6): first mount of filesystem a4a0b304-74d7-4600-bc4f-fa8751ae54a8
May 8 00:43:41.235066 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 8 00:43:41.235076 kernel: BTRFS info (device vda6): using free space tree
May 8 00:43:41.230746 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 8 00:43:41.232766 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 8 00:43:41.240473 kernel: BTRFS info (device vda6): auto enabling async discard
May 8 00:43:41.241611 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 8 00:43:41.280855 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory
May 8 00:43:41.284017 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory
May 8 00:43:41.287569 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory
May 8 00:43:41.291215 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory
May 8 00:43:41.366748 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 8 00:43:41.375596 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 8 00:43:41.377870 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 8 00:43:41.382463 kernel: BTRFS info (device vda6): last unmount of filesystem a4a0b304-74d7-4600-bc4f-fa8751ae54a8
May 8 00:43:41.397774 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 8 00:43:41.400083 ignition[913]: INFO : Ignition 2.19.0
May 8 00:43:41.400083 ignition[913]: INFO : Stage: mount
May 8 00:43:41.401680 ignition[913]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:43:41.401680 ignition[913]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:43:41.401680 ignition[913]: INFO : mount: mount passed
May 8 00:43:41.401680 ignition[913]: INFO : Ignition finished successfully
May 8 00:43:41.402853 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 8 00:43:41.414556 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 8 00:43:41.820599 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 8 00:43:41.829693 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 8 00:43:41.836411 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (928)
May 8 00:43:41.836444 kernel: BTRFS info (device vda6): first mount of filesystem a4a0b304-74d7-4600-bc4f-fa8751ae54a8
May 8 00:43:41.836463 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 8 00:43:41.838482 kernel: BTRFS info (device vda6): using free space tree
May 8 00:43:41.840564 kernel: BTRFS info (device vda6): auto enabling async discard
May 8 00:43:41.841549 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 8 00:43:41.857698 ignition[945]: INFO : Ignition 2.19.0
May 8 00:43:41.857698 ignition[945]: INFO : Stage: files
May 8 00:43:41.859482 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:43:41.859482 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:43:41.859482 ignition[945]: DEBUG : files: compiled without relabeling support, skipping
May 8 00:43:41.863113 ignition[945]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 8 00:43:41.863113 ignition[945]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 8 00:43:41.863113 ignition[945]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 8 00:43:41.863113 ignition[945]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 8 00:43:41.863113 ignition[945]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 8 00:43:41.863113 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
May 8 00:43:41.863113 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
May 8 00:43:41.863113 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 8 00:43:41.863113 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 8 00:43:41.861763 unknown[945]: wrote ssh authorized keys file for user: core
May 8 00:43:42.038585 systemd-networkd[768]: eth0: Gained IPv6LL
May 8 00:43:42.090142 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 8 00:43:42.323293 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 8 00:43:42.323293 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 8 00:43:42.328196 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 8 00:43:42.328196 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 8 00:43:42.328196 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 8 00:43:42.328196 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 00:43:42.328196 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 00:43:42.328196 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 8 00:43:42.328196 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 8 00:43:42.328196 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 8 00:43:42.328196 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 8 00:43:42.328196 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 8 00:43:42.328196 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 8 00:43:42.328196 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 8 00:43:42.328196 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
May 8 00:43:42.655769 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 8 00:43:43.049240 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 8 00:43:43.049240 ignition[945]: INFO : files: op(c): [started] processing unit "containerd.service"
May 8 00:43:43.052824 ignition[945]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 8 00:43:43.052824 ignition[945]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 8 00:43:43.052824 ignition[945]: INFO : files: op(c): [finished] processing unit "containerd.service"
May 8 00:43:43.052824 ignition[945]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
May 8 00:43:43.052824 ignition[945]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 00:43:43.052824 ignition[945]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 00:43:43.052824 ignition[945]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
May 8 00:43:43.052824 ignition[945]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
May 8 00:43:43.052824 ignition[945]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 8 00:43:43.052824 ignition[945]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 8 00:43:43.052824 ignition[945]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
May 8 00:43:43.052824 ignition[945]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
May 8 00:43:43.073551 ignition[945]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 8 00:43:43.075118 ignition[945]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 8 00:43:43.077725 ignition[945]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
May 8 00:43:43.077725 ignition[945]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
May 8 00:43:43.077725 ignition[945]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
May 8 00:43:43.077725 ignition[945]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
May 8 00:43:43.077725 ignition[945]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 8 00:43:43.077725 ignition[945]: INFO : files: files passed
May 8 00:43:43.077725 ignition[945]: INFO : Ignition finished successfully
May 8 00:43:43.078162 systemd[1]: Finished ignition-files.service - Ignition (files).
May 8 00:43:43.087621 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 8 00:43:43.090430 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 8 00:43:43.092664 systemd[1]: ignition-quench.service: Deactivated successfully.
May 8 00:43:43.092785 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 8 00:43:43.098545 initrd-setup-root-after-ignition[973]: grep: /sysroot/oem/oem-release: No such file or directory
May 8 00:43:43.101509 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 8 00:43:43.101509 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 8 00:43:43.105080 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 8 00:43:43.104609 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 8 00:43:43.106409 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 8 00:43:43.115711 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 8 00:43:43.133418 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 8 00:43:43.133542 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 8 00:43:43.135794 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 8 00:43:43.137710 systemd[1]: Reached target initrd.target - Initrd Default Target. May 8 00:43:43.139589 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 8 00:43:43.140282 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 8 00:43:43.155562 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 8 00:43:43.162644 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 8 00:43:43.171608 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 8 00:43:43.172875 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 00:43:43.174997 systemd[1]: Stopped target timers.target - Timer Units. May 8 00:43:43.176806 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 8 00:43:43.176919 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 8 00:43:43.179508 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 8 00:43:43.181556 systemd[1]: Stopped target basic.target - Basic System. May 8 00:43:43.183314 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 8 00:43:43.185127 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 8 00:43:43.187148 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 8 00:43:43.189206 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 8 00:43:43.191136 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 8 00:43:43.193230 systemd[1]: Stopped target sysinit.target - System Initialization. May 8 00:43:43.195258 systemd[1]: Stopped target local-fs.target - Local File Systems. May 8 00:43:43.197062 systemd[1]: Stopped target swap.target - Swaps. May 8 00:43:43.198611 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 8 00:43:43.198736 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 8 00:43:43.201119 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 8 00:43:43.203123 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:43:43.205096 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 8 00:43:43.208523 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 00:43:43.209844 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 8 00:43:43.209956 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 8 00:43:43.212861 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 8 00:43:43.212973 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 8 00:43:43.215008 systemd[1]: Stopped target paths.target - Path Units. May 8 00:43:43.216593 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 8 00:43:43.217599 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 00:43:43.219601 systemd[1]: Stopped target slices.target - Slice Units. May 8 00:43:43.221525 systemd[1]: Stopped target sockets.target - Socket Units. May 8 00:43:43.223629 systemd[1]: iscsid.socket: Deactivated successfully. May 8 00:43:43.223726 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
May 8 00:43:43.225271 systemd[1]: iscsiuio.socket: Deactivated successfully. May 8 00:43:43.225354 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 8 00:43:43.227008 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 8 00:43:43.227119 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 8 00:43:43.228822 systemd[1]: ignition-files.service: Deactivated successfully. May 8 00:43:43.228927 systemd[1]: Stopped ignition-files.service - Ignition (files). May 8 00:43:43.239612 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 8 00:43:43.241162 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 8 00:43:43.242171 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 8 00:43:43.242299 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:43:43.244288 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 8 00:43:43.244387 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 8 00:43:43.250329 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 8 00:43:43.250491 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 8 00:43:43.254729 ignition[999]: INFO : Ignition 2.19.0 May 8 00:43:43.254729 ignition[999]: INFO : Stage: umount May 8 00:43:43.254729 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:43:43.254729 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:43:43.261173 ignition[999]: INFO : umount: umount passed May 8 00:43:43.261173 ignition[999]: INFO : Ignition finished successfully May 8 00:43:43.255468 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 8 00:43:43.256901 systemd[1]: ignition-mount.service: Deactivated successfully. May 8 00:43:43.257011 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 8 00:43:43.258268 systemd[1]: Stopped target network.target - Network. May 8 00:43:43.260187 systemd[1]: ignition-disks.service: Deactivated successfully. May 8 00:43:43.260249 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 8 00:43:43.262106 systemd[1]: ignition-kargs.service: Deactivated successfully. May 8 00:43:43.262153 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 8 00:43:43.263745 systemd[1]: ignition-setup.service: Deactivated successfully. May 8 00:43:43.263793 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 8 00:43:43.265370 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 8 00:43:43.265414 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 8 00:43:43.267835 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 8 00:43:43.269344 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 8 00:43:43.274320 systemd[1]: systemd-resolved.service: Deactivated successfully. May 8 00:43:43.274435 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 8 00:43:43.278541 systemd-networkd[768]: eth0: DHCPv6 lease lost May 8 00:43:43.278850 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 8 00:43:43.278897 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 00:43:43.280332 systemd[1]: systemd-networkd.service: Deactivated successfully. 
May 8 00:43:43.280430 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 8 00:43:43.282441 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 8 00:43:43.282532 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 8 00:43:43.293545 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 8 00:43:43.294979 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 8 00:43:43.295041 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 8 00:43:43.296931 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 8 00:43:43.296978 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 8 00:43:43.298996 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 8 00:43:43.299043 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 8 00:43:43.301039 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 00:43:43.311058 systemd[1]: network-cleanup.service: Deactivated successfully. May 8 00:43:43.311186 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 8 00:43:43.313953 systemd[1]: sysroot-boot.service: Deactivated successfully. May 8 00:43:43.314055 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 8 00:43:43.315836 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 8 00:43:43.315923 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 8 00:43:43.318140 systemd[1]: systemd-udevd.service: Deactivated successfully. May 8 00:43:43.318267 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 00:43:43.319879 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 8 00:43:43.319919 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 8 00:43:43.321415 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 8 00:43:43.321446 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 8 00:43:43.323253 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 8 00:43:43.323302 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 8 00:43:43.326095 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 8 00:43:43.326142 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 8 00:43:43.328812 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 8 00:43:43.328862 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:43:43.343635 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 8 00:43:43.344732 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 8 00:43:43.344795 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 00:43:43.346978 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 8 00:43:43.347026 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 8 00:43:43.349008 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 8 00:43:43.349055 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 8 00:43:43.351205 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
May 8 00:43:43.351254 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:43:43.353542 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 8 00:43:43.353637 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 8 00:43:43.356200 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 8 00:43:43.358058 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 8 00:43:43.369958 systemd[1]: Switching root. May 8 00:43:43.403365 systemd-journald[239]: Journal stopped May 8 00:43:44.167708 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). May 8 00:43:44.167767 kernel: SELinux: policy capability network_peer_controls=1 May 8 00:43:44.167780 kernel: SELinux: policy capability open_perms=1 May 8 00:43:44.167790 kernel: SELinux: policy capability extended_socket_class=1 May 8 00:43:44.167799 kernel: SELinux: policy capability always_check_network=0 May 8 00:43:44.167809 kernel: SELinux: policy capability cgroup_seclabel=1 May 8 00:43:44.167818 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 8 00:43:44.167829 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 8 00:43:44.167841 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 8 00:43:44.167851 kernel: audit: type=1403 audit(1746665023.586:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 8 00:43:44.167862 systemd[1]: Successfully loaded SELinux policy in 32.012ms. May 8 00:43:44.167882 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.189ms. May 8 00:43:44.167894 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 8 00:43:44.167905 systemd[1]: Detected virtualization kvm. May 8 00:43:44.167916 systemd[1]: Detected architecture arm64. May 8 00:43:44.167926 systemd[1]: Detected first boot. May 8 00:43:44.167936 systemd[1]: Initializing machine ID from VM UUID. May 8 00:43:44.167948 zram_generator::config[1066]: No configuration found. May 8 00:43:44.167960 systemd[1]: Populated /etc with preset unit settings. May 8 00:43:44.167970 systemd[1]: Queued start job for default target multi-user.target. May 8 00:43:44.167981 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 8 00:43:44.167992 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 8 00:43:44.168005 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 8 00:43:44.168015 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 8 00:43:44.168025 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 8 00:43:44.168037 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 8 00:43:44.168048 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 8 00:43:44.168058 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 8 00:43:44.168069 systemd[1]: Created slice user.slice - User and Session Slice. May 8 00:43:44.168079 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
May 8 00:43:44.168090 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 00:43:44.168101 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 8 00:43:44.168111 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 8 00:43:44.168122 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 8 00:43:44.168133 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 8 00:43:44.168144 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 8 00:43:44.168155 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:43:44.168166 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 8 00:43:44.168176 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 00:43:44.168187 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 8 00:43:44.168197 systemd[1]: Reached target slices.target - Slice Units. May 8 00:43:44.168212 systemd[1]: Reached target swap.target - Swaps. May 8 00:43:44.168224 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 8 00:43:44.168236 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 8 00:43:44.168246 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 8 00:43:44.168257 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 8 00:43:44.168267 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 8 00:43:44.168278 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 8 00:43:44.168289 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 8 00:43:44.168299 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 8 00:43:44.168310 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 8 00:43:44.168321 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 8 00:43:44.168332 systemd[1]: Mounting media.mount - External Media Directory... May 8 00:43:44.168342 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 8 00:43:44.168352 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 8 00:43:44.168362 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 8 00:43:44.168373 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 8 00:43:44.168384 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 00:43:44.168394 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 8 00:43:44.168405 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 8 00:43:44.168416 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:43:44.168427 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 8 00:43:44.168437 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:43:44.168447 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
May 8 00:43:44.168469 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:43:44.168492 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 8 00:43:44.168504 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. May 8 00:43:44.168515 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) May 8 00:43:44.168528 systemd[1]: Starting systemd-journald.service - Journal Service... May 8 00:43:44.168538 kernel: loop: module loaded May 8 00:43:44.168548 kernel: fuse: init (API version 7.39) May 8 00:43:44.168558 kernel: ACPI: bus type drm_connector registered May 8 00:43:44.168567 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 8 00:43:44.168578 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 8 00:43:44.168588 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 8 00:43:44.168598 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 8 00:43:44.168608 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 8 00:43:44.168620 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 8 00:43:44.168648 systemd-journald[1144]: Collecting audit messages is disabled. May 8 00:43:44.168679 systemd[1]: Mounted media.mount - External Media Directory. May 8 00:43:44.168691 systemd-journald[1144]: Journal started May 8 00:43:44.168712 systemd-journald[1144]: Runtime Journal (/run/log/journal/f9cf6a637d59444a8958bc6c538c82f1) is 5.9M, max 47.3M, 41.4M free. May 8 00:43:44.172495 systemd[1]: Started systemd-journald.service - Journal Service. May 8 00:43:44.172966 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 8 00:43:44.174165 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 8 00:43:44.175386 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 8 00:43:44.176684 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 8 00:43:44.178383 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 8 00:43:44.179855 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 8 00:43:44.180012 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 8 00:43:44.181386 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:43:44.181556 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:43:44.182883 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 00:43:44.183037 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 8 00:43:44.184321 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:43:44.184492 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:43:44.185943 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 8 00:43:44.186092 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 8 00:43:44.187384 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:43:44.187593 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
May 8 00:43:44.189117 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 8 00:43:44.190562 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 8 00:43:44.192393 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 8 00:43:44.203729 systemd[1]: Reached target network-pre.target - Preparation for Network. May 8 00:43:44.213556 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 8 00:43:44.215628 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 8 00:43:44.216747 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 8 00:43:44.220641 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 8 00:43:44.222715 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 8 00:43:44.223917 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:43:44.225031 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 8 00:43:44.226180 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 8 00:43:44.229613 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 8 00:43:44.230919 systemd-journald[1144]: Time spent on flushing to /var/log/journal/f9cf6a637d59444a8958bc6c538c82f1 is 18.205ms for 844 entries. May 8 00:43:44.230919 systemd-journald[1144]: System Journal (/var/log/journal/f9cf6a637d59444a8958bc6c538c82f1) is 8.0M, max 195.6M, 187.6M free. May 8 00:43:44.257107 systemd-journald[1144]: Received client request to flush runtime journal. May 8 00:43:44.232639 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 8 00:43:44.235239 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:43:44.236882 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 8 00:43:44.238256 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 8 00:43:44.242620 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 8 00:43:44.245331 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 8 00:43:44.250207 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 8 00:43:44.257389 udevadm[1202]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 8 00:43:44.259699 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 8 00:43:44.261489 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. May 8 00:43:44.261502 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. May 8 00:43:44.265640 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 8 00:43:44.277727 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 8 00:43:44.279545 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 8 00:43:44.300473 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
May 8 00:43:44.307744 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 8 00:43:44.320205 systemd-tmpfiles[1218]: ACLs are not supported, ignoring. May 8 00:43:44.320226 systemd-tmpfiles[1218]: ACLs are not supported, ignoring. May 8 00:43:44.324028 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 00:43:44.665479 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 8 00:43:44.683692 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 00:43:44.703057 systemd-udevd[1224]: Using default interface naming scheme 'v255'. May 8 00:43:44.720956 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 00:43:44.734799 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 8 00:43:44.759913 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 8 00:43:44.766899 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. May 8 00:43:44.768484 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1237) May 8 00:43:44.817614 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 8 00:43:44.820338 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 8 00:43:44.861705 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:43:44.867827 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 8 00:43:44.870510 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 8 00:43:44.874435 systemd-networkd[1231]: lo: Link UP May 8 00:43:44.874438 systemd-networkd[1231]: lo: Gained carrier May 8 00:43:44.875213 systemd-networkd[1231]: Enumeration completed May 8 00:43:44.875432 systemd[1]: Started systemd-networkd.service - Network Configuration. May 8 00:43:44.875674 systemd-networkd[1231]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 00:43:44.875679 systemd-networkd[1231]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 8 00:43:44.876311 systemd-networkd[1231]: eth0: Link UP May 8 00:43:44.876320 systemd-networkd[1231]: eth0: Gained carrier May 8 00:43:44.876332 systemd-networkd[1231]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 00:43:44.878973 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 8 00:43:44.887751 lvm[1261]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 00:43:44.900518 systemd-networkd[1231]: eth0: DHCPv4 address 10.0.0.155/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 8 00:43:44.911376 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:43:44.914873 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 8 00:43:44.916466 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 8 00:43:44.926691 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 8 00:43:44.931344 lvm[1271]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
May 8 00:43:44.957863 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 8 00:43:44.959402 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 8 00:43:44.960709 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 8 00:43:44.960741 systemd[1]: Reached target local-fs.target - Local File Systems. May 8 00:43:44.961741 systemd[1]: Reached target machines.target - Containers. May 8 00:43:44.963645 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 8 00:43:44.978600 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 8 00:43:44.980918 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 8 00:43:44.982049 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:43:44.982965 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 8 00:43:44.985181 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 8 00:43:44.989616 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 8 00:43:44.991578 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 8 00:43:44.997849 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 8 00:43:45.000484 kernel: loop0: detected capacity change from 0 to 194096 May 8 00:43:45.006599 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 8 00:43:45.007290 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 8 00:43:45.013716 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 8 00:43:45.052480 kernel: loop1: detected capacity change from 0 to 114328 May 8 00:43:45.088502 kernel: loop2: detected capacity change from 0 to 114432 May 8 00:43:45.133550 kernel: loop3: detected capacity change from 0 to 194096 May 8 00:43:45.144510 kernel: loop4: detected capacity change from 0 to 114328 May 8 00:43:45.150508 kernel: loop5: detected capacity change from 0 to 114432 May 8 00:43:45.157749 (sd-merge)[1292]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 8 00:43:45.158150 (sd-merge)[1292]: Merged extensions into '/usr'. May 8 00:43:45.161465 systemd[1]: Reloading requested from client PID 1279 ('systemd-sysext') (unit systemd-sysext.service)... May 8 00:43:45.161481 systemd[1]: Reloading... May 8 00:43:45.220593 zram_generator::config[1323]: No configuration found. May 8 00:43:45.289604 ldconfig[1276]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 8 00:43:45.314960 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:43:45.358926 systemd[1]: Reloading finished in 197 ms. May 8 00:43:45.375190 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 8 00:43:45.376791 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
May 8 00:43:45.391603 systemd[1]: Starting ensure-sysext.service... May 8 00:43:45.393440 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 8 00:43:45.398446 systemd[1]: Reloading requested from client PID 1361 ('systemctl') (unit ensure-sysext.service)... May 8 00:43:45.398475 systemd[1]: Reloading... May 8 00:43:45.409399 systemd-tmpfiles[1362]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 8 00:43:45.409806 systemd-tmpfiles[1362]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 8 00:43:45.410411 systemd-tmpfiles[1362]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 8 00:43:45.410661 systemd-tmpfiles[1362]: ACLs are not supported, ignoring. May 8 00:43:45.410721 systemd-tmpfiles[1362]: ACLs are not supported, ignoring. May 8 00:43:45.413632 systemd-tmpfiles[1362]: Detected autofs mount point /boot during canonicalization of boot. May 8 00:43:45.413645 systemd-tmpfiles[1362]: Skipping /boot May 8 00:43:45.420251 systemd-tmpfiles[1362]: Detected autofs mount point /boot during canonicalization of boot. May 8 00:43:45.420266 systemd-tmpfiles[1362]: Skipping /boot May 8 00:43:45.438472 zram_generator::config[1391]: No configuration found. May 8 00:43:45.528418 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:43:45.573230 systemd[1]: Reloading finished in 174 ms. May 8 00:43:45.588435 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 00:43:45.616037 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 8 00:43:45.618759 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 8 00:43:45.621324 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 8 00:43:45.624150 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 8 00:43:45.628303 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 8 00:43:45.632114 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 00:43:45.634822 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:43:45.636980 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:43:45.641277 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:43:45.643282 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:43:45.645689 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:43:45.646513 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:43:45.649014 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:43:45.649154 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:43:45.651047 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:43:45.651724 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:43:45.655783 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
May 8 00:43:45.662036 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 00:43:45.663363 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:43:45.667745 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:43:45.671567 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:43:45.673253 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:43:45.675735 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 8 00:43:45.678256 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 8 00:43:45.682994 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:43:45.683143 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:43:45.687636 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:43:45.687795 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:43:45.688846 augenrules[1471]: No rules May 8 00:43:45.689417 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:43:45.692625 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:43:45.694274 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 8 00:43:45.696076 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 8 00:43:45.697759 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 8 00:43:45.706343 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 00:43:45.714814 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:43:45.716856 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 8 00:43:45.721602 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:43:45.724800 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:43:45.726002 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:43:45.726263 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 8 00:43:45.727199 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:43:45.727423 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 8 00:43:45.728038 systemd-resolved[1438]: Positive Trust Anchors: May 8 00:43:45.728056 systemd-resolved[1438]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 8 00:43:45.728089 systemd-resolved[1438]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 8 00:43:45.729492 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 00:43:45.729629 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 8 00:43:45.731429 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:43:45.731674 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:43:45.733329 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:43:45.733750 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:43:45.734005 systemd-resolved[1438]: Defaulting to hostname 'linux'. May 8 00:43:45.738354 systemd[1]: Finished ensure-sysext.service. May 8 00:43:45.741639 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:43:45.741729 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 8 00:43:45.753622 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 8 00:43:45.754812 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 8 00:43:45.756178 systemd[1]: Reached target network.target - Network. May 8 00:43:45.757091 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 8 00:43:45.795235 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 8 00:43:45.796024 systemd-timesyncd[1505]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 8 00:43:45.796074 systemd-timesyncd[1505]: Initial clock synchronization to Thu 2025-05-08 00:43:45.763828 UTC. May 8 00:43:45.796847 systemd[1]: Reached target sysinit.target - System Initialization. May 8 00:43:45.798003 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 8 00:43:45.799233 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 8 00:43:45.800507 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 8 00:43:45.801743 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 8 00:43:45.801792 systemd[1]: Reached target paths.target - Path Units. May 8 00:43:45.802808 systemd[1]: Reached target time-set.target - System Time Set. May 8 00:43:45.803991 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 8 00:43:45.805107 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 8 00:43:45.806305 systemd[1]: Reached target timers.target - Timer Units. May 8 00:43:45.807974 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 8 00:43:45.810447 systemd[1]: Starting docker.socket - Docker Socket for the API... May 8 00:43:45.812808 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 8 00:43:45.817443 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 8 00:43:45.818498 systemd[1]: Reached target sockets.target - Socket Units. May 8 00:43:45.819412 systemd[1]: Reached target basic.target - Basic System. May 8 00:43:45.820520 systemd[1]: System is tainted: cgroupsv1 May 8 00:43:45.820564 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 8 00:43:45.820582 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 8 00:43:45.822181 systemd[1]: Starting containerd.service - containerd container runtime... May 8 00:43:45.824213 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 8 00:43:45.826113 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 8 00:43:45.830596 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 8 00:43:45.831569 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 8 00:43:45.832514 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 8 00:43:45.837363 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 8 00:43:45.839190 jq[1511]: false May 8 00:43:45.841717 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 8 00:43:45.847396 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 8 00:43:45.852139 systemd[1]: Starting systemd-logind.service - User Login Management... May 8 00:43:45.859849 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 8 00:43:45.860416 extend-filesystems[1513]: Found loop3 May 8 00:43:45.862015 extend-filesystems[1513]: Found loop4 May 8 00:43:45.862015 extend-filesystems[1513]: Found loop5 May 8 00:43:45.862015 extend-filesystems[1513]: Found vda May 8 00:43:45.862015 extend-filesystems[1513]: Found vda1 May 8 00:43:45.862015 extend-filesystems[1513]: Found vda2 May 8 00:43:45.862015 extend-filesystems[1513]: Found vda3 May 8 00:43:45.862015 extend-filesystems[1513]: Found usr May 8 00:43:45.862015 extend-filesystems[1513]: Found vda4 May 8 00:43:45.862015 extend-filesystems[1513]: Found vda6 May 8 00:43:45.862015 extend-filesystems[1513]: Found vda7 May 8 00:43:45.862015 extend-filesystems[1513]: Found vda9 May 8 00:43:45.862015 extend-filesystems[1513]: Checking size of /dev/vda9 May 8 00:43:45.862016 dbus-daemon[1510]: [system] SELinux support is enabled May 8 00:43:45.862733 systemd[1]: Starting update-engine.service - Update Engine... May 8 00:43:45.871589 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 8 00:43:45.874585 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 8 00:43:45.883099 jq[1533]: true May 8 00:43:45.883392 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 8 00:43:45.883649 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 8 00:43:45.883955 systemd[1]: motdgen.service: Deactivated successfully. 
May 8 00:43:45.884202 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 8 00:43:45.884562 extend-filesystems[1513]: Resized partition /dev/vda9 May 8 00:43:45.892671 extend-filesystems[1540]: resize2fs 1.47.1 (20-May-2024) May 8 00:43:45.910701 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 8 00:43:45.910731 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1228) May 8 00:43:45.897836 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 8 00:43:45.898058 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 8 00:43:45.934030 tar[1541]: linux-arm64/helm May 8 00:43:45.934303 jq[1543]: true May 8 00:43:45.943476 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 8 00:43:45.945192 update_engine[1528]: I20250508 00:43:45.944593 1528 main.cc:92] Flatcar Update Engine starting May 8 00:43:45.955148 update_engine[1528]: I20250508 00:43:45.946427 1528 update_check_scheduler.cc:74] Next update check in 7m50s May 8 00:43:45.951968 (ntainerd)[1546]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 8 00:43:45.957296 systemd[1]: Started update-engine.service - Update Engine. May 8 00:43:45.958416 systemd-logind[1522]: Watching system buttons on /dev/input/event0 (Power Button) May 8 00:43:45.958690 extend-filesystems[1540]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 8 00:43:45.958690 extend-filesystems[1540]: old_desc_blocks = 1, new_desc_blocks = 1 May 8 00:43:45.958690 extend-filesystems[1540]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 8 00:43:45.967254 extend-filesystems[1513]: Resized filesystem in /dev/vda9 May 8 00:43:45.958996 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 8 00:43:45.959019 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 8 00:43:45.963216 systemd-logind[1522]: New seat seat0. May 8 00:43:45.963654 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 8 00:43:45.963679 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 8 00:43:45.966674 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 8 00:43:45.972874 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 8 00:43:45.975078 systemd[1]: Started systemd-logind.service - User Login Management. May 8 00:43:45.976646 systemd[1]: extend-filesystems.service: Deactivated successfully. May 8 00:43:45.977298 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 8 00:43:46.012991 bash[1574]: Updated "/home/core/.ssh/authorized_keys" May 8 00:43:46.014585 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 8 00:43:46.016417 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
May 8 00:43:46.034982 locksmithd[1566]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 8 00:43:46.148207 containerd[1546]: time="2025-05-08T00:43:46.148022932Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 8 00:43:46.180782 containerd[1546]: time="2025-05-08T00:43:46.180555781Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 8 00:43:46.182043 containerd[1546]: time="2025-05-08T00:43:46.182009497Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 8 00:43:46.182121 containerd[1546]: time="2025-05-08T00:43:46.182107500Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 8 00:43:46.182174 containerd[1546]: time="2025-05-08T00:43:46.182162150Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 8 00:43:46.182420 containerd[1546]: time="2025-05-08T00:43:46.182396159Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 8 00:43:46.182517 containerd[1546]: time="2025-05-08T00:43:46.182502306Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 8 00:43:46.182642 containerd[1546]: time="2025-05-08T00:43:46.182621586Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:43:46.182710 containerd[1546]: time="2025-05-08T00:43:46.182696715Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 8 00:43:46.183707 containerd[1546]: time="2025-05-08T00:43:46.182967131Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:43:46.183707 containerd[1546]: time="2025-05-08T00:43:46.182989047Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 8 00:43:46.183707 containerd[1546]: time="2025-05-08T00:43:46.183002061Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:43:46.183707 containerd[1546]: time="2025-05-08T00:43:46.183011482Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 8 00:43:46.183707 containerd[1546]: time="2025-05-08T00:43:46.183084575Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 8 00:43:46.183707 containerd[1546]: time="2025-05-08T00:43:46.183261060Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 8 00:43:46.183707 containerd[1546]: time="2025-05-08T00:43:46.183410280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:43:46.183707 containerd[1546]: time="2025-05-08T00:43:46.183425489Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 8 00:43:46.183707 containerd[1546]: time="2025-05-08T00:43:46.183531755Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 8 00:43:46.183707 containerd[1546]: time="2025-05-08T00:43:46.183576945Z" level=info msg="metadata content store policy set" policy=shared May 8 00:43:46.190746 containerd[1546]: time="2025-05-08T00:43:46.190721016Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 8 00:43:46.190855 containerd[1546]: time="2025-05-08T00:43:46.190840576Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 8 00:43:46.190913 containerd[1546]: time="2025-05-08T00:43:46.190900855Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 8 00:43:46.190966 containerd[1546]: time="2025-05-08T00:43:46.190954107Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 8 00:43:46.191015 containerd[1546]: time="2025-05-08T00:43:46.191004526Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 8 00:43:46.191217 containerd[1546]: time="2025-05-08T00:43:46.191197338Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 8 00:43:46.191689 containerd[1546]: time="2025-05-08T00:43:46.191662363Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 8 00:43:46.191819 containerd[1546]: time="2025-05-08T00:43:46.191801643Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 8 00:43:46.191840 containerd[1546]: time="2025-05-08T00:43:46.191823040Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 8 00:43:46.191858 containerd[1546]: time="2025-05-08T00:43:46.191836932Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 8 00:43:46.191858 containerd[1546]: time="2025-05-08T00:43:46.191852261Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 8 00:43:46.191911 containerd[1546]: time="2025-05-08T00:43:46.191865993Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 8 00:43:46.191911 containerd[1546]: time="2025-05-08T00:43:46.191878927Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 8 00:43:46.191911 containerd[1546]: time="2025-05-08T00:43:46.191892021Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 8 00:43:46.191963 containerd[1546]: time="2025-05-08T00:43:46.191907470Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 May 8 00:43:46.191963 containerd[1546]: time="2025-05-08T00:43:46.191925354Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 8 00:43:46.191963 containerd[1546]: time="2025-05-08T00:43:46.191937490Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 8 00:43:46.191963 containerd[1546]: time="2025-05-08T00:43:46.191949386Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 8 00:43:46.192029 containerd[1546]: time="2025-05-08T00:43:46.191975373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 8 00:43:46.192029 containerd[1546]: time="2025-05-08T00:43:46.191996890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 8 00:43:46.192029 containerd[1546]: time="2025-05-08T00:43:46.192009864Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 8 00:43:46.192029 containerd[1546]: time="2025-05-08T00:43:46.192021521Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 8 00:43:46.192098 containerd[1546]: time="2025-05-08T00:43:46.192033576Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 8 00:43:46.192098 containerd[1546]: time="2025-05-08T00:43:46.192047309Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 8 00:43:46.192098 containerd[1546]: time="2025-05-08T00:43:46.192059444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 8 00:43:46.192098 containerd[1546]: time="2025-05-08T00:43:46.192071101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 8 00:43:46.192098 containerd[1546]: time="2025-05-08T00:43:46.192083596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 8 00:43:46.192098 containerd[1546]: time="2025-05-08T00:43:46.192097408Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 8 00:43:46.192196 containerd[1546]: time="2025-05-08T00:43:46.192110222Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 8 00:43:46.192196 containerd[1546]: time="2025-05-08T00:43:46.192122158Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 8 00:43:46.192196 containerd[1546]: time="2025-05-08T00:43:46.192133376Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 8 00:43:46.192196 containerd[1546]: time="2025-05-08T00:43:46.192152098Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 8 00:43:46.192196 containerd[1546]: time="2025-05-08T00:43:46.192172297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 8 00:43:46.192196 containerd[1546]: time="2025-05-08T00:43:46.192183794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 May 8 00:43:46.192196 containerd[1546]: time="2025-05-08T00:43:46.192193934Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 8 00:43:46.192332 containerd[1546]: time="2025-05-08T00:43:46.192320080Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 8 00:43:46.192352 containerd[1546]: time="2025-05-08T00:43:46.192338004Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 8 00:43:46.192352 containerd[1546]: time="2025-05-08T00:43:46.192348862Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 8 00:43:46.192388 containerd[1546]: time="2025-05-08T00:43:46.192363473Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 8 00:43:46.192388 containerd[1546]: time="2025-05-08T00:43:46.192373652Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 8 00:43:46.192388 containerd[1546]: time="2025-05-08T00:43:46.192385668Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 8 00:43:46.192441 containerd[1546]: time="2025-05-08T00:43:46.192395409Z" level=info msg="NRI interface is disabled by configuration." May 8 00:43:46.192441 containerd[1546]: time="2025-05-08T00:43:46.192407265Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 8 00:43:46.192798 containerd[1546]: time="2025-05-08T00:43:46.192739916Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: 
TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 8 00:43:46.192901 containerd[1546]: time="2025-05-08T00:43:46.192802550Z" level=info msg="Connect containerd service" May 8 00:43:46.192901 containerd[1546]: time="2025-05-08T00:43:46.192835164Z" level=info msg="using legacy CRI server" May 8 00:43:46.192901 containerd[1546]: time="2025-05-08T00:43:46.192841671Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 8 00:43:46.192965 containerd[1546]: time="2025-05-08T00:43:46.192921950Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 8 00:43:46.193747 containerd[1546]: time="2025-05-08T00:43:46.193721382Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 00:43:46.193987 containerd[1546]: time="2025-05-08T00:43:46.193932118Z" level=info msg="Start subscribing containerd event" May 8 00:43:46.194102 containerd[1546]: time="2025-05-08T00:43:46.194041538Z" level=info msg="Start recovering state" May 8 00:43:46.194187 containerd[1546]: time="2025-05-08T00:43:46.194172714Z" level=info msg="Start event monitor" May 8 00:43:46.194344 containerd[1546]: time="2025-05-08T00:43:46.194229879Z" level=info msg="Start snapshots syncer" May 8 00:43:46.194344 containerd[1546]: time="2025-05-08T00:43:46.194245647Z" level=info msg="Start cni network conf syncer for default" May 8 00:43:46.194344 containerd[1546]: time="2025-05-08T00:43:46.194255388Z" level=info msg="Start streaming server" May 8 00:43:46.194422 containerd[1546]: time="2025-05-08T00:43:46.194358181Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 8 00:43:46.194542 containerd[1546]: time="2025-05-08T00:43:46.194461134Z" level=info msg=serving... address=/run/containerd/containerd.sock May 8 00:43:46.194742 containerd[1546]: time="2025-05-08T00:43:46.194648637Z" level=info msg="containerd successfully booted in 0.049037s" May 8 00:43:46.194757 systemd[1]: Started containerd.service - containerd container runtime. May 8 00:43:46.305731 tar[1541]: linux-arm64/LICENSE May 8 00:43:46.305828 tar[1541]: linux-arm64/README.md May 8 00:43:46.317620 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 8 00:43:46.452484 sshd_keygen[1535]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 8 00:43:46.471478 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 8 00:43:46.481689 systemd[1]: Starting issuegen.service - Generate /run/issue... May 8 00:43:46.488298 systemd[1]: issuegen.service: Deactivated successfully. 
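[editor's note] The containerd startup above ends with the daemon reporting "successfully booted" and serving on /run/containerd/containerd.sock. As a quick liveness check against a daemon in that state, a minimal Go sketch using the official client library (github.com/containerd/containerd) might look like the following; the program and its output are illustrative, not part of this boot.

```go
// Minimal sketch (not from this log): query the containerd daemon that the
// "msg=serving..." lines above show listening on /run/containerd/containerd.sock.
package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
)

func main() {
	// Socket path taken from the serving address in the log above.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer client.Close()

	// Version() is a cheap liveness/identity check against the daemon.
	v, err := client.Version(context.Background())
	if err != nil {
		log.Fatalf("version: %v", err)
	}
	fmt.Printf("containerd %s (revision %s)\n", v.Version, v.Revision)
}
```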
May 8 00:43:46.488534 systemd[1]: Finished issuegen.service - Generate /run/issue. May 8 00:43:46.491189 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 8 00:43:46.503895 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 8 00:43:46.517781 systemd[1]: Started getty@tty1.service - Getty on tty1. May 8 00:43:46.520002 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 8 00:43:46.521382 systemd[1]: Reached target getty.target - Login Prompts. May 8 00:43:46.774655 systemd-networkd[1231]: eth0: Gained IPv6LL May 8 00:43:46.776922 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 8 00:43:46.778845 systemd[1]: Reached target network-online.target - Network is Online. May 8 00:43:46.786647 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 8 00:43:46.788998 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:43:46.791053 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 8 00:43:46.805092 systemd[1]: coreos-metadata.service: Deactivated successfully. May 8 00:43:46.805302 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 8 00:43:46.807396 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 8 00:43:46.809370 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 8 00:43:47.285134 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:43:47.286818 systemd[1]: Reached target multi-user.target - Multi-User System. May 8 00:43:47.288772 systemd[1]: Startup finished in 5.443s (kernel) + 3.742s (userspace) = 9.185s. May 8 00:43:47.289064 (kubelet)[1645]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:43:47.759255 kubelet[1645]: E0508 00:43:47.759148 1645 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:43:47.761887 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:43:47.762093 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:43:51.657262 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 8 00:43:51.673712 systemd[1]: Started sshd@0-10.0.0.155:22-10.0.0.1:53500.service - OpenSSH per-connection server daemon (10.0.0.1:53500). May 8 00:43:51.720841 sshd[1660]: Accepted publickey for core from 10.0.0.1 port 53500 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:43:51.722567 sshd[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:43:51.741294 systemd-logind[1522]: New session 1 of user core. May 8 00:43:51.741684 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 8 00:43:51.755686 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 8 00:43:51.765024 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 8 00:43:51.767228 systemd[1]: Starting user@500.service - User Manager for UID 500... 
May 8 00:43:51.773406 (systemd)[1666]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 8 00:43:51.864229 systemd[1666]: Queued start job for default target default.target. May 8 00:43:51.864623 systemd[1666]: Created slice app.slice - User Application Slice. May 8 00:43:51.864645 systemd[1666]: Reached target paths.target - Paths. May 8 00:43:51.864656 systemd[1666]: Reached target timers.target - Timers. May 8 00:43:51.875552 systemd[1666]: Starting dbus.socket - D-Bus User Message Bus Socket... May 8 00:43:51.881116 systemd[1666]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 8 00:43:51.881175 systemd[1666]: Reached target sockets.target - Sockets. May 8 00:43:51.881186 systemd[1666]: Reached target basic.target - Basic System. May 8 00:43:51.881220 systemd[1666]: Reached target default.target - Main User Target. May 8 00:43:51.881244 systemd[1666]: Startup finished in 102ms. May 8 00:43:51.881559 systemd[1]: Started user@500.service - User Manager for UID 500. May 8 00:43:51.883369 systemd[1]: Started session-1.scope - Session 1 of User core. May 8 00:43:51.937673 systemd[1]: Started sshd@1-10.0.0.155:22-10.0.0.1:53502.service - OpenSSH per-connection server daemon (10.0.0.1:53502). May 8 00:43:51.966503 sshd[1678]: Accepted publickey for core from 10.0.0.1 port 53502 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:43:51.967656 sshd[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:43:51.971554 systemd-logind[1522]: New session 2 of user core. May 8 00:43:51.977676 systemd[1]: Started session-2.scope - Session 2 of User core. May 8 00:43:52.028399 sshd[1678]: pam_unix(sshd:session): session closed for user core May 8 00:43:52.048734 systemd[1]: Started sshd@2-10.0.0.155:22-10.0.0.1:53504.service - OpenSSH per-connection server daemon (10.0.0.1:53504). May 8 00:43:52.049084 systemd[1]: sshd@1-10.0.0.155:22-10.0.0.1:53502.service: Deactivated successfully. May 8 00:43:52.051144 systemd-logind[1522]: Session 2 logged out. Waiting for processes to exit. May 8 00:43:52.051251 systemd[1]: session-2.scope: Deactivated successfully. May 8 00:43:52.052200 systemd-logind[1522]: Removed session 2. May 8 00:43:52.076098 sshd[1683]: Accepted publickey for core from 10.0.0.1 port 53504 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:43:52.077165 sshd[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:43:52.080983 systemd-logind[1522]: New session 3 of user core. May 8 00:43:52.090654 systemd[1]: Started session-3.scope - Session 3 of User core. May 8 00:43:52.138013 sshd[1683]: pam_unix(sshd:session): session closed for user core May 8 00:43:52.147745 systemd[1]: Started sshd@3-10.0.0.155:22-10.0.0.1:53516.service - OpenSSH per-connection server daemon (10.0.0.1:53516). May 8 00:43:52.148162 systemd[1]: sshd@2-10.0.0.155:22-10.0.0.1:53504.service: Deactivated successfully. May 8 00:43:52.149787 systemd-logind[1522]: Session 3 logged out. Waiting for processes to exit. May 8 00:43:52.150387 systemd[1]: session-3.scope: Deactivated successfully. May 8 00:43:52.151703 systemd-logind[1522]: Removed session 3. 
May 8 00:43:52.174786 sshd[1691]: Accepted publickey for core from 10.0.0.1 port 53516 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:43:52.175959 sshd[1691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:43:52.179514 systemd-logind[1522]: New session 4 of user core. May 8 00:43:52.191654 systemd[1]: Started session-4.scope - Session 4 of User core. May 8 00:43:52.243399 sshd[1691]: pam_unix(sshd:session): session closed for user core May 8 00:43:52.249672 systemd[1]: Started sshd@4-10.0.0.155:22-10.0.0.1:53520.service - OpenSSH per-connection server daemon (10.0.0.1:53520). May 8 00:43:52.250055 systemd[1]: sshd@3-10.0.0.155:22-10.0.0.1:53516.service: Deactivated successfully. May 8 00:43:52.252299 systemd[1]: session-4.scope: Deactivated successfully. May 8 00:43:52.252604 systemd-logind[1522]: Session 4 logged out. Waiting for processes to exit. May 8 00:43:52.253717 systemd-logind[1522]: Removed session 4. May 8 00:43:52.277840 sshd[1699]: Accepted publickey for core from 10.0.0.1 port 53520 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:43:52.279574 sshd[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:43:52.283298 systemd-logind[1522]: New session 5 of user core. May 8 00:43:52.296758 systemd[1]: Started session-5.scope - Session 5 of User core. May 8 00:43:52.367377 sudo[1706]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 8 00:43:52.367678 sudo[1706]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:43:52.393251 sudo[1706]: pam_unix(sudo:session): session closed for user root May 8 00:43:52.394813 sshd[1699]: pam_unix(sshd:session): session closed for user core May 8 00:43:52.405644 systemd[1]: Started sshd@5-10.0.0.155:22-10.0.0.1:53526.service - OpenSSH per-connection server daemon (10.0.0.1:53526). May 8 00:43:52.406005 systemd[1]: sshd@4-10.0.0.155:22-10.0.0.1:53520.service: Deactivated successfully. May 8 00:43:52.408146 systemd[1]: session-5.scope: Deactivated successfully. May 8 00:43:52.408789 systemd-logind[1522]: Session 5 logged out. Waiting for processes to exit. May 8 00:43:52.409655 systemd-logind[1522]: Removed session 5. May 8 00:43:52.433570 sshd[1708]: Accepted publickey for core from 10.0.0.1 port 53526 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:43:52.434718 sshd[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:43:52.438006 systemd-logind[1522]: New session 6 of user core. May 8 00:43:52.449732 systemd[1]: Started session-6.scope - Session 6 of User core. May 8 00:43:52.501837 sudo[1716]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 8 00:43:52.502109 sudo[1716]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:43:52.505011 sudo[1716]: pam_unix(sudo:session): session closed for user root May 8 00:43:52.509317 sudo[1715]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 8 00:43:52.509863 sudo[1715]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:43:52.530706 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 8 00:43:52.532042 auditctl[1719]: No rules May 8 00:43:52.532779 systemd[1]: audit-rules.service: Deactivated successfully. 
May 8 00:43:52.533034 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 8 00:43:52.535022 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 8 00:43:52.556928 augenrules[1738]: No rules May 8 00:43:52.558031 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 8 00:43:52.559968 sudo[1715]: pam_unix(sudo:session): session closed for user root May 8 00:43:52.561474 sshd[1708]: pam_unix(sshd:session): session closed for user core May 8 00:43:52.572694 systemd[1]: Started sshd@6-10.0.0.155:22-10.0.0.1:46432.service - OpenSSH per-connection server daemon (10.0.0.1:46432). May 8 00:43:52.573572 systemd[1]: sshd@5-10.0.0.155:22-10.0.0.1:53526.service: Deactivated successfully. May 8 00:43:52.574989 systemd[1]: session-6.scope: Deactivated successfully. May 8 00:43:52.575709 systemd-logind[1522]: Session 6 logged out. Waiting for processes to exit. May 8 00:43:52.577385 systemd-logind[1522]: Removed session 6. May 8 00:43:52.600181 sshd[1744]: Accepted publickey for core from 10.0.0.1 port 46432 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:43:52.601307 sshd[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:43:52.605238 systemd-logind[1522]: New session 7 of user core. May 8 00:43:52.619685 systemd[1]: Started session-7.scope - Session 7 of User core. May 8 00:43:52.668921 sudo[1751]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 8 00:43:52.669177 sudo[1751]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:43:52.975705 systemd[1]: Starting docker.service - Docker Application Container Engine... May 8 00:43:52.976018 (dockerd)[1770]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 8 00:43:53.263478 dockerd[1770]: time="2025-05-08T00:43:53.263334315Z" level=info msg="Starting up" May 8 00:43:53.505017 dockerd[1770]: time="2025-05-08T00:43:53.504962862Z" level=info msg="Loading containers: start." May 8 00:43:53.600495 kernel: Initializing XFRM netlink socket May 8 00:43:53.674276 systemd-networkd[1231]: docker0: Link UP May 8 00:43:53.698563 dockerd[1770]: time="2025-05-08T00:43:53.698513070Z" level=info msg="Loading containers: done." May 8 00:43:53.718425 dockerd[1770]: time="2025-05-08T00:43:53.718370868Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 8 00:43:53.718563 dockerd[1770]: time="2025-05-08T00:43:53.718493911Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 8 00:43:53.718634 dockerd[1770]: time="2025-05-08T00:43:53.718595341Z" level=info msg="Daemon has completed initialization" May 8 00:43:53.750767 dockerd[1770]: time="2025-05-08T00:43:53.750461657Z" level=info msg="API listen on /run/docker.sock" May 8 00:43:53.750678 systemd[1]: Started docker.service - Docker Application Container Engine. May 8 00:43:54.441511 containerd[1546]: time="2025-05-08T00:43:54.441445044Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 8 00:43:55.062440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount32417967.mount: Deactivated successfully. 
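[editor's note] dockerd reports "API listen on /run/docker.sock" above after completing initialization. A hedged Go sketch using the official Docker client (github.com/docker/docker/client) that pings that daemon could look like this; with no DOCKER_HOST set, the client defaults to the same unix socket, and version negotiation avoids pinning an API version for the 26.1.0 daemon in the log.

```go
// Sketch: ping the Docker daemon whose startup is logged above.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatalf("client: %v", err)
	}
	defer cli.Close()

	ping, err := cli.Ping(context.Background())
	if err != nil {
		log.Fatalf("ping: %v", err)
	}
	fmt.Printf("docker API %s, OS type %s\n", ping.APIVersion, ping.OSType)
}
```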
May 8 00:43:56.572065 containerd[1546]: time="2025-05-08T00:43:56.571911551Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:43:56.573035 containerd[1546]: time="2025-05-08T00:43:56.572757339Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794152" May 8 00:43:56.573791 containerd[1546]: time="2025-05-08T00:43:56.573753769Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:43:56.576756 containerd[1546]: time="2025-05-08T00:43:56.576721602Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:43:56.577980 containerd[1546]: time="2025-05-08T00:43:56.577935483Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 2.136426634s" May 8 00:43:56.578021 containerd[1546]: time="2025-05-08T00:43:56.577978877Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\"" May 8 00:43:56.596208 containerd[1546]: time="2025-05-08T00:43:56.596162475Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 8 00:43:58.012289 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 8 00:43:58.021633 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:43:58.131617 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 8 00:43:58.135577 (kubelet)[2003]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:43:58.215660 containerd[1546]: time="2025-05-08T00:43:58.215610691Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:43:58.217402 containerd[1546]: time="2025-05-08T00:43:58.217305401Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855552" May 8 00:43:58.219777 containerd[1546]: time="2025-05-08T00:43:58.218383323Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:43:58.221539 containerd[1546]: time="2025-05-08T00:43:58.221507390Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:43:58.222201 containerd[1546]: time="2025-05-08T00:43:58.222160625Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 1.625955754s" May 8 00:43:58.222201 containerd[1546]: time="2025-05-08T00:43:58.222197071Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\"" May 8 00:43:58.230129 kubelet[2003]: E0508 00:43:58.230069 2003 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:43:58.233242 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:43:58.233409 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
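[editor's note] Both kubelet failures so far (PIDs 1645 and 2003) are the same bootstrap gap: /var/lib/kubelet/config.yaml does not exist yet, because nothing has run kubeadm init/join on this node at this point, and kubeadm is what normally writes that file. The sketch below merely mimics the precondition the kubelet is tripping over; it is an illustration, not the kubelet's actual code path.

```go
// Illustration only: the "failed to read kubelet config file" error above
// comes from the kubelet reading /var/lib/kubelet/config.yaml, which a
// `kubeadm init`/`join` normally writes. This mimics that precondition check.
package main

import (
	"fmt"
	"os"
)

func main() {
	const path = "/var/lib/kubelet/config.yaml"
	if _, err := os.Stat(path); os.IsNotExist(err) {
		fmt.Printf("kubelet not bootstrapped yet: %s missing\n", path)
		os.Exit(1)
	}
	fmt.Println("kubelet config present; the restart loop should settle")
}
```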
May 8 00:43:58.244079 containerd[1546]: time="2025-05-08T00:43:58.244047676Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 8 00:43:59.256352 containerd[1546]: time="2025-05-08T00:43:59.256289228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:43:59.256999 containerd[1546]: time="2025-05-08T00:43:59.256954171Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263947" May 8 00:43:59.257594 containerd[1546]: time="2025-05-08T00:43:59.257566040Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:43:59.260328 containerd[1546]: time="2025-05-08T00:43:59.260294351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:43:59.262369 containerd[1546]: time="2025-05-08T00:43:59.262338537Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 1.018251896s" May 8 00:43:59.262408 containerd[1546]: time="2025-05-08T00:43:59.262370269Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\"" May 8 00:43:59.285181 containerd[1546]: time="2025-05-08T00:43:59.285148654Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 8 00:44:00.178363 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount365433046.mount: Deactivated successfully. 
May 8 00:44:00.497147 containerd[1546]: time="2025-05-08T00:44:00.497022046Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:44:00.498202 containerd[1546]: time="2025-05-08T00:44:00.497731629Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775707" May 8 00:44:00.499180 containerd[1546]: time="2025-05-08T00:44:00.499137005Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:44:00.501103 containerd[1546]: time="2025-05-08T00:44:00.501067074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:44:00.501697 containerd[1546]: time="2025-05-08T00:44:00.501667865Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.216483282s" May 8 00:44:00.501753 containerd[1546]: time="2025-05-08T00:44:00.501699519Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\"" May 8 00:44:00.520328 containerd[1546]: time="2025-05-08T00:44:00.520303179Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 8 00:44:01.122789 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4183653178.mount: Deactivated successfully. 
May 8 00:44:01.802830 containerd[1546]: time="2025-05-08T00:44:01.802782568Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:44:01.804276 containerd[1546]: time="2025-05-08T00:44:01.804012190Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" May 8 00:44:01.805100 containerd[1546]: time="2025-05-08T00:44:01.805068344Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:44:01.808126 containerd[1546]: time="2025-05-08T00:44:01.808087360Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:44:01.809430 containerd[1546]: time="2025-05-08T00:44:01.809393324Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.28905893s" May 8 00:44:01.809430 containerd[1546]: time="2025-05-08T00:44:01.809428817Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 8 00:44:01.828077 containerd[1546]: time="2025-05-08T00:44:01.828048372Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 8 00:44:02.267363 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1421245250.mount: Deactivated successfully. 
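[editor's note] As a rough sanity check on the pull timings logged so far: kube-apiserver moved 29,790,950 bytes in ~2.14 s (≈14 MB/s), kube-proxy 25,774,724 bytes in ~1.22 s (≈21 MB/s), and coredns 16,482,581 bytes in ~1.29 s (≈13 MB/s). The throwaway Go snippet below just reproduces that arithmetic from the figures in the log.

```go
// Back-of-envelope check on the image pull timings logged above: bytes / seconds.
package main

import "fmt"

func main() {
	pulls := map[string]struct {
		bytes float64
		secs  float64
	}{
		"kube-apiserver:v1.30.12": {29790950, 2.136426634},
		"kube-proxy:v1.30.12":     {25774724, 1.216483282},
		"coredns:v1.11.1":         {16482581, 1.28905893},
	}
	for name, p := range pulls {
		fmt.Printf("%-28s ≈ %.1f MB/s\n", name, p.bytes/p.secs/1e6)
	}
}
```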
May 8 00:44:02.273105 containerd[1546]: time="2025-05-08T00:44:02.273052974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:44:02.274060 containerd[1546]: time="2025-05-08T00:44:02.273848884Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" May 8 00:44:02.274911 containerd[1546]: time="2025-05-08T00:44:02.274858083Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:44:02.277412 containerd[1546]: time="2025-05-08T00:44:02.277349021Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:44:02.278466 containerd[1546]: time="2025-05-08T00:44:02.278412460Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 450.327596ms" May 8 00:44:02.278518 containerd[1546]: time="2025-05-08T00:44:02.278466582Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" May 8 00:44:02.299579 containerd[1546]: time="2025-05-08T00:44:02.299538950Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 8 00:44:02.842577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount86173911.mount: Deactivated successfully. May 8 00:44:05.411825 containerd[1546]: time="2025-05-08T00:44:05.411631518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:44:05.412813 containerd[1546]: time="2025-05-08T00:44:05.412781360Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" May 8 00:44:05.413536 containerd[1546]: time="2025-05-08T00:44:05.413479389Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:44:05.416697 containerd[1546]: time="2025-05-08T00:44:05.416643725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:44:05.418426 containerd[1546]: time="2025-05-08T00:44:05.418378382Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.118801379s" May 8 00:44:05.418426 containerd[1546]: time="2025-05-08T00:44:05.418420957Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" May 8 00:44:08.483717 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
May 8 00:44:08.490646 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:44:08.691003 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:44:08.695126 (kubelet)[2235]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:44:08.735051 kubelet[2235]: E0508 00:44:08.734927 2235 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:44:08.738134 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:44:08.738335 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:44:10.617681 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:44:10.627691 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:44:10.644189 systemd[1]: Reloading requested from client PID 2253 ('systemctl') (unit session-7.scope)... May 8 00:44:10.644209 systemd[1]: Reloading... May 8 00:44:10.713480 zram_generator::config[2295]: No configuration found. May 8 00:44:10.815063 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:44:10.867626 systemd[1]: Reloading finished in 223 ms. May 8 00:44:10.901274 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:44:10.904654 systemd[1]: kubelet.service: Deactivated successfully. May 8 00:44:10.904917 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:44:10.906629 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:44:11.001170 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:44:11.005300 (kubelet)[2352]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 00:44:11.047042 kubelet[2352]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:44:11.047042 kubelet[2352]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 8 00:44:11.047042 kubelet[2352]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 8 00:44:11.047392 kubelet[2352]: I0508 00:44:11.047080 2352 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:44:11.785572 kubelet[2352]: I0508 00:44:11.785522 2352 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 8 00:44:11.785572 kubelet[2352]: I0508 00:44:11.785555 2352 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:44:11.785786 kubelet[2352]: I0508 00:44:11.785759 2352 server.go:927] "Client rotation is on, will bootstrap in background" May 8 00:44:11.824593 kubelet[2352]: E0508 00:44:11.824549 2352 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.155:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.155:6443: connect: connection refused May 8 00:44:11.824849 kubelet[2352]: I0508 00:44:11.824822 2352 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:44:11.835262 kubelet[2352]: I0508 00:44:11.835234 2352 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 8 00:44:11.836420 kubelet[2352]: I0508 00:44:11.836374 2352 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:44:11.836600 kubelet[2352]: I0508 00:44:11.836419 2352 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 8 00:44:11.836822 kubelet[2352]: I0508 00:44:11.836802 2352 topology_manager.go:138] "Creating topology manager with none policy" May 8 00:44:11.836849 kubelet[2352]: I0508 00:44:11.836824 2352 container_manager_linux.go:301] "Creating device plugin manager" May 8 00:44:11.837259 kubelet[2352]: I0508 00:44:11.837222 2352 state_mem.go:36] "Initialized new in-memory state store" May 8 
00:44:11.838139 kubelet[2352]: I0508 00:44:11.838111 2352 kubelet.go:400] "Attempting to sync node with API server" May 8 00:44:11.838684 kubelet[2352]: I0508 00:44:11.838134 2352 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:44:11.838826 kubelet[2352]: W0508 00:44:11.838775 2352 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.155:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.155:6443: connect: connection refused May 8 00:44:11.838859 kubelet[2352]: E0508 00:44:11.838831 2352 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.155:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.155:6443: connect: connection refused May 8 00:44:11.838894 kubelet[2352]: I0508 00:44:11.838883 2352 kubelet.go:312] "Adding apiserver pod source" May 8 00:44:11.838968 kubelet[2352]: I0508 00:44:11.838958 2352 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:44:11.839369 kubelet[2352]: W0508 00:44:11.839337 2352 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.155:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.155:6443: connect: connection refused May 8 00:44:11.839418 kubelet[2352]: E0508 00:44:11.839378 2352 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.155:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.155:6443: connect: connection refused May 8 00:44:11.839928 kubelet[2352]: I0508 00:44:11.839909 2352 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 8 00:44:11.840439 kubelet[2352]: I0508 00:44:11.840376 2352 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:44:11.840770 kubelet[2352]: W0508 00:44:11.840584 2352 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
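[editor's note] The "Adding static pod path" entry above points the kubelet at /etc/kubernetes/manifests, which is where the kube-apiserver/kube-controller-manager/kube-scheduler pods admitted later in this log come from. A small sketch that enumerates that directory (the .yaml file names are typical kubeadm output, assumed rather than shown in this log):

```go
// Sketch: list the static pod manifests the kubelet watches
// (path from the "Adding static pod path" entry above).
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/manifests"
	entries, err := os.ReadDir(dir)
	if err != nil {
		log.Fatalf("read %s: %v", dir, err)
	}
	for _, e := range entries {
		if filepath.Ext(e.Name()) == ".yaml" {
			fmt.Println("static pod manifest:", filepath.Join(dir, e.Name()))
		}
	}
}
```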
May 8 00:44:11.841632 kubelet[2352]: I0508 00:44:11.841616 2352 server.go:1264] "Started kubelet" May 8 00:44:11.843129 kubelet[2352]: I0508 00:44:11.843104 2352 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:44:11.848061 kubelet[2352]: E0508 00:44:11.847789 2352 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.155:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.155:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183d669c449b7243 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-08 00:44:11.841589827 +0000 UTC m=+0.833441118,LastTimestamp:2025-05-08 00:44:11.841589827 +0000 UTC m=+0.833441118,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 8 00:44:11.848188 kubelet[2352]: I0508 00:44:11.848160 2352 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:44:11.848276 kubelet[2352]: I0508 00:44:11.848226 2352 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:44:11.850398 kubelet[2352]: I0508 00:44:11.848504 2352 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:44:11.850398 kubelet[2352]: E0508 00:44:11.849385 2352 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:44:11.850398 kubelet[2352]: I0508 00:44:11.849582 2352 server.go:455] "Adding debug handlers to kubelet server" May 8 00:44:11.850398 kubelet[2352]: E0508 00:44:11.849596 2352 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:44:11.850398 kubelet[2352]: I0508 00:44:11.849718 2352 volume_manager.go:291] "Starting Kubelet Volume Manager" May 8 00:44:11.850398 kubelet[2352]: I0508 00:44:11.849822 2352 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 00:44:11.850398 kubelet[2352]: I0508 00:44:11.850016 2352 reconciler.go:26] "Reconciler: start to sync state" May 8 00:44:11.850398 kubelet[2352]: W0508 00:44:11.850263 2352 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.155:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.155:6443: connect: connection refused May 8 00:44:11.850398 kubelet[2352]: E0508 00:44:11.850299 2352 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.155:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.155:6443: connect: connection refused May 8 00:44:11.850398 kubelet[2352]: E0508 00:44:11.850350 2352 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.155:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.155:6443: connect: connection refused" interval="200ms" May 8 00:44:11.850691 kubelet[2352]: I0508 00:44:11.850520 2352 factory.go:221] Registration of the systemd container factory 
successfully May 8 00:44:11.850691 kubelet[2352]: I0508 00:44:11.850615 2352 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:44:11.853092 kubelet[2352]: I0508 00:44:11.853063 2352 factory.go:221] Registration of the containerd container factory successfully May 8 00:44:11.863109 kubelet[2352]: I0508 00:44:11.863051 2352 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:44:11.865403 kubelet[2352]: I0508 00:44:11.865361 2352 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 8 00:44:11.865403 kubelet[2352]: I0508 00:44:11.865409 2352 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 00:44:11.865512 kubelet[2352]: I0508 00:44:11.865435 2352 kubelet.go:2337] "Starting kubelet main sync loop" May 8 00:44:11.865512 kubelet[2352]: E0508 00:44:11.865501 2352 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:44:11.866443 kubelet[2352]: W0508 00:44:11.866066 2352 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.155:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.155:6443: connect: connection refused May 8 00:44:11.866443 kubelet[2352]: E0508 00:44:11.866117 2352 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.155:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.155:6443: connect: connection refused May 8 00:44:11.871660 kubelet[2352]: I0508 00:44:11.871637 2352 cpu_manager.go:214] "Starting CPU manager" policy="none" May 8 00:44:11.871759 kubelet[2352]: I0508 00:44:11.871749 2352 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 00:44:11.871814 kubelet[2352]: I0508 00:44:11.871806 2352 state_mem.go:36] "Initialized new in-memory state store" May 8 00:44:11.936262 kubelet[2352]: I0508 00:44:11.936231 2352 policy_none.go:49] "None policy: Start" May 8 00:44:11.937481 kubelet[2352]: I0508 00:44:11.937097 2352 memory_manager.go:170] "Starting memorymanager" policy="None" May 8 00:44:11.937481 kubelet[2352]: I0508 00:44:11.937131 2352 state_mem.go:35] "Initializing new in-memory state store" May 8 00:44:11.945610 kubelet[2352]: I0508 00:44:11.945580 2352 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:44:11.945907 kubelet[2352]: I0508 00:44:11.945868 2352 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:44:11.946426 kubelet[2352]: I0508 00:44:11.946040 2352 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:44:11.947374 kubelet[2352]: E0508 00:44:11.947351 2352 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 8 00:44:11.951569 kubelet[2352]: I0508 00:44:11.951548 2352 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:44:11.951932 kubelet[2352]: E0508 00:44:11.951895 2352 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.155:6443/api/v1/nodes\": dial tcp 10.0.0.155:6443: connect: 
connection refused" node="localhost" May 8 00:44:11.966025 kubelet[2352]: I0508 00:44:11.965995 2352 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 8 00:44:11.967473 kubelet[2352]: I0508 00:44:11.967092 2352 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 8 00:44:11.967961 kubelet[2352]: I0508 00:44:11.967932 2352 topology_manager.go:215] "Topology Admit Handler" podUID="c6b1e54d86db4ce750008fe8337d3cd7" podNamespace="kube-system" podName="kube-apiserver-localhost" May 8 00:44:12.050888 kubelet[2352]: E0508 00:44:12.050783 2352 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.155:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.155:6443: connect: connection refused" interval="400ms" May 8 00:44:12.151047 kubelet[2352]: I0508 00:44:12.150985 2352 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6b1e54d86db4ce750008fe8337d3cd7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c6b1e54d86db4ce750008fe8337d3cd7\") " pod="kube-system/kube-apiserver-localhost" May 8 00:44:12.151047 kubelet[2352]: I0508 00:44:12.151041 2352 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6b1e54d86db4ce750008fe8337d3cd7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c6b1e54d86db4ce750008fe8337d3cd7\") " pod="kube-system/kube-apiserver-localhost" May 8 00:44:12.151198 kubelet[2352]: I0508 00:44:12.151062 2352 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:44:12.151198 kubelet[2352]: I0508 00:44:12.151078 2352 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:44:12.151198 kubelet[2352]: I0508 00:44:12.151093 2352 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:44:12.151198 kubelet[2352]: I0508 00:44:12.151108 2352 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:44:12.151198 kubelet[2352]: I0508 00:44:12.151124 2352 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 8 00:44:12.151298 kubelet[2352]: I0508 00:44:12.151140 2352 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:44:12.151298 kubelet[2352]: I0508 00:44:12.151159 2352 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6b1e54d86db4ce750008fe8337d3cd7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c6b1e54d86db4ce750008fe8337d3cd7\") " pod="kube-system/kube-apiserver-localhost" May 8 00:44:12.152979 kubelet[2352]: I0508 00:44:12.152953 2352 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:44:12.153297 kubelet[2352]: E0508 00:44:12.153254 2352 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.155:6443/api/v1/nodes\": dial tcp 10.0.0.155:6443: connect: connection refused" node="localhost" May 8 00:44:12.271296 kubelet[2352]: E0508 00:44:12.271255 2352 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:12.271349 kubelet[2352]: E0508 00:44:12.271276 2352 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:12.275173 kubelet[2352]: E0508 00:44:12.275142 2352 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:12.280439 containerd[1546]: time="2025-05-08T00:44:12.280398990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c6b1e54d86db4ce750008fe8337d3cd7,Namespace:kube-system,Attempt:0,}" May 8 00:44:12.281153 containerd[1546]: time="2025-05-08T00:44:12.280925912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 8 00:44:12.281153 containerd[1546]: time="2025-05-08T00:44:12.280474442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 8 00:44:12.451670 kubelet[2352]: E0508 00:44:12.451556 2352 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.155:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.155:6443: connect: connection refused" interval="800ms" May 8 00:44:12.555150 kubelet[2352]: I0508 00:44:12.555115 2352 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:44:12.555501 kubelet[2352]: E0508 00:44:12.555477 2352 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.155:6443/api/v1/nodes\": dial tcp 
10.0.0.155:6443: connect: connection refused" node="localhost" May 8 00:44:12.756007 kubelet[2352]: W0508 00:44:12.755870 2352 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.155:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.155:6443: connect: connection refused May 8 00:44:12.756007 kubelet[2352]: E0508 00:44:12.755936 2352 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.155:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.155:6443: connect: connection refused May 8 00:44:12.962133 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2329579396.mount: Deactivated successfully. May 8 00:44:12.965750 containerd[1546]: time="2025-05-08T00:44:12.965696500Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:44:12.968329 containerd[1546]: time="2025-05-08T00:44:12.968284129Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" May 8 00:44:12.970264 containerd[1546]: time="2025-05-08T00:44:12.970203370Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:44:12.971794 containerd[1546]: time="2025-05-08T00:44:12.971759106Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 00:44:12.972005 containerd[1546]: time="2025-05-08T00:44:12.971967188Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:44:12.972890 containerd[1546]: time="2025-05-08T00:44:12.972852856Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:44:12.972931 containerd[1546]: time="2025-05-08T00:44:12.972908875Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 00:44:12.975430 containerd[1546]: time="2025-05-08T00:44:12.975368073Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:44:12.977782 containerd[1546]: time="2025-05-08T00:44:12.977675448Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 696.546171ms" May 8 00:44:12.978757 containerd[1546]: time="2025-05-08T00:44:12.978539684Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 697.61705ms" May 8 00:44:12.981571 containerd[1546]: time="2025-05-08T00:44:12.981530642Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 700.532917ms" May 8 00:44:13.023127 kubelet[2352]: W0508 00:44:13.022986 2352 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.155:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.155:6443: connect: connection refused May 8 00:44:13.023127 kubelet[2352]: E0508 00:44:13.023027 2352 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.155:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.155:6443: connect: connection refused May 8 00:44:13.119966 containerd[1546]: time="2025-05-08T00:44:13.119773522Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:44:13.119966 containerd[1546]: time="2025-05-08T00:44:13.119822984Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:44:13.119966 containerd[1546]: time="2025-05-08T00:44:13.119846936Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:44:13.119966 containerd[1546]: time="2025-05-08T00:44:13.119594545Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:44:13.119966 containerd[1546]: time="2025-05-08T00:44:13.119650085Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:44:13.119966 containerd[1546]: time="2025-05-08T00:44:13.119666159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:44:13.120825 containerd[1546]: time="2025-05-08T00:44:13.120738223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:44:13.121229 containerd[1546]: time="2025-05-08T00:44:13.121082621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:44:13.124168 containerd[1546]: time="2025-05-08T00:44:13.123623568Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:44:13.124168 containerd[1546]: time="2025-05-08T00:44:13.124086725Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:44:13.124168 containerd[1546]: time="2025-05-08T00:44:13.124101560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:44:13.125264 containerd[1546]: time="2025-05-08T00:44:13.124447399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:44:13.160843 kubelet[2352]: W0508 00:44:13.160735 2352 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.155:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.155:6443: connect: connection refused May 8 00:44:13.160843 kubelet[2352]: E0508 00:44:13.160802 2352 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.155:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.155:6443: connect: connection refused May 8 00:44:13.165218 containerd[1546]: time="2025-05-08T00:44:13.165101507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"b025623d98d51cc5bdf49b31215e1f931c9f2cd7a544cce3d87ca9e0b22257a4\"" May 8 00:44:13.167038 containerd[1546]: time="2025-05-08T00:44:13.167008277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c6b1e54d86db4ce750008fe8337d3cd7,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a49ac4a00eea58677e05f3f1d84b8484b483f3361da733c88cfbb9d980497d1\"" May 8 00:44:13.173302 kubelet[2352]: E0508 00:44:13.173270 2352 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:13.173531 kubelet[2352]: E0508 00:44:13.173427 2352 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:13.174080 containerd[1546]: time="2025-05-08T00:44:13.173899895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"792ee74141c787a6485629f5f935c74660a8b227e4e5000e71d77a459a6f1e58\"" May 8 00:44:13.175889 kubelet[2352]: E0508 00:44:13.175850 2352 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:13.176624 containerd[1546]: time="2025-05-08T00:44:13.176579433Z" level=info msg="CreateContainer within sandbox \"b025623d98d51cc5bdf49b31215e1f931c9f2cd7a544cce3d87ca9e0b22257a4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 8 00:44:13.182141 containerd[1546]: time="2025-05-08T00:44:13.180812225Z" level=info msg="CreateContainer within sandbox \"792ee74141c787a6485629f5f935c74660a8b227e4e5000e71d77a459a6f1e58\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 8 00:44:13.182141 containerd[1546]: time="2025-05-08T00:44:13.181266185Z" level=info msg="CreateContainer within sandbox \"8a49ac4a00eea58677e05f3f1d84b8484b483f3361da733c88cfbb9d980497d1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 8 00:44:13.199670 containerd[1546]: time="2025-05-08T00:44:13.199609737Z" level=info msg="CreateContainer within sandbox 
\"b025623d98d51cc5bdf49b31215e1f931c9f2cd7a544cce3d87ca9e0b22257a4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c54cc79941072e92e9b30b05c0e68bbceb0af32b8b2eaa494877fd7bb3bb48be\"" May 8 00:44:13.201118 containerd[1546]: time="2025-05-08T00:44:13.200961781Z" level=info msg="StartContainer for \"c54cc79941072e92e9b30b05c0e68bbceb0af32b8b2eaa494877fd7bb3bb48be\"" May 8 00:44:13.204603 containerd[1546]: time="2025-05-08T00:44:13.204524809Z" level=info msg="CreateContainer within sandbox \"8a49ac4a00eea58677e05f3f1d84b8484b483f3361da733c88cfbb9d980497d1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"84e33e11de7cd46e7df9647c99e5f0deaefc7fbaf689a9ca0640bba5f336a650\"" May 8 00:44:13.205403 kubelet[2352]: W0508 00:44:13.205337 2352 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.155:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.155:6443: connect: connection refused May 8 00:44:13.205403 kubelet[2352]: E0508 00:44:13.205407 2352 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.155:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.155:6443: connect: connection refused May 8 00:44:13.205613 containerd[1546]: time="2025-05-08T00:44:13.205551288Z" level=info msg="StartContainer for \"84e33e11de7cd46e7df9647c99e5f0deaefc7fbaf689a9ca0640bba5f336a650\"" May 8 00:44:13.207190 containerd[1546]: time="2025-05-08T00:44:13.207142169Z" level=info msg="CreateContainer within sandbox \"792ee74141c787a6485629f5f935c74660a8b227e4e5000e71d77a459a6f1e58\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5eccca5dd63f960b621e182a46436db6bfe583ef2dc4fc104c7ceb6239382ee2\"" May 8 00:44:13.207922 containerd[1546]: time="2025-05-08T00:44:13.207563261Z" level=info msg="StartContainer for \"5eccca5dd63f960b621e182a46436db6bfe583ef2dc4fc104c7ceb6239382ee2\"" May 8 00:44:13.252149 kubelet[2352]: E0508 00:44:13.252099 2352 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.155:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.155:6443: connect: connection refused" interval="1.6s" May 8 00:44:13.265223 containerd[1546]: time="2025-05-08T00:44:13.265178767Z" level=info msg="StartContainer for \"c54cc79941072e92e9b30b05c0e68bbceb0af32b8b2eaa494877fd7bb3bb48be\" returns successfully" May 8 00:44:13.265519 containerd[1546]: time="2025-05-08T00:44:13.265490098Z" level=info msg="StartContainer for \"5eccca5dd63f960b621e182a46436db6bfe583ef2dc4fc104c7ceb6239382ee2\" returns successfully" May 8 00:44:13.275587 containerd[1546]: time="2025-05-08T00:44:13.275432962Z" level=info msg="StartContainer for \"84e33e11de7cd46e7df9647c99e5f0deaefc7fbaf689a9ca0640bba5f336a650\" returns successfully" May 8 00:44:13.357059 kubelet[2352]: I0508 00:44:13.356759 2352 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:44:13.357407 kubelet[2352]: E0508 00:44:13.357374 2352 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.155:6443/api/v1/nodes\": dial tcp 10.0.0.155:6443: connect: connection refused" node="localhost" May 8 00:44:13.874875 kubelet[2352]: E0508 00:44:13.874841 2352 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:13.876879 kubelet[2352]: E0508 00:44:13.876852 2352 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:13.879270 kubelet[2352]: E0508 00:44:13.879250 2352 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:14.882649 kubelet[2352]: E0508 00:44:14.882529 2352 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:14.960448 kubelet[2352]: I0508 00:44:14.959569 2352 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:44:15.442190 kubelet[2352]: E0508 00:44:15.442130 2352 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:15.447060 kubelet[2352]: E0508 00:44:15.447012 2352 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 8 00:44:15.533988 kubelet[2352]: I0508 00:44:15.533935 2352 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 8 00:44:15.547101 kubelet[2352]: E0508 00:44:15.547070 2352 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:44:15.648158 kubelet[2352]: E0508 00:44:15.648114 2352 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:44:15.748920 kubelet[2352]: E0508 00:44:15.748616 2352 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:44:15.849345 kubelet[2352]: E0508 00:44:15.849296 2352 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:44:15.950217 kubelet[2352]: E0508 00:44:15.950144 2352 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:44:16.051115 kubelet[2352]: E0508 00:44:16.051004 2352 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:44:16.151555 kubelet[2352]: E0508 00:44:16.151501 2352 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:44:16.252024 kubelet[2352]: E0508 00:44:16.251978 2352 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:44:16.841470 kubelet[2352]: I0508 00:44:16.841377 2352 apiserver.go:52] "Watching apiserver" May 8 00:44:16.850717 kubelet[2352]: I0508 00:44:16.850678 2352 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 00:44:16.913478 kubelet[2352]: E0508 00:44:16.913434 2352 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:17.467315 systemd[1]: Reloading requested from client PID 2629 ('systemctl') (unit session-7.scope)... May 8 00:44:17.467332 systemd[1]: Reloading... 
May 8 00:44:17.523567 zram_generator::config[2670]: No configuration found. May 8 00:44:17.607904 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:44:17.666188 systemd[1]: Reloading finished in 198 ms. May 8 00:44:17.691829 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:44:17.707447 systemd[1]: kubelet.service: Deactivated successfully. May 8 00:44:17.707815 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:44:17.717830 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:44:17.801336 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:44:17.805322 (kubelet)[2720]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 00:44:17.842643 kubelet[2720]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:44:17.842643 kubelet[2720]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 8 00:44:17.842643 kubelet[2720]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:44:17.843005 kubelet[2720]: I0508 00:44:17.842680 2720 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:44:17.846667 kubelet[2720]: I0508 00:44:17.846640 2720 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 8 00:44:17.847323 kubelet[2720]: I0508 00:44:17.846770 2720 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:44:17.847323 kubelet[2720]: I0508 00:44:17.846956 2720 server.go:927] "Client rotation is on, will bootstrap in background" May 8 00:44:17.848241 kubelet[2720]: I0508 00:44:17.848205 2720 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 8 00:44:17.849687 kubelet[2720]: I0508 00:44:17.849583 2720 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:44:17.857557 kubelet[2720]: I0508 00:44:17.857534 2720 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 8 00:44:17.858399 kubelet[2720]: I0508 00:44:17.858063 2720 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:44:17.858399 kubelet[2720]: I0508 00:44:17.858094 2720 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 8 00:44:17.858399 kubelet[2720]: I0508 00:44:17.858265 2720 topology_manager.go:138] "Creating topology manager with none policy" May 8 00:44:17.858399 kubelet[2720]: I0508 00:44:17.858274 2720 container_manager_linux.go:301] "Creating device plugin manager" May 8 00:44:17.858399 kubelet[2720]: I0508 00:44:17.858309 2720 state_mem.go:36] "Initialized new in-memory state store" May 8 00:44:17.858660 kubelet[2720]: I0508 00:44:17.858645 2720 kubelet.go:400] "Attempting to sync node with API server" May 8 00:44:17.858712 kubelet[2720]: I0508 00:44:17.858704 2720 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:44:17.858795 kubelet[2720]: I0508 00:44:17.858784 2720 kubelet.go:312] "Adding apiserver pod source" May 8 00:44:17.858851 kubelet[2720]: I0508 00:44:17.858843 2720 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:44:17.859556 kubelet[2720]: I0508 00:44:17.859537 2720 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 8 00:44:17.859875 kubelet[2720]: I0508 00:44:17.859862 2720 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:44:17.861132 kubelet[2720]: I0508 00:44:17.861113 2720 server.go:1264] "Started kubelet" May 8 00:44:17.864899 kubelet[2720]: I0508 00:44:17.863251 2720 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:44:17.864899 kubelet[2720]: I0508 00:44:17.863551 2720 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:44:17.864899 
kubelet[2720]: I0508 00:44:17.863590 2720 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:44:17.865508 kubelet[2720]: I0508 00:44:17.865493 2720 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:44:17.865776 kubelet[2720]: I0508 00:44:17.865761 2720 server.go:455] "Adding debug handlers to kubelet server" May 8 00:44:17.876220 kubelet[2720]: I0508 00:44:17.876150 2720 volume_manager.go:291] "Starting Kubelet Volume Manager" May 8 00:44:17.876544 kubelet[2720]: I0508 00:44:17.876529 2720 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 00:44:17.878637 kubelet[2720]: I0508 00:44:17.878619 2720 reconciler.go:26] "Reconciler: start to sync state" May 8 00:44:17.880584 kubelet[2720]: I0508 00:44:17.880567 2720 factory.go:221] Registration of the systemd container factory successfully May 8 00:44:17.880911 kubelet[2720]: I0508 00:44:17.880889 2720 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:44:17.881631 kubelet[2720]: E0508 00:44:17.881174 2720 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:44:17.884381 kubelet[2720]: I0508 00:44:17.884334 2720 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:44:17.886067 kubelet[2720]: I0508 00:44:17.884851 2720 factory.go:221] Registration of the containerd container factory successfully May 8 00:44:17.887104 kubelet[2720]: I0508 00:44:17.887057 2720 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 8 00:44:17.887176 kubelet[2720]: I0508 00:44:17.887131 2720 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 00:44:17.887176 kubelet[2720]: I0508 00:44:17.887156 2720 kubelet.go:2337] "Starting kubelet main sync loop" May 8 00:44:17.887224 kubelet[2720]: E0508 00:44:17.887212 2720 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:44:17.922515 kubelet[2720]: I0508 00:44:17.922487 2720 cpu_manager.go:214] "Starting CPU manager" policy="none" May 8 00:44:17.922616 kubelet[2720]: I0508 00:44:17.922509 2720 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 00:44:17.922616 kubelet[2720]: I0508 00:44:17.922570 2720 state_mem.go:36] "Initialized new in-memory state store" May 8 00:44:17.922760 kubelet[2720]: I0508 00:44:17.922742 2720 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 8 00:44:17.922786 kubelet[2720]: I0508 00:44:17.922760 2720 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 8 00:44:17.922786 kubelet[2720]: I0508 00:44:17.922779 2720 policy_none.go:49] "None policy: Start" May 8 00:44:17.923383 kubelet[2720]: I0508 00:44:17.923367 2720 memory_manager.go:170] "Starting memorymanager" policy="None" May 8 00:44:17.923423 kubelet[2720]: I0508 00:44:17.923397 2720 state_mem.go:35] "Initializing new in-memory state store" May 8 00:44:17.923623 kubelet[2720]: I0508 00:44:17.923605 2720 state_mem.go:75] "Updated machine memory state" May 8 00:44:17.924753 kubelet[2720]: I0508 00:44:17.924726 2720 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:44:17.925089 kubelet[2720]: I0508 00:44:17.924895 2720 
container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:44:17.925089 kubelet[2720]: I0508 00:44:17.924996 2720 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:44:17.980736 kubelet[2720]: I0508 00:44:17.980441 2720 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:44:17.986748 kubelet[2720]: I0508 00:44:17.986716 2720 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 8 00:44:17.987472 kubelet[2720]: I0508 00:44:17.986925 2720 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 8 00:44:17.988145 kubelet[2720]: I0508 00:44:17.987583 2720 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 8 00:44:17.988145 kubelet[2720]: I0508 00:44:17.987679 2720 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 8 00:44:17.988145 kubelet[2720]: I0508 00:44:17.987711 2720 topology_manager.go:215] "Topology Admit Handler" podUID="c6b1e54d86db4ce750008fe8337d3cd7" podNamespace="kube-system" podName="kube-apiserver-localhost" May 8 00:44:17.993722 kubelet[2720]: E0508 00:44:17.993675 2720 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 8 00:44:18.079889 kubelet[2720]: I0508 00:44:18.079795 2720 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6b1e54d86db4ce750008fe8337d3cd7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c6b1e54d86db4ce750008fe8337d3cd7\") " pod="kube-system/kube-apiserver-localhost" May 8 00:44:18.079889 kubelet[2720]: I0508 00:44:18.079836 2720 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6b1e54d86db4ce750008fe8337d3cd7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c6b1e54d86db4ce750008fe8337d3cd7\") " pod="kube-system/kube-apiserver-localhost" May 8 00:44:18.079889 kubelet[2720]: I0508 00:44:18.079858 2720 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:44:18.080114 kubelet[2720]: I0508 00:44:18.079918 2720 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:44:18.080114 kubelet[2720]: I0508 00:44:18.079949 2720 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:44:18.080114 kubelet[2720]: I0508 
00:44:18.079969 2720 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 8 00:44:18.080114 kubelet[2720]: I0508 00:44:18.079987 2720 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6b1e54d86db4ce750008fe8337d3cd7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c6b1e54d86db4ce750008fe8337d3cd7\") " pod="kube-system/kube-apiserver-localhost" May 8 00:44:18.080114 kubelet[2720]: I0508 00:44:18.080004 2720 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:44:18.080219 kubelet[2720]: I0508 00:44:18.080021 2720 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:44:18.295771 kubelet[2720]: E0508 00:44:18.295538 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:18.295771 kubelet[2720]: E0508 00:44:18.295543 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:18.296478 kubelet[2720]: E0508 00:44:18.296269 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:18.859270 kubelet[2720]: I0508 00:44:18.859204 2720 apiserver.go:52] "Watching apiserver" May 8 00:44:18.876986 kubelet[2720]: I0508 00:44:18.876926 2720 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 00:44:18.903866 kubelet[2720]: E0508 00:44:18.903388 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:18.903866 kubelet[2720]: E0508 00:44:18.903784 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:18.912968 kubelet[2720]: E0508 00:44:18.912433 2720 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 8 00:44:18.912968 kubelet[2720]: E0508 00:44:18.912894 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:18.929035 kubelet[2720]: I0508 00:44:18.928966 2720 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.928786738 podStartE2EDuration="1.928786738s" podCreationTimestamp="2025-05-08 00:44:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:44:18.920594024 +0000 UTC m=+1.112173617" watchObservedRunningTime="2025-05-08 00:44:18.928786738 +0000 UTC m=+1.120366251" May 8 00:44:18.938645 kubelet[2720]: I0508 00:44:18.938570 2720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.938550053 podStartE2EDuration="2.938550053s" podCreationTimestamp="2025-05-08 00:44:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:44:18.929180238 +0000 UTC m=+1.120759751" watchObservedRunningTime="2025-05-08 00:44:18.938550053 +0000 UTC m=+1.130129526" May 8 00:44:18.947228 kubelet[2720]: I0508 00:44:18.947142 2720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.947121431 podStartE2EDuration="1.947121431s" podCreationTimestamp="2025-05-08 00:44:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:44:18.938733046 +0000 UTC m=+1.130312559" watchObservedRunningTime="2025-05-08 00:44:18.947121431 +0000 UTC m=+1.138700984" May 8 00:44:19.909770 kubelet[2720]: E0508 00:44:19.909720 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:20.447476 kubelet[2720]: E0508 00:44:20.445028 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:21.068733 kubelet[2720]: E0508 00:44:21.068691 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:21.830184 kubelet[2720]: E0508 00:44:21.830147 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:22.864818 sudo[1751]: pam_unix(sudo:session): session closed for user root May 8 00:44:22.866470 sshd[1744]: pam_unix(sshd:session): session closed for user core May 8 00:44:22.870017 systemd-logind[1522]: Session 7 logged out. Waiting for processes to exit. May 8 00:44:22.870214 systemd[1]: sshd@6-10.0.0.155:22-10.0.0.1:46432.service: Deactivated successfully. May 8 00:44:22.871852 systemd[1]: session-7.scope: Deactivated successfully. May 8 00:44:22.872271 systemd-logind[1522]: Removed session 7. 
May 8 00:44:30.456592 kubelet[2720]: E0508 00:44:30.456561 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:31.076735 kubelet[2720]: E0508 00:44:31.076705 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:31.111975 update_engine[1528]: I20250508 00:44:31.111917 1528 update_attempter.cc:509] Updating boot flags... May 8 00:44:31.141550 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2815) May 8 00:44:31.170482 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2814) May 8 00:44:31.836583 kubelet[2720]: E0508 00:44:31.836554 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:32.196799 kubelet[2720]: I0508 00:44:32.196715 2720 topology_manager.go:215] "Topology Admit Handler" podUID="0627168b-7764-4687-89df-6aae33783e8d" podNamespace="kube-system" podName="kube-proxy-m7dfz" May 8 00:44:32.238394 kubelet[2720]: I0508 00:44:32.238355 2720 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 8 00:44:32.251487 containerd[1546]: time="2025-05-08T00:44:32.251419766Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 8 00:44:32.251808 kubelet[2720]: I0508 00:44:32.251705 2720 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 8 00:44:32.284159 kubelet[2720]: I0508 00:44:32.284096 2720 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m42n2\" (UniqueName: \"kubernetes.io/projected/0627168b-7764-4687-89df-6aae33783e8d-kube-api-access-m42n2\") pod \"kube-proxy-m7dfz\" (UID: \"0627168b-7764-4687-89df-6aae33783e8d\") " pod="kube-system/kube-proxy-m7dfz" May 8 00:44:32.284239 kubelet[2720]: I0508 00:44:32.284169 2720 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0627168b-7764-4687-89df-6aae33783e8d-kube-proxy\") pod \"kube-proxy-m7dfz\" (UID: \"0627168b-7764-4687-89df-6aae33783e8d\") " pod="kube-system/kube-proxy-m7dfz" May 8 00:44:32.284239 kubelet[2720]: I0508 00:44:32.284191 2720 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0627168b-7764-4687-89df-6aae33783e8d-xtables-lock\") pod \"kube-proxy-m7dfz\" (UID: \"0627168b-7764-4687-89df-6aae33783e8d\") " pod="kube-system/kube-proxy-m7dfz" May 8 00:44:32.284239 kubelet[2720]: I0508 00:44:32.284205 2720 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0627168b-7764-4687-89df-6aae33783e8d-lib-modules\") pod \"kube-proxy-m7dfz\" (UID: \"0627168b-7764-4687-89df-6aae33783e8d\") " pod="kube-system/kube-proxy-m7dfz" May 8 00:44:32.398129 kubelet[2720]: E0508 00:44:32.398023 2720 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 8 00:44:32.398129 kubelet[2720]: E0508 
00:44:32.398066 2720 projected.go:200] Error preparing data for projected volume kube-api-access-m42n2 for pod kube-system/kube-proxy-m7dfz: configmap "kube-root-ca.crt" not found May 8 00:44:32.398129 kubelet[2720]: E0508 00:44:32.398126 2720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0627168b-7764-4687-89df-6aae33783e8d-kube-api-access-m42n2 podName:0627168b-7764-4687-89df-6aae33783e8d nodeName:}" failed. No retries permitted until 2025-05-08 00:44:32.898106641 +0000 UTC m=+15.089686154 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-m42n2" (UniqueName: "kubernetes.io/projected/0627168b-7764-4687-89df-6aae33783e8d-kube-api-access-m42n2") pod "kube-proxy-m7dfz" (UID: "0627168b-7764-4687-89df-6aae33783e8d") : configmap "kube-root-ca.crt" not found May 8 00:44:32.518543 kubelet[2720]: I0508 00:44:32.518423 2720 topology_manager.go:215] "Topology Admit Handler" podUID="c1bc9d42-cae5-4782-9b07-b43f439ac374" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-gwb6p" May 8 00:44:32.586110 kubelet[2720]: I0508 00:44:32.586069 2720 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c1bc9d42-cae5-4782-9b07-b43f439ac374-var-lib-calico\") pod \"tigera-operator-797db67f8-gwb6p\" (UID: \"c1bc9d42-cae5-4782-9b07-b43f439ac374\") " pod="tigera-operator/tigera-operator-797db67f8-gwb6p" May 8 00:44:32.586272 kubelet[2720]: I0508 00:44:32.586259 2720 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpssh\" (UniqueName: \"kubernetes.io/projected/c1bc9d42-cae5-4782-9b07-b43f439ac374-kube-api-access-gpssh\") pod \"tigera-operator-797db67f8-gwb6p\" (UID: \"c1bc9d42-cae5-4782-9b07-b43f439ac374\") " pod="tigera-operator/tigera-operator-797db67f8-gwb6p" May 8 00:44:32.826596 containerd[1546]: time="2025-05-08T00:44:32.826477151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-gwb6p,Uid:c1bc9d42-cae5-4782-9b07-b43f439ac374,Namespace:tigera-operator,Attempt:0,}" May 8 00:44:32.847445 containerd[1546]: time="2025-05-08T00:44:32.847350518Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:44:32.847445 containerd[1546]: time="2025-05-08T00:44:32.847412792Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:44:32.847445 containerd[1546]: time="2025-05-08T00:44:32.847423351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:44:32.847626 containerd[1546]: time="2025-05-08T00:44:32.847523420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:44:32.892093 containerd[1546]: time="2025-05-08T00:44:32.892052229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-gwb6p,Uid:c1bc9d42-cae5-4782-9b07-b43f439ac374,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1fd88d58817207413673c986e7d95db6654ab6ef764ada6d9272d301dc4c9839\"" May 8 00:44:32.896664 containerd[1546]: time="2025-05-08T00:44:32.896625877Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 8 00:44:33.100089 kubelet[2720]: E0508 00:44:33.099972 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:33.100576 containerd[1546]: time="2025-05-08T00:44:33.100524698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m7dfz,Uid:0627168b-7764-4687-89df-6aae33783e8d,Namespace:kube-system,Attempt:0,}" May 8 00:44:33.118992 containerd[1546]: time="2025-05-08T00:44:33.118889603Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:44:33.118992 containerd[1546]: time="2025-05-08T00:44:33.118936198Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:44:33.118992 containerd[1546]: time="2025-05-08T00:44:33.118946917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:44:33.119407 containerd[1546]: time="2025-05-08T00:44:33.119033829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:44:33.149984 containerd[1546]: time="2025-05-08T00:44:33.149941721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m7dfz,Uid:0627168b-7764-4687-89df-6aae33783e8d,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1d7dec6cd14d767c224221747e31c68b49c35c4481b2ffda7ab18430159a5a4\"" May 8 00:44:33.150673 kubelet[2720]: E0508 00:44:33.150655 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:33.155695 containerd[1546]: time="2025-05-08T00:44:33.155485225Z" level=info msg="CreateContainer within sandbox \"c1d7dec6cd14d767c224221747e31c68b49c35c4481b2ffda7ab18430159a5a4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 8 00:44:33.168926 containerd[1546]: time="2025-05-08T00:44:33.168883210Z" level=info msg="CreateContainer within sandbox \"c1d7dec6cd14d767c224221747e31c68b49c35c4481b2ffda7ab18430159a5a4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"33b0b5a0dae10c57da379ddd6331d44d34b29220c52ba478dfb62e5350dd060d\"" May 8 00:44:33.170603 containerd[1546]: time="2025-05-08T00:44:33.170562167Z" level=info msg="StartContainer for \"33b0b5a0dae10c57da379ddd6331d44d34b29220c52ba478dfb62e5350dd060d\"" May 8 00:44:33.231507 containerd[1546]: time="2025-05-08T00:44:33.231470640Z" level=info msg="StartContainer for \"33b0b5a0dae10c57da379ddd6331d44d34b29220c52ba478dfb62e5350dd060d\" returns successfully" May 8 00:44:33.937220 kubelet[2720]: E0508 00:44:33.937140 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:33.954824 kubelet[2720]: I0508 00:44:33.954695 2720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-m7dfz" podStartSLOduration=1.954663011 podStartE2EDuration="1.954663011s" podCreationTimestamp="2025-05-08 00:44:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:44:33.95374034 +0000 UTC m=+16.145319853" watchObservedRunningTime="2025-05-08 00:44:33.954663011 +0000 UTC m=+16.146242524" May 8 00:44:34.178885 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2688313414.mount: Deactivated successfully. May 8 00:44:34.413476 containerd[1546]: time="2025-05-08T00:44:34.413419959Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:44:34.414388 containerd[1546]: time="2025-05-08T00:44:34.414326437Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084" May 8 00:44:34.414991 containerd[1546]: time="2025-05-08T00:44:34.414964139Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:44:34.417621 containerd[1546]: time="2025-05-08T00:44:34.417579702Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:44:34.418512 containerd[1546]: time="2025-05-08T00:44:34.418443064Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 1.521775351s" May 8 00:44:34.418512 containerd[1546]: time="2025-05-08T00:44:34.418488500Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\"" May 8 00:44:34.427777 containerd[1546]: time="2025-05-08T00:44:34.427691866Z" level=info msg="CreateContainer within sandbox \"1fd88d58817207413673c986e7d95db6654ab6ef764ada6d9272d301dc4c9839\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 8 00:44:34.438926 containerd[1546]: time="2025-05-08T00:44:34.438622195Z" level=info msg="CreateContainer within sandbox \"1fd88d58817207413673c986e7d95db6654ab6ef764ada6d9272d301dc4c9839\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"df4c83b5bea4456fbf993401ce4142e2833315e1bc63bc52ddce86a6ed7e7346\"" May 8 00:44:34.440286 containerd[1546]: time="2025-05-08T00:44:34.440249448Z" level=info msg="StartContainer for \"df4c83b5bea4456fbf993401ce4142e2833315e1bc63bc52ddce86a6ed7e7346\"" May 8 00:44:34.517980 containerd[1546]: time="2025-05-08T00:44:34.517630595Z" level=info msg="StartContainer for \"df4c83b5bea4456fbf993401ce4142e2833315e1bc63bc52ddce86a6ed7e7346\" returns successfully" May 8 00:44:38.588963 kubelet[2720]: I0508 00:44:38.588897 2720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-gwb6p" podStartSLOduration=5.059255252 
podStartE2EDuration="6.588880681s" podCreationTimestamp="2025-05-08 00:44:32 +0000 UTC" firstStartedPulling="2025-05-08 00:44:32.895802562 +0000 UTC m=+15.087382075" lastFinishedPulling="2025-05-08 00:44:34.425427991 +0000 UTC m=+16.617007504" observedRunningTime="2025-05-08 00:44:34.951500315 +0000 UTC m=+17.143079828" watchObservedRunningTime="2025-05-08 00:44:38.588880681 +0000 UTC m=+20.780460194" May 8 00:44:38.589559 kubelet[2720]: I0508 00:44:38.589034 2720 topology_manager.go:215] "Topology Admit Handler" podUID="c599bb8a-cec0-40b6-bdae-86ef1b9f8d40" podNamespace="calico-system" podName="calico-typha-67f7bf4987-wkd8v" May 8 00:44:38.593804 kubelet[2720]: W0508 00:44:38.593776 2720 reflector.go:547] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object May 8 00:44:38.601976 kubelet[2720]: E0508 00:44:38.601940 2720 reflector.go:150] object-"calico-system"/"typha-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object May 8 00:44:38.628380 kubelet[2720]: I0508 00:44:38.628317 2720 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c599bb8a-cec0-40b6-bdae-86ef1b9f8d40-tigera-ca-bundle\") pod \"calico-typha-67f7bf4987-wkd8v\" (UID: \"c599bb8a-cec0-40b6-bdae-86ef1b9f8d40\") " pod="calico-system/calico-typha-67f7bf4987-wkd8v" May 8 00:44:38.628380 kubelet[2720]: I0508 00:44:38.628380 2720 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c599bb8a-cec0-40b6-bdae-86ef1b9f8d40-typha-certs\") pod \"calico-typha-67f7bf4987-wkd8v\" (UID: \"c599bb8a-cec0-40b6-bdae-86ef1b9f8d40\") " pod="calico-system/calico-typha-67f7bf4987-wkd8v" May 8 00:44:38.628556 kubelet[2720]: I0508 00:44:38.628406 2720 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf44r\" (UniqueName: \"kubernetes.io/projected/c599bb8a-cec0-40b6-bdae-86ef1b9f8d40-kube-api-access-gf44r\") pod \"calico-typha-67f7bf4987-wkd8v\" (UID: \"c599bb8a-cec0-40b6-bdae-86ef1b9f8d40\") " pod="calico-system/calico-typha-67f7bf4987-wkd8v" May 8 00:44:38.778650 kubelet[2720]: I0508 00:44:38.778574 2720 topology_manager.go:215] "Topology Admit Handler" podUID="f0ba4b6f-260e-4a9a-a720-172c25bd38d2" podNamespace="calico-system" podName="calico-node-z52mg" May 8 00:44:38.830651 kubelet[2720]: I0508 00:44:38.830568 2720 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f0ba4b6f-260e-4a9a-a720-172c25bd38d2-cni-bin-dir\") pod \"calico-node-z52mg\" (UID: \"f0ba4b6f-260e-4a9a-a720-172c25bd38d2\") " pod="calico-system/calico-node-z52mg" May 8 00:44:38.830651 kubelet[2720]: I0508 00:44:38.830609 2720 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f0ba4b6f-260e-4a9a-a720-172c25bd38d2-lib-modules\") pod \"calico-node-z52mg\" (UID: \"f0ba4b6f-260e-4a9a-a720-172c25bd38d2\") " 
pod="calico-system/calico-node-z52mg" May 8 00:44:38.830651 kubelet[2720]: I0508 00:44:38.830646 2720 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f0ba4b6f-260e-4a9a-a720-172c25bd38d2-xtables-lock\") pod \"calico-node-z52mg\" (UID: \"f0ba4b6f-260e-4a9a-a720-172c25bd38d2\") " pod="calico-system/calico-node-z52mg" May 8 00:44:38.830651 kubelet[2720]: I0508 00:44:38.830662 2720 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f0ba4b6f-260e-4a9a-a720-172c25bd38d2-var-lib-calico\") pod \"calico-node-z52mg\" (UID: \"f0ba4b6f-260e-4a9a-a720-172c25bd38d2\") " pod="calico-system/calico-node-z52mg" May 8 00:44:38.831017 kubelet[2720]: I0508 00:44:38.830688 2720 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f0ba4b6f-260e-4a9a-a720-172c25bd38d2-flexvol-driver-host\") pod \"calico-node-z52mg\" (UID: \"f0ba4b6f-260e-4a9a-a720-172c25bd38d2\") " pod="calico-system/calico-node-z52mg" May 8 00:44:38.831017 kubelet[2720]: I0508 00:44:38.830709 2720 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blhpv\" (UniqueName: \"kubernetes.io/projected/f0ba4b6f-260e-4a9a-a720-172c25bd38d2-kube-api-access-blhpv\") pod \"calico-node-z52mg\" (UID: \"f0ba4b6f-260e-4a9a-a720-172c25bd38d2\") " pod="calico-system/calico-node-z52mg" May 8 00:44:38.831017 kubelet[2720]: I0508 00:44:38.830735 2720 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f0ba4b6f-260e-4a9a-a720-172c25bd38d2-var-run-calico\") pod \"calico-node-z52mg\" (UID: \"f0ba4b6f-260e-4a9a-a720-172c25bd38d2\") " pod="calico-system/calico-node-z52mg" May 8 00:44:38.831017 kubelet[2720]: I0508 00:44:38.830757 2720 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f0ba4b6f-260e-4a9a-a720-172c25bd38d2-node-certs\") pod \"calico-node-z52mg\" (UID: \"f0ba4b6f-260e-4a9a-a720-172c25bd38d2\") " pod="calico-system/calico-node-z52mg" May 8 00:44:38.831017 kubelet[2720]: I0508 00:44:38.830773 2720 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f0ba4b6f-260e-4a9a-a720-172c25bd38d2-policysync\") pod \"calico-node-z52mg\" (UID: \"f0ba4b6f-260e-4a9a-a720-172c25bd38d2\") " pod="calico-system/calico-node-z52mg" May 8 00:44:38.831150 kubelet[2720]: I0508 00:44:38.830810 2720 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f0ba4b6f-260e-4a9a-a720-172c25bd38d2-cni-log-dir\") pod \"calico-node-z52mg\" (UID: \"f0ba4b6f-260e-4a9a-a720-172c25bd38d2\") " pod="calico-system/calico-node-z52mg" May 8 00:44:38.831150 kubelet[2720]: I0508 00:44:38.830829 2720 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0ba4b6f-260e-4a9a-a720-172c25bd38d2-tigera-ca-bundle\") pod \"calico-node-z52mg\" (UID: \"f0ba4b6f-260e-4a9a-a720-172c25bd38d2\") " pod="calico-system/calico-node-z52mg" May 8 00:44:38.831150 kubelet[2720]: I0508 
May 8 00:44:38.935872 kubelet[2720]: E0508 00:44:38.935842 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:44:38.935872 kubelet[2720]: W0508 00:44:38.935864 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:44:38.936010 kubelet[2720]: E0508 00:44:38.935883 2720 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:44:38.982668 kubelet[2720]: I0508 00:44:38.982600 2720 topology_manager.go:215] "Topology Admit Handler" podUID="2de83a08-9bcf-4ba2-8674-79ec949e7e5f" podNamespace="calico-system" podName="csi-node-driver-ljmwb"
May 8 00:44:38.983059 kubelet[2720]: E0508 00:44:38.983007 2720 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ljmwb" podUID="2de83a08-9bcf-4ba2-8674-79ec949e7e5f"
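The kubelet messages above repeat as a fixed three-record pattern throughout this boot: the dynamic plugin prober rescans /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, finds the nodeagent~uds directory that Calico pre-creates, and execs its uds driver with the argument init. The binary has not been installed yet (the flexvol-driver container seen later in this log is what copies it in), so the exec fails, stdout stays empty, and unmarshalling an empty string yields "unexpected end of JSON input". For context, a FlexVolume driver acknowledges init by printing a one-line JSON status; the Go sketch below is a hypothetical minimal driver illustrating that handshake, not Calico's actual uds implementation.

    // flexvolinit.go: hypothetical minimal FlexVolume driver showing the
    // "init" handshake kubelet expects: a one-line JSON status on stdout.
    // This mirrors the generic FlexVolume call convention, NOT Calico's uds binary.
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // driverStatus is the envelope kubelet's driver-call.go unmarshals.
    type driverStatus struct {
        Status       string          `json:"status"` // "Success", "Failure", or "Not supported"
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        if len(os.Args) < 2 {
            os.Exit(1)
        }
        var st driverStatus
        switch os.Args[1] {
        case "init":
            // Printing nothing at this point is exactly what produces the
            // "unexpected end of JSON input" errors in the log above.
            st = driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}}
        default:
            st = driverStatus{Status: "Not supported"}
        }
        out, _ := json.Marshal(st)
        fmt.Println(string(out))
    }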
May 8 00:44:39.033166 kubelet[2720]: I0508 00:44:39.033107 2720 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2de83a08-9bcf-4ba2-8674-79ec949e7e5f-varrun\") pod \"csi-node-driver-ljmwb\" (UID: \"2de83a08-9bcf-4ba2-8674-79ec949e7e5f\") " pod="calico-system/csi-node-driver-ljmwb"
May 8 00:44:39.033405 kubelet[2720]: I0508 00:44:39.033378 2720 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2de83a08-9bcf-4ba2-8674-79ec949e7e5f-kubelet-dir\") pod \"csi-node-driver-ljmwb\" (UID: \"2de83a08-9bcf-4ba2-8674-79ec949e7e5f\") " pod="calico-system/csi-node-driver-ljmwb"
May 8 00:44:39.033640 kubelet[2720]: I0508 00:44:39.033597 2720 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2de83a08-9bcf-4ba2-8674-79ec949e7e5f-socket-dir\") pod \"csi-node-driver-ljmwb\" (UID: \"2de83a08-9bcf-4ba2-8674-79ec949e7e5f\") " pod="calico-system/csi-node-driver-ljmwb"
May 8 00:44:39.033800 kubelet[2720]: I0508 00:44:39.033788 2720 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2de83a08-9bcf-4ba2-8674-79ec949e7e5f-registration-dir\") pod \"csi-node-driver-ljmwb\" (UID: \"2de83a08-9bcf-4ba2-8674-79ec949e7e5f\") " pod="calico-system/csi-node-driver-ljmwb"
May 8 00:44:39.034160 kubelet[2720]: I0508 00:44:39.034125 2720 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvm9n\" (UniqueName: \"kubernetes.io/projected/2de83a08-9bcf-4ba2-8674-79ec949e7e5f-kube-api-access-hvm9n\") pod \"csi-node-driver-ljmwb\" (UID: \"2de83a08-9bcf-4ba2-8674-79ec949e7e5f\") " pod="calico-system/csi-node-driver-ljmwb"
May 8 00:44:39.087471 kubelet[2720]: E0508 00:44:39.087408 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:44:39.088166 containerd[1546]: time="2025-05-08T00:44:39.088127911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z52mg,Uid:f0ba4b6f-260e-4a9a-a720-172c25bd38d2,Namespace:calico-system,Attempt:0,}"
May 8 00:44:39.115337 containerd[1546]: time="2025-05-08T00:44:39.115206013Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:44:39.115337 containerd[1546]: time="2025-05-08T00:44:39.115279609Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:44:39.115337 containerd[1546]: time="2025-05-08T00:44:39.115299887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:44:39.115569 containerd[1546]: time="2025-05-08T00:44:39.115391761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
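The recurring dns.go:153 warning is cosmetic: kubelet caps a pod's resolv.conf at three nameservers (the classic libc resolver limit), so when the node's resolver configuration lists more, the extras are dropped and only the first three are applied, here 1.1.1.1, 1.0.0.1 and 8.8.8.8. A minimal sketch of that truncation follows; the function and constant names are illustrative rather than kubelet's own identifiers, and the fourth address in the example input is made up.

    // trimnameservers.go: sketch of the three-nameserver cap behind the
    // "Nameserver limits exceeded" warnings in this log. Names are
    // illustrative; kubelet applies the same rule when building a pod's
    // resolv.conf.
    package main

    import "fmt"

    const maxNameservers = 3

    // trimNameservers keeps the first three resolvers and reports the rest.
    func trimNameservers(ns []string) (applied, omitted []string) {
        if len(ns) <= maxNameservers {
            return ns, nil
        }
        return ns[:maxNameservers], ns[maxNameservers:]
    }

    func main() {
        applied, omitted := trimNameservers([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "192.0.2.1"})
        fmt.Println("applied:", applied) // [1.1.1.1 1.0.0.1 8.8.8.8], matching the log
        fmt.Println("omitted:", omitted)
    }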
May 8 00:44:39.161395 containerd[1546]: time="2025-05-08T00:44:39.161360864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z52mg,Uid:f0ba4b6f-260e-4a9a-a720-172c25bd38d2,Namespace:calico-system,Attempt:0,} returns sandbox id \"85c4ff0dee237edec9ca128fbc57bb3ee3120e588071bc96aa8cd402947e5812\""
May 8 00:44:39.162618 kubelet[2720]: E0508 00:44:39.162348 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:44:39.164608 containerd[1546]: time="2025-05-08T00:44:39.164570854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\""
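With the calico-node sandbox up, kubelet asks the CRI runtime to pull the pod2daemon-flexvol image; the pull completes about 1.3 s later (see the "Pulled image" record further down). The same pull can be driven directly through containerd's Go client; the sketch below assumes the default containerd socket path and the k8s.io namespace used by the CRI plugin on this node.

    // pullimage.go: sketch of pulling the same image straight through
    // containerd's Go client. Assumes the default socket location and the
    // "k8s.io" namespace the CRI plugin uses.
    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Scope all operations to the namespace Kubernetes containers live in.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3",
            containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("pulled", img.Name(), "digest:", img.Target().Digest)
    }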
Error: unexpected end of JSON input" May 8 00:44:39.449337 kubelet[2720]: E0508 00:44:39.449302 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:44:39.449337 kubelet[2720]: W0508 00:44:39.449324 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:44:39.449337 kubelet[2720]: E0508 00:44:39.449342 2720 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:44:39.550548 kubelet[2720]: E0508 00:44:39.550435 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:44:39.550548 kubelet[2720]: W0508 00:44:39.550464 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:44:39.550548 kubelet[2720]: E0508 00:44:39.550481 2720 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:44:39.651743 kubelet[2720]: E0508 00:44:39.651700 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:44:39.651743 kubelet[2720]: W0508 00:44:39.651719 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:44:39.651743 kubelet[2720]: E0508 00:44:39.651736 2720 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:44:39.730081 kubelet[2720]: E0508 00:44:39.730035 2720 secret.go:194] Couldn't get secret calico-system/typha-certs: failed to sync secret cache: timed out waiting for the condition May 8 00:44:39.730199 kubelet[2720]: E0508 00:44:39.730126 2720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c599bb8a-cec0-40b6-bdae-86ef1b9f8d40-typha-certs podName:c599bb8a-cec0-40b6-bdae-86ef1b9f8d40 nodeName:}" failed. No retries permitted until 2025-05-08 00:44:40.230108258 +0000 UTC m=+22.421687771 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "typha-certs" (UniqueName: "kubernetes.io/secret/c599bb8a-cec0-40b6-bdae-86ef1b9f8d40-typha-certs") pod "calico-typha-67f7bf4987-wkd8v" (UID: "c599bb8a-cec0-40b6-bdae-86ef1b9f8d40") : failed to sync secret cache: timed out waiting for the condition May 8 00:44:39.752936 kubelet[2720]: E0508 00:44:39.752849 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:44:39.752936 kubelet[2720]: W0508 00:44:39.752871 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:44:39.752936 kubelet[2720]: E0508 00:44:39.752889 2720 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:44:39.854167 kubelet[2720]: E0508 00:44:39.854090 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:44:39.854167 kubelet[2720]: W0508 00:44:39.854107 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:44:39.854167 kubelet[2720]: E0508 00:44:39.854122 2720 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:44:39.955157 kubelet[2720]: E0508 00:44:39.955127 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:44:39.955157 kubelet[2720]: W0508 00:44:39.955150 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:44:39.955157 kubelet[2720]: E0508 00:44:39.955168 2720 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:44:40.056312 kubelet[2720]: E0508 00:44:40.056282 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:44:40.056312 kubelet[2720]: W0508 00:44:40.056302 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:44:40.056312 kubelet[2720]: E0508 00:44:40.056319 2720 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:44:40.157387 kubelet[2720]: E0508 00:44:40.157353 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:44:40.157387 kubelet[2720]: W0508 00:44:40.157375 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:44:40.157387 kubelet[2720]: E0508 00:44:40.157392 2720 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:44:40.258285 kubelet[2720]: E0508 00:44:40.258248 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:44:40.258285 kubelet[2720]: W0508 00:44:40.258268 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:44:40.258285 kubelet[2720]: E0508 00:44:40.258285 2720 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:44:40.258695 kubelet[2720]: E0508 00:44:40.258665 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:44:40.258695 kubelet[2720]: W0508 00:44:40.258680 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:44:40.258695 kubelet[2720]: E0508 00:44:40.258691 2720 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:44:40.258905 kubelet[2720]: E0508 00:44:40.258879 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:44:40.258905 kubelet[2720]: W0508 00:44:40.258891 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:44:40.258905 kubelet[2720]: E0508 00:44:40.258900 2720 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:44:40.259124 kubelet[2720]: E0508 00:44:40.259099 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:44:40.259124 kubelet[2720]: W0508 00:44:40.259118 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:44:40.259179 kubelet[2720]: E0508 00:44:40.259129 2720 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:44:40.259389 kubelet[2720]: E0508 00:44:40.259368 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:44:40.259389 kubelet[2720]: W0508 00:44:40.259382 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:44:40.259439 kubelet[2720]: E0508 00:44:40.259391 2720 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:44:40.262959 kubelet[2720]: E0508 00:44:40.262944 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:44:40.262959 kubelet[2720]: W0508 00:44:40.262958 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:44:40.263041 kubelet[2720]: E0508 00:44:40.262971 2720 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:44:40.397137 kubelet[2720]: E0508 00:44:40.397094 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:40.397710 containerd[1546]: time="2025-05-08T00:44:40.397671192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-67f7bf4987-wkd8v,Uid:c599bb8a-cec0-40b6-bdae-86ef1b9f8d40,Namespace:calico-system,Attempt:0,}" May 8 00:44:40.420745 containerd[1546]: time="2025-05-08T00:44:40.420307559Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:44:40.420745 containerd[1546]: time="2025-05-08T00:44:40.420675817Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:44:40.420745 containerd[1546]: time="2025-05-08T00:44:40.420699615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:44:40.420954 containerd[1546]: time="2025-05-08T00:44:40.420821008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:44:40.460903 containerd[1546]: time="2025-05-08T00:44:40.460792829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-67f7bf4987-wkd8v,Uid:c599bb8a-cec0-40b6-bdae-86ef1b9f8d40,Namespace:calico-system,Attempt:0,} returns sandbox id \"1164cf8c2a93ca0171f0cbca76f4893732bcb36c983e76045a51765bbbe17290\"" May 8 00:44:40.461477 kubelet[2720]: E0508 00:44:40.461440 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:40.505647 containerd[1546]: time="2025-05-08T00:44:40.505596512Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:44:40.506184 containerd[1546]: time="2025-05-08T00:44:40.506140358Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" May 8 00:44:40.506822 containerd[1546]: time="2025-05-08T00:44:40.506788959Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:44:40.509362 containerd[1546]: time="2025-05-08T00:44:40.509327602Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:44:40.510679 containerd[1546]: time="2025-05-08T00:44:40.510641881Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 1.34602975s" May 8 00:44:40.510716 containerd[1546]: time="2025-05-08T00:44:40.510681119Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 8 00:44:40.511747 containerd[1546]: time="2025-05-08T00:44:40.511585583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 8 00:44:40.518484 containerd[1546]: time="2025-05-08T00:44:40.518431802Z" level=info msg="CreateContainer within sandbox \"85c4ff0dee237edec9ca128fbc57bb3ee3120e588071bc96aa8cd402947e5812\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 8 00:44:40.550950 containerd[1546]: time="2025-05-08T00:44:40.550906204Z" level=info msg="CreateContainer within sandbox \"85c4ff0dee237edec9ca128fbc57bb3ee3120e588071bc96aa8cd402947e5812\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6119083be7f90000bcc1fbf5f881957ae1e04b76d3d5cc901cfd16d2a6d28a80\"" May 8 00:44:40.552125 containerd[1546]: time="2025-05-08T00:44:40.551325418Z" level=info msg="StartContainer for \"6119083be7f90000bcc1fbf5f881957ae1e04b76d3d5cc901cfd16d2a6d28a80\"" May 8 00:44:40.604789 containerd[1546]: time="2025-05-08T00:44:40.603512767Z" level=info msg="StartContainer for \"6119083be7f90000bcc1fbf5f881957ae1e04b76d3d5cc901cfd16d2a6d28a80\" returns successfully" May 8 00:44:40.669812 containerd[1546]: 
time="2025-05-08T00:44:40.669581862Z" level=info msg="shim disconnected" id=6119083be7f90000bcc1fbf5f881957ae1e04b76d3d5cc901cfd16d2a6d28a80 namespace=k8s.io May 8 00:44:40.669812 containerd[1546]: time="2025-05-08T00:44:40.669628980Z" level=warning msg="cleaning up after shim disconnected" id=6119083be7f90000bcc1fbf5f881957ae1e04b76d3d5cc901cfd16d2a6d28a80 namespace=k8s.io May 8 00:44:40.669812 containerd[1546]: time="2025-05-08T00:44:40.669637499Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:44:40.887899 kubelet[2720]: E0508 00:44:40.887867 2720 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ljmwb" podUID="2de83a08-9bcf-4ba2-8674-79ec949e7e5f" May 8 00:44:40.980306 kubelet[2720]: E0508 00:44:40.980277 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:41.916348 containerd[1546]: time="2025-05-08T00:44:41.916275236Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:44:41.917239 containerd[1546]: time="2025-05-08T00:44:41.917172664Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571" May 8 00:44:41.918758 containerd[1546]: time="2025-05-08T00:44:41.918723614Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:44:41.921414 containerd[1546]: time="2025-05-08T00:44:41.921385621Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:44:41.922041 containerd[1546]: time="2025-05-08T00:44:41.921993666Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 1.410377364s" May 8 00:44:41.922041 containerd[1546]: time="2025-05-08T00:44:41.922023384Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" May 8 00:44:41.924917 containerd[1546]: time="2025-05-08T00:44:41.924470283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 8 00:44:41.935955 containerd[1546]: time="2025-05-08T00:44:41.935922022Z" level=info msg="CreateContainer within sandbox \"1164cf8c2a93ca0171f0cbca76f4893732bcb36c983e76045a51765bbbe17290\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 8 00:44:41.949132 containerd[1546]: time="2025-05-08T00:44:41.949079983Z" level=info msg="CreateContainer within sandbox \"1164cf8c2a93ca0171f0cbca76f4893732bcb36c983e76045a51765bbbe17290\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4dfeb79e3b500094d7431d0a0500ba80514abc5284cc51dd48c097f026d38bc2\"" May 8 00:44:41.950869 containerd[1546]: 
time="2025-05-08T00:44:41.949937494Z" level=info msg="StartContainer for \"4dfeb79e3b500094d7431d0a0500ba80514abc5284cc51dd48c097f026d38bc2\"" May 8 00:44:42.016434 containerd[1546]: time="2025-05-08T00:44:42.016367596Z" level=info msg="StartContainer for \"4dfeb79e3b500094d7431d0a0500ba80514abc5284cc51dd48c097f026d38bc2\" returns successfully" May 8 00:44:42.888446 kubelet[2720]: E0508 00:44:42.888243 2720 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ljmwb" podUID="2de83a08-9bcf-4ba2-8674-79ec949e7e5f" May 8 00:44:42.993276 kubelet[2720]: E0508 00:44:42.993217 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:43.997724 kubelet[2720]: I0508 00:44:43.997685 2720 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:44:43.998469 kubelet[2720]: E0508 00:44:43.998433 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:44.580902 containerd[1546]: time="2025-05-08T00:44:44.580724420Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:44:44.581517 containerd[1546]: time="2025-05-08T00:44:44.581430826Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" May 8 00:44:44.582575 containerd[1546]: time="2025-05-08T00:44:44.582536734Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:44:44.584539 containerd[1546]: time="2025-05-08T00:44:44.584236413Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:44:44.585110 containerd[1546]: time="2025-05-08T00:44:44.585004376Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 2.659976406s" May 8 00:44:44.585110 containerd[1546]: time="2025-05-08T00:44:44.585033695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" May 8 00:44:44.588243 containerd[1546]: time="2025-05-08T00:44:44.588122428Z" level=info msg="CreateContainer within sandbox \"85c4ff0dee237edec9ca128fbc57bb3ee3120e588071bc96aa8cd402947e5812\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 8 00:44:44.599623 containerd[1546]: time="2025-05-08T00:44:44.599589323Z" level=info msg="CreateContainer within sandbox \"85c4ff0dee237edec9ca128fbc57bb3ee3120e588071bc96aa8cd402947e5812\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"bc55040a94ad9a5ea96e7a085abfa077b75a2087d1c75d99ad7076f3ee98c67c\"" May 
8 00:44:44.600498 containerd[1546]: time="2025-05-08T00:44:44.600089059Z" level=info msg="StartContainer for \"bc55040a94ad9a5ea96e7a085abfa077b75a2087d1c75d99ad7076f3ee98c67c\"" May 8 00:44:44.655279 containerd[1546]: time="2025-05-08T00:44:44.654845817Z" level=info msg="StartContainer for \"bc55040a94ad9a5ea96e7a085abfa077b75a2087d1c75d99ad7076f3ee98c67c\" returns successfully" May 8 00:44:44.887633 kubelet[2720]: E0508 00:44:44.887595 2720 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ljmwb" podUID="2de83a08-9bcf-4ba2-8674-79ec949e7e5f" May 8 00:44:44.997704 kubelet[2720]: E0508 00:44:44.997670 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:45.020710 kubelet[2720]: I0508 00:44:45.020635 2720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-67f7bf4987-wkd8v" podStartSLOduration=5.558545058 podStartE2EDuration="7.02061897s" podCreationTimestamp="2025-05-08 00:44:38 +0000 UTC" firstStartedPulling="2025-05-08 00:44:40.462187783 +0000 UTC m=+22.653767296" lastFinishedPulling="2025-05-08 00:44:41.924261695 +0000 UTC m=+24.115841208" observedRunningTime="2025-05-08 00:44:43.003651855 +0000 UTC m=+25.195231328" watchObservedRunningTime="2025-05-08 00:44:45.02061897 +0000 UTC m=+27.212198483" May 8 00:44:45.256133 containerd[1546]: time="2025-05-08T00:44:45.255969923Z" level=info msg="shim disconnected" id=bc55040a94ad9a5ea96e7a085abfa077b75a2087d1c75d99ad7076f3ee98c67c namespace=k8s.io May 8 00:44:45.256133 containerd[1546]: time="2025-05-08T00:44:45.256023041Z" level=warning msg="cleaning up after shim disconnected" id=bc55040a94ad9a5ea96e7a085abfa077b75a2087d1c75d99ad7076f3ee98c67c namespace=k8s.io May 8 00:44:45.256133 containerd[1546]: time="2025-05-08T00:44:45.256031600Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:44:45.282307 kubelet[2720]: I0508 00:44:45.282284 2720 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 8 00:44:45.302910 kubelet[2720]: I0508 00:44:45.302734 2720 topology_manager.go:215] "Topology Admit Handler" podUID="d8549c49-1445-45ca-b3dc-fb39b68b2e91" podNamespace="kube-system" podName="coredns-7db6d8ff4d-d7mds" May 8 00:44:45.304486 kubelet[2720]: I0508 00:44:45.304460 2720 topology_manager.go:215] "Topology Admit Handler" podUID="997a723a-9d1c-45c2-917e-79affe9ca191" podNamespace="kube-system" podName="coredns-7db6d8ff4d-rsrbh" May 8 00:44:45.304701 kubelet[2720]: I0508 00:44:45.304679 2720 topology_manager.go:215] "Topology Admit Handler" podUID="94c7629a-4f26-4590-8950-ecb623581f56" podNamespace="calico-apiserver" podName="calico-apiserver-79cbdfc48f-gn4vl" May 8 00:44:45.304874 kubelet[2720]: I0508 00:44:45.304828 2720 topology_manager.go:215] "Topology Admit Handler" podUID="1270cb28-66ea-435f-99a8-a51538f5c1d9" podNamespace="calico-apiserver" podName="calico-apiserver-79cbdfc48f-87wj8" May 8 00:44:45.307431 kubelet[2720]: I0508 00:44:45.307400 2720 topology_manager.go:215] "Topology Admit Handler" podUID="b66dddbc-0758-478a-8371-6dfe5c35d89f" podNamespace="calico-system" podName="calico-kube-controllers-657c88d8bb-lsvlt" May 8 00:44:45.393899 kubelet[2720]: I0508 00:44:45.393818 2720 
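[Editor's note] The pod_startup_latency_tracker entry above is internally consistent and worth decoding: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the time spent pulling images (lastFinishedPulling minus firstStartedPulling). A quick check of the arithmetic using the timestamps from the entry itself:

```go
package main

import "fmt"

// Timestamps (seconds past 00:44:00) copied from the tracker entry above.
const (
	created  = 38.0         // podCreationTimestamp 00:44:38
	pullFrom = 40.462187783 // firstStartedPulling
	pullTo   = 41.924261695 // lastFinishedPulling
	observed = 45.02061897  // watchObservedRunningTime
)

func main() {
	e2e := observed - created        // 7.02061897s  = podStartE2EDuration
	slo := e2e - (pullTo - pullFrom) // 5.558545058s = podStartSLOduration
	fmt.Printf("e2e=%.9fs slo=%.9fs\n", e2e, slo)
}
```

Both results reproduce the logged values exactly, confirming that the SLO figure excludes image-pull time.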
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7vch\" (UniqueName: \"kubernetes.io/projected/94c7629a-4f26-4590-8950-ecb623581f56-kube-api-access-l7vch\") pod \"calico-apiserver-79cbdfc48f-gn4vl\" (UID: \"94c7629a-4f26-4590-8950-ecb623581f56\") " pod="calico-apiserver/calico-apiserver-79cbdfc48f-gn4vl" May 8 00:44:45.393899 kubelet[2720]: I0508 00:44:45.393866 2720 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/997a723a-9d1c-45c2-917e-79affe9ca191-config-volume\") pod \"coredns-7db6d8ff4d-rsrbh\" (UID: \"997a723a-9d1c-45c2-917e-79affe9ca191\") " pod="kube-system/coredns-7db6d8ff4d-rsrbh" May 8 00:44:45.393899 kubelet[2720]: I0508 00:44:45.393887 2720 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgdz5\" (UniqueName: \"kubernetes.io/projected/997a723a-9d1c-45c2-917e-79affe9ca191-kube-api-access-cgdz5\") pod \"coredns-7db6d8ff4d-rsrbh\" (UID: \"997a723a-9d1c-45c2-917e-79affe9ca191\") " pod="kube-system/coredns-7db6d8ff4d-rsrbh" May 8 00:44:45.393899 kubelet[2720]: I0508 00:44:45.393906 2720 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rps6l\" (UniqueName: \"kubernetes.io/projected/1270cb28-66ea-435f-99a8-a51538f5c1d9-kube-api-access-rps6l\") pod \"calico-apiserver-79cbdfc48f-87wj8\" (UID: \"1270cb28-66ea-435f-99a8-a51538f5c1d9\") " pod="calico-apiserver/calico-apiserver-79cbdfc48f-87wj8" May 8 00:44:45.394240 kubelet[2720]: I0508 00:44:45.393934 2720 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1270cb28-66ea-435f-99a8-a51538f5c1d9-calico-apiserver-certs\") pod \"calico-apiserver-79cbdfc48f-87wj8\" (UID: \"1270cb28-66ea-435f-99a8-a51538f5c1d9\") " pod="calico-apiserver/calico-apiserver-79cbdfc48f-87wj8" May 8 00:44:45.394240 kubelet[2720]: I0508 00:44:45.393955 2720 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/94c7629a-4f26-4590-8950-ecb623581f56-calico-apiserver-certs\") pod \"calico-apiserver-79cbdfc48f-gn4vl\" (UID: \"94c7629a-4f26-4590-8950-ecb623581f56\") " pod="calico-apiserver/calico-apiserver-79cbdfc48f-gn4vl" May 8 00:44:45.394240 kubelet[2720]: I0508 00:44:45.393972 2720 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8549c49-1445-45ca-b3dc-fb39b68b2e91-config-volume\") pod \"coredns-7db6d8ff4d-d7mds\" (UID: \"d8549c49-1445-45ca-b3dc-fb39b68b2e91\") " pod="kube-system/coredns-7db6d8ff4d-d7mds" May 8 00:44:45.394240 kubelet[2720]: I0508 00:44:45.393987 2720 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b66dddbc-0758-478a-8371-6dfe5c35d89f-tigera-ca-bundle\") pod \"calico-kube-controllers-657c88d8bb-lsvlt\" (UID: \"b66dddbc-0758-478a-8371-6dfe5c35d89f\") " pod="calico-system/calico-kube-controllers-657c88d8bb-lsvlt" May 8 00:44:45.394240 kubelet[2720]: I0508 00:44:45.394002 2720 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb5qj\" (UniqueName: 
\"kubernetes.io/projected/b66dddbc-0758-478a-8371-6dfe5c35d89f-kube-api-access-lb5qj\") pod \"calico-kube-controllers-657c88d8bb-lsvlt\" (UID: \"b66dddbc-0758-478a-8371-6dfe5c35d89f\") " pod="calico-system/calico-kube-controllers-657c88d8bb-lsvlt" May 8 00:44:45.394350 kubelet[2720]: I0508 00:44:45.394022 2720 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvrwd\" (UniqueName: \"kubernetes.io/projected/d8549c49-1445-45ca-b3dc-fb39b68b2e91-kube-api-access-pvrwd\") pod \"coredns-7db6d8ff4d-d7mds\" (UID: \"d8549c49-1445-45ca-b3dc-fb39b68b2e91\") " pod="kube-system/coredns-7db6d8ff4d-d7mds" May 8 00:44:45.602349 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc55040a94ad9a5ea96e7a085abfa077b75a2087d1c75d99ad7076f3ee98c67c-rootfs.mount: Deactivated successfully. May 8 00:44:45.610693 containerd[1546]: time="2025-05-08T00:44:45.610656960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-657c88d8bb-lsvlt,Uid:b66dddbc-0758-478a-8371-6dfe5c35d89f,Namespace:calico-system,Attempt:0,}" May 8 00:44:45.612051 kubelet[2720]: E0508 00:44:45.612023 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:45.612880 containerd[1546]: time="2025-05-08T00:44:45.612372763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-d7mds,Uid:d8549c49-1445-45ca-b3dc-fb39b68b2e91,Namespace:kube-system,Attempt:0,}" May 8 00:44:45.613857 containerd[1546]: time="2025-05-08T00:44:45.613822618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79cbdfc48f-gn4vl,Uid:94c7629a-4f26-4590-8950-ecb623581f56,Namespace:calico-apiserver,Attempt:0,}" May 8 00:44:45.622596 kubelet[2720]: E0508 00:44:45.622568 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:45.622953 containerd[1546]: time="2025-05-08T00:44:45.622895654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rsrbh,Uid:997a723a-9d1c-45c2-917e-79affe9ca191,Namespace:kube-system,Attempt:0,}" May 8 00:44:45.631387 containerd[1546]: time="2025-05-08T00:44:45.631338598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79cbdfc48f-87wj8,Uid:1270cb28-66ea-435f-99a8-a51538f5c1d9,Namespace:calico-apiserver,Attempt:0,}" May 8 00:44:46.001464 kubelet[2720]: E0508 00:44:46.000814 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:46.003088 containerd[1546]: time="2025-05-08T00:44:46.003048440Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 8 00:44:46.017065 containerd[1546]: time="2025-05-08T00:44:46.017010897Z" level=error msg="Failed to destroy network for sandbox \"e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:44:46.017655 containerd[1546]: time="2025-05-08T00:44:46.017627271Z" level=error msg="encountered an error cleaning up failed sandbox \"e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52\", marking 
sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:44:46.017797 containerd[1546]: time="2025-05-08T00:44:46.017774185Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79cbdfc48f-gn4vl,Uid:94c7629a-4f26-4590-8950-ecb623581f56,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:44:46.018892 kubelet[2720]: E0508 00:44:46.018834 2720 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:44:46.018978 kubelet[2720]: E0508 00:44:46.018911 2720 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79cbdfc48f-gn4vl" May 8 00:44:46.018978 kubelet[2720]: E0508 00:44:46.018931 2720 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79cbdfc48f-gn4vl" May 8 00:44:46.019041 kubelet[2720]: E0508 00:44:46.018983 2720 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79cbdfc48f-gn4vl_calico-apiserver(94c7629a-4f26-4590-8950-ecb623581f56)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79cbdfc48f-gn4vl_calico-apiserver(94c7629a-4f26-4590-8950-ecb623581f56)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79cbdfc48f-gn4vl" podUID="94c7629a-4f26-4590-8950-ecb623581f56" May 8 00:44:46.021538 containerd[1546]: time="2025-05-08T00:44:46.021504909Z" level=error msg="Failed to destroy network for sandbox \"109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:44:46.021954 containerd[1546]: time="2025-05-08T00:44:46.021925772Z" 
level=error msg="encountered an error cleaning up failed sandbox \"109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:44:46.021999 containerd[1546]: time="2025-05-08T00:44:46.021973290Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rsrbh,Uid:997a723a-9d1c-45c2-917e-79affe9ca191,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:44:46.022511 kubelet[2720]: E0508 00:44:46.022145 2720 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:44:46.022511 kubelet[2720]: E0508 00:44:46.022193 2720 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-rsrbh" May 8 00:44:46.022511 kubelet[2720]: E0508 00:44:46.022229 2720 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-rsrbh" May 8 00:44:46.022857 kubelet[2720]: E0508 00:44:46.022264 2720 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-rsrbh_kube-system(997a723a-9d1c-45c2-917e-79affe9ca191)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-rsrbh_kube-system(997a723a-9d1c-45c2-917e-79affe9ca191)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-rsrbh" podUID="997a723a-9d1c-45c2-917e-79affe9ca191" May 8 00:44:46.025962 containerd[1546]: time="2025-05-08T00:44:46.025839048Z" level=error msg="Failed to destroy network for sandbox \"f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:44:46.026535 
containerd[1546]: time="2025-05-08T00:44:46.026436743Z" level=error msg="encountered an error cleaning up failed sandbox \"f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:44:46.026535 containerd[1546]: time="2025-05-08T00:44:46.026499981Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-d7mds,Uid:d8549c49-1445-45ca-b3dc-fb39b68b2e91,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:44:46.026869 kubelet[2720]: E0508 00:44:46.026829 2720 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:44:46.026964 kubelet[2720]: E0508 00:44:46.026883 2720 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-d7mds" May 8 00:44:46.026964 kubelet[2720]: E0508 00:44:46.026900 2720 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-d7mds" May 8 00:44:46.026964 kubelet[2720]: E0508 00:44:46.026930 2720 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-d7mds_kube-system(d8549c49-1445-45ca-b3dc-fb39b68b2e91)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-d7mds_kube-system(d8549c49-1445-45ca-b3dc-fb39b68b2e91)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-d7mds" podUID="d8549c49-1445-45ca-b3dc-fb39b68b2e91" May 8 00:44:46.033706 containerd[1546]: time="2025-05-08T00:44:46.033217300Z" level=error msg="Failed to destroy network for sandbox \"520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" May 8 00:44:46.033706 containerd[1546]: time="2025-05-08T00:44:46.033557806Z" level=error msg="encountered an error cleaning up failed sandbox \"520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:44:46.033706 containerd[1546]: time="2025-05-08T00:44:46.033595484Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79cbdfc48f-87wj8,Uid:1270cb28-66ea-435f-99a8-a51538f5c1d9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:44:46.034342 kubelet[2720]: E0508 00:44:46.033994 2720 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:44:46.034342 kubelet[2720]: E0508 00:44:46.034043 2720 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79cbdfc48f-87wj8" May 8 00:44:46.034342 kubelet[2720]: E0508 00:44:46.034062 2720 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79cbdfc48f-87wj8" May 8 00:44:46.034636 kubelet[2720]: E0508 00:44:46.034093 2720 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79cbdfc48f-87wj8_calico-apiserver(1270cb28-66ea-435f-99a8-a51538f5c1d9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79cbdfc48f-87wj8_calico-apiserver(1270cb28-66ea-435f-99a8-a51538f5c1d9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79cbdfc48f-87wj8" podUID="1270cb28-66ea-435f-99a8-a51538f5c1d9" May 8 00:44:46.034827 containerd[1546]: time="2025-05-08T00:44:46.034572323Z" level=error msg="Failed to destroy network for sandbox \"52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:44:46.035161 containerd[1546]: time="2025-05-08T00:44:46.035064743Z" level=error msg="encountered an error cleaning up failed sandbox \"52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:44:46.035161 containerd[1546]: time="2025-05-08T00:44:46.035102981Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-657c88d8bb-lsvlt,Uid:b66dddbc-0758-478a-8371-6dfe5c35d89f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:44:46.035411 kubelet[2720]: E0508 00:44:46.035379 2720 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:44:46.035496 kubelet[2720]: E0508 00:44:46.035425 2720 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-657c88d8bb-lsvlt" May 8 00:44:46.035496 kubelet[2720]: E0508 00:44:46.035442 2720 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-657c88d8bb-lsvlt" May 8 00:44:46.035561 kubelet[2720]: E0508 00:44:46.035489 2720 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-657c88d8bb-lsvlt_calico-system(b66dddbc-0758-478a-8371-6dfe5c35d89f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-657c88d8bb-lsvlt_calico-system(b66dddbc-0758-478a-8371-6dfe5c35d89f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-657c88d8bb-lsvlt" podUID="b66dddbc-0758-478a-8371-6dfe5c35d89f" May 8 00:44:46.597324 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52-shm.mount: Deactivated successfully. May 8 00:44:46.597651 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c-shm.mount: Deactivated successfully. May 8 00:44:46.597911 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb-shm.mount: Deactivated successfully. May 8 00:44:46.890373 containerd[1546]: time="2025-05-08T00:44:46.889941473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ljmwb,Uid:2de83a08-9bcf-4ba2-8674-79ec949e7e5f,Namespace:calico-system,Attempt:0,}" May 8 00:44:46.961645 containerd[1546]: time="2025-05-08T00:44:46.961597640Z" level=error msg="Failed to destroy network for sandbox \"cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:44:46.963079 containerd[1546]: time="2025-05-08T00:44:46.962916385Z" level=error msg="encountered an error cleaning up failed sandbox \"cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:44:46.963079 containerd[1546]: time="2025-05-08T00:44:46.962979542Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ljmwb,Uid:2de83a08-9bcf-4ba2-8674-79ec949e7e5f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:44:46.963669 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb-shm.mount: Deactivated successfully. 
May 8 00:44:46.964328 kubelet[2720]: E0508 00:44:46.963774 2720 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:44:46.964328 kubelet[2720]: E0508 00:44:46.963826 2720 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ljmwb" May 8 00:44:46.964328 kubelet[2720]: E0508 00:44:46.963854 2720 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ljmwb" May 8 00:44:46.964416 kubelet[2720]: E0508 00:44:46.963890 2720 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ljmwb_calico-system(2de83a08-9bcf-4ba2-8674-79ec949e7e5f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ljmwb_calico-system(2de83a08-9bcf-4ba2-8674-79ec949e7e5f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ljmwb" podUID="2de83a08-9bcf-4ba2-8674-79ec949e7e5f" May 8 00:44:47.003424 kubelet[2720]: I0508 00:44:47.003380 2720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace" May 8 00:44:47.004341 containerd[1546]: time="2025-05-08T00:44:47.004313386Z" level=info msg="StopPodSandbox for \"520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace\"" May 8 00:44:47.004541 containerd[1546]: time="2025-05-08T00:44:47.004520258Z" level=info msg="Ensure that sandbox 520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace in task-service has been cleanup successfully" May 8 00:44:47.004963 kubelet[2720]: I0508 00:44:47.004924 2720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52" May 8 00:44:47.006062 containerd[1546]: time="2025-05-08T00:44:47.005572816Z" level=info msg="StopPodSandbox for \"e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52\"" May 8 00:44:47.006062 containerd[1546]: time="2025-05-08T00:44:47.005712851Z" level=info msg="Ensure that sandbox e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52 in task-service has been cleanup successfully" May 8 00:44:47.006608 kubelet[2720]: I0508 00:44:47.006585 2720 
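[Editor's note] After the failed creates, kubelet's pod_container_deletor notices sandboxes it cannot match to any container and asks the runtime to tear them down (the "StopPodSandbox" / "Ensure that sandbox ... has been cleanup successfully" entries that follow); those calls then fail with the same Calico error, since CNI DEL also needs the nodename file. The teardown request is an ordinary CRI call. Below is a sketch of issuing it directly against containerd's socket; it assumes the standard cri-api and grpc-go modules and the default Flatcar endpoint, and the sandbox ID is copied from the log:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Same endpoint kubelet uses on Flatcar with containerd as the runtime.
	conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	// StopPodSandbox triggers the CNI DEL that is failing in the log above.
	_, err = rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{
		PodSandboxId: "520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace",
	})
	if err != nil {
		fmt.Println("StopPodSandbox:", err)
	}
}
```

Kubelet retries the same sequence on its sync loop, which is why each sandbox ID reappears in the StopPodSandbox failures below until the CNI comes up.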
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c" May 8 00:44:47.007581 containerd[1546]: time="2025-05-08T00:44:47.007523220Z" level=info msg="StopPodSandbox for \"52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c\"" May 8 00:44:47.007730 containerd[1546]: time="2025-05-08T00:44:47.007707173Z" level=info msg="Ensure that sandbox 52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c in task-service has been cleanup successfully" May 8 00:44:47.008306 kubelet[2720]: I0508 00:44:47.008220 2720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb" May 8 00:44:47.010517 containerd[1546]: time="2025-05-08T00:44:47.010486064Z" level=info msg="StopPodSandbox for \"cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb\"" May 8 00:44:47.010636 containerd[1546]: time="2025-05-08T00:44:47.010618619Z" level=info msg="Ensure that sandbox cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb in task-service has been cleanup successfully" May 8 00:44:47.012955 kubelet[2720]: I0508 00:44:47.011593 2720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e" May 8 00:44:47.013417 containerd[1546]: time="2025-05-08T00:44:47.013386310Z" level=info msg="StopPodSandbox for \"109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e\"" May 8 00:44:47.014726 containerd[1546]: time="2025-05-08T00:44:47.014693739Z" level=info msg="Ensure that sandbox 109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e in task-service has been cleanup successfully" May 8 00:44:47.014935 kubelet[2720]: I0508 00:44:47.014916 2720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb" May 8 00:44:47.017273 containerd[1546]: time="2025-05-08T00:44:47.017242439Z" level=info msg="StopPodSandbox for \"f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb\"" May 8 00:44:47.017421 containerd[1546]: time="2025-05-08T00:44:47.017401753Z" level=info msg="Ensure that sandbox f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb in task-service has been cleanup successfully" May 8 00:44:47.044285 kubelet[2720]: I0508 00:44:47.044248 2720 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:44:47.045845 kubelet[2720]: E0508 00:44:47.045223 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:47.067566 containerd[1546]: time="2025-05-08T00:44:47.067508271Z" level=error msg="StopPodSandbox for \"52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c\" failed" error="failed to destroy network for sandbox \"52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:44:47.069469 kubelet[2720]: E0508 00:44:47.068510 2720 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c" May 8 00:44:47.069469 kubelet[2720]: E0508 00:44:47.068582 2720 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c"} May 8 00:44:47.069469 kubelet[2720]: E0508 00:44:47.068653 2720 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b66dddbc-0758-478a-8371-6dfe5c35d89f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:44:47.069469 kubelet[2720]: E0508 00:44:47.068674 2720 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b66dddbc-0758-478a-8371-6dfe5c35d89f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-657c88d8bb-lsvlt" podUID="b66dddbc-0758-478a-8371-6dfe5c35d89f" May 8 00:44:47.074136 containerd[1546]: time="2025-05-08T00:44:47.074092573Z" level=error msg="StopPodSandbox for \"cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb\" failed" error="failed to destroy network for sandbox \"cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:44:47.076113 kubelet[2720]: E0508 00:44:47.076066 2720 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb" May 8 00:44:47.076180 kubelet[2720]: E0508 00:44:47.076121 2720 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb"} May 8 00:44:47.076180 kubelet[2720]: E0508 00:44:47.076153 2720 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2de83a08-9bcf-4ba2-8674-79ec949e7e5f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/\"" May 8 00:44:47.076276 kubelet[2720]: E0508 00:44:47.076184 2720 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2de83a08-9bcf-4ba2-8674-79ec949e7e5f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ljmwb" podUID="2de83a08-9bcf-4ba2-8674-79ec949e7e5f" May 8 00:44:47.078775 containerd[1546]: time="2025-05-08T00:44:47.078354766Z" level=error msg="StopPodSandbox for \"e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52\" failed" error="failed to destroy network for sandbox \"e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:44:47.081013 kubelet[2720]: E0508 00:44:47.080972 2720 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52" May 8 00:44:47.081092 kubelet[2720]: E0508 00:44:47.081019 2720 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52"} May 8 00:44:47.081092 kubelet[2720]: E0508 00:44:47.081054 2720 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"94c7629a-4f26-4590-8950-ecb623581f56\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:44:47.081092 kubelet[2720]: E0508 00:44:47.081074 2720 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"94c7629a-4f26-4590-8950-ecb623581f56\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79cbdfc48f-gn4vl" podUID="94c7629a-4f26-4590-8950-ecb623581f56" May 8 00:44:47.084571 containerd[1546]: time="2025-05-08T00:44:47.084518205Z" level=error msg="StopPodSandbox for \"520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace\" failed" error="failed to destroy network for sandbox \"520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:44:47.084846 kubelet[2720]: E0508 00:44:47.084799 2720 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace" May 8 00:44:47.084902 kubelet[2720]: E0508 00:44:47.084857 2720 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace"} May 8 00:44:47.084902 kubelet[2720]: E0508 00:44:47.084888 2720 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1270cb28-66ea-435f-99a8-a51538f5c1d9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:44:47.084975 kubelet[2720]: E0508 00:44:47.084910 2720 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1270cb28-66ea-435f-99a8-a51538f5c1d9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79cbdfc48f-87wj8" podUID="1270cb28-66ea-435f-99a8-a51538f5c1d9" May 8 00:44:47.090582 containerd[1546]: time="2025-05-08T00:44:47.090533289Z" level=error msg="StopPodSandbox for \"109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e\" failed" error="failed to destroy network for sandbox \"109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:44:47.091217 kubelet[2720]: E0508 00:44:47.090736 2720 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e" May 8 00:44:47.091217 kubelet[2720]: E0508 00:44:47.090789 2720 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e"} May 8 00:44:47.091217 kubelet[2720]: E0508 00:44:47.090819 2720 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"997a723a-9d1c-45c2-917e-79affe9ca191\" with KillPodSandboxError: \"rpc error: code = 
Unknown desc = failed to destroy network for sandbox \\\"109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:44:47.091217 kubelet[2720]: E0508 00:44:47.090850 2720 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"997a723a-9d1c-45c2-917e-79affe9ca191\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-rsrbh" podUID="997a723a-9d1c-45c2-917e-79affe9ca191" May 8 00:44:47.096931 containerd[1546]: time="2025-05-08T00:44:47.096887200Z" level=error msg="StopPodSandbox for \"f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb\" failed" error="failed to destroy network for sandbox \"f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:44:47.097268 kubelet[2720]: E0508 00:44:47.097086 2720 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb" May 8 00:44:47.097268 kubelet[2720]: E0508 00:44:47.097130 2720 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb"} May 8 00:44:47.097268 kubelet[2720]: E0508 00:44:47.097164 2720 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d8549c49-1445-45ca-b3dc-fb39b68b2e91\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:44:47.097268 kubelet[2720]: E0508 00:44:47.097186 2720 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d8549c49-1445-45ca-b3dc-fb39b68b2e91\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-d7mds" podUID="d8549c49-1445-45ca-b3dc-fb39b68b2e91" May 8 00:44:48.016944 kubelet[2720]: E0508 00:44:48.016901 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:48.157901 systemd[1]: Started sshd@7-10.0.0.155:22-10.0.0.1:45818.service - OpenSSH per-connection server daemon (10.0.0.1:45818). May 8 00:44:48.194299 sshd[3850]: Accepted publickey for core from 10.0.0.1 port 45818 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:44:48.195703 sshd[3850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:44:48.200258 systemd-logind[1522]: New session 8 of user core. May 8 00:44:48.208982 systemd[1]: Started session-8.scope - Session 8 of User core. May 8 00:44:48.347641 sshd[3850]: pam_unix(sshd:session): session closed for user core May 8 00:44:48.351843 systemd-logind[1522]: Session 8 logged out. Waiting for processes to exit. May 8 00:44:48.353073 systemd[1]: sshd@7-10.0.0.155:22-10.0.0.1:45818.service: Deactivated successfully. May 8 00:44:48.355874 systemd[1]: session-8.scope: Deactivated successfully. May 8 00:44:48.357624 systemd-logind[1522]: Removed session 8. May 8 00:44:49.700003 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3325671409.mount: Deactivated successfully. May 8 00:44:49.855333 containerd[1546]: time="2025-05-08T00:44:49.855233081Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:44:49.855803 containerd[1546]: time="2025-05-08T00:44:49.855753624Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" May 8 00:44:49.856628 containerd[1546]: time="2025-05-08T00:44:49.856603794Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:44:49.858522 containerd[1546]: time="2025-05-08T00:44:49.858470330Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:44:49.859466 containerd[1546]: time="2025-05-08T00:44:49.859035351Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 3.855945352s" May 8 00:44:49.859466 containerd[1546]: time="2025-05-08T00:44:49.859064910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" May 8 00:44:49.867157 containerd[1546]: time="2025-05-08T00:44:49.867121272Z" level=info msg="CreateContainer within sandbox \"85c4ff0dee237edec9ca128fbc57bb3ee3120e588071bc96aa8cd402947e5812\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 8 00:44:49.879005 containerd[1546]: time="2025-05-08T00:44:49.878961703Z" level=info msg="CreateContainer within sandbox \"85c4ff0dee237edec9ca128fbc57bb3ee3120e588071bc96aa8cd402947e5812\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"460936837b293bc6f97ba73554ad1ef7095f0262d557561139b4a765d815e1cd\"" May 8 00:44:49.879722 containerd[1546]: time="2025-05-08T00:44:49.879415846Z" level=info msg="StartContainer for 
\"460936837b293bc6f97ba73554ad1ef7095f0262d557561139b4a765d815e1cd\"" May 8 00:44:50.034179 containerd[1546]: time="2025-05-08T00:44:50.033958509Z" level=info msg="StartContainer for \"460936837b293bc6f97ba73554ad1ef7095f0262d557561139b4a765d815e1cd\" returns successfully" May 8 00:44:50.145077 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 8 00:44:50.145226 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 8 00:44:51.050276 kubelet[2720]: E0508 00:44:51.050215 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:51.067883 kubelet[2720]: I0508 00:44:51.067364 2720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-z52mg" podStartSLOduration=2.370588662 podStartE2EDuration="13.067346427s" podCreationTimestamp="2025-05-08 00:44:38 +0000 UTC" firstStartedPulling="2025-05-08 00:44:39.162896884 +0000 UTC m=+21.354476397" lastFinishedPulling="2025-05-08 00:44:49.859654649 +0000 UTC m=+32.051234162" observedRunningTime="2025-05-08 00:44:51.067160068 +0000 UTC m=+33.258739581" watchObservedRunningTime="2025-05-08 00:44:51.067346427 +0000 UTC m=+33.258925940" May 8 00:44:51.549477 kernel: bpftool[4085]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 8 00:44:51.694520 systemd-networkd[1231]: vxlan.calico: Link UP May 8 00:44:51.694531 systemd-networkd[1231]: vxlan.calico: Gained carrier May 8 00:44:52.053221 kubelet[2720]: E0508 00:44:52.051866 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:53.014609 systemd-networkd[1231]: vxlan.calico: Gained IPv6LL May 8 00:44:53.357929 systemd[1]: Started sshd@8-10.0.0.155:22-10.0.0.1:48080.service - OpenSSH per-connection server daemon (10.0.0.1:48080). May 8 00:44:53.392430 sshd[4179]: Accepted publickey for core from 10.0.0.1 port 48080 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:44:53.393877 sshd[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:44:53.399188 systemd-logind[1522]: New session 9 of user core. May 8 00:44:53.410747 systemd[1]: Started session-9.scope - Session 9 of User core. May 8 00:44:53.526383 sshd[4179]: pam_unix(sshd:session): session closed for user core May 8 00:44:53.529700 systemd[1]: sshd@8-10.0.0.155:22-10.0.0.1:48080.service: Deactivated successfully. May 8 00:44:53.531508 systemd-logind[1522]: Session 9 logged out. Waiting for processes to exit. May 8 00:44:53.531622 systemd[1]: session-9.scope: Deactivated successfully. May 8 00:44:53.533073 systemd-logind[1522]: Removed session 9. May 8 00:44:57.889143 containerd[1546]: time="2025-05-08T00:44:57.889045063Z" level=info msg="StopPodSandbox for \"520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace\"" May 8 00:44:58.069435 containerd[1546]: 2025-05-08 00:44:57.982 [INFO][4222] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace" May 8 00:44:58.069435 containerd[1546]: 2025-05-08 00:44:57.982 [INFO][4222] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace" iface="eth0" netns="/var/run/netns/cni-d0ed82a5-dc1d-edaa-0031-4fdbad36a394" May 8 00:44:58.069435 containerd[1546]: 2025-05-08 00:44:57.983 [INFO][4222] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace" iface="eth0" netns="/var/run/netns/cni-d0ed82a5-dc1d-edaa-0031-4fdbad36a394" May 8 00:44:58.069435 containerd[1546]: 2025-05-08 00:44:57.984 [INFO][4222] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace" iface="eth0" netns="/var/run/netns/cni-d0ed82a5-dc1d-edaa-0031-4fdbad36a394" May 8 00:44:58.069435 containerd[1546]: 2025-05-08 00:44:57.984 [INFO][4222] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace" May 8 00:44:58.069435 containerd[1546]: 2025-05-08 00:44:57.984 [INFO][4222] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace" May 8 00:44:58.069435 containerd[1546]: 2025-05-08 00:44:58.055 [INFO][4230] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace" HandleID="k8s-pod-network.520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace" Workload="localhost-k8s-calico--apiserver--79cbdfc48f--87wj8-eth0" May 8 00:44:58.069435 containerd[1546]: 2025-05-08 00:44:58.055 [INFO][4230] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:44:58.069435 containerd[1546]: 2025-05-08 00:44:58.055 [INFO][4230] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:44:58.069435 containerd[1546]: 2025-05-08 00:44:58.064 [WARNING][4230] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace" HandleID="k8s-pod-network.520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace" Workload="localhost-k8s-calico--apiserver--79cbdfc48f--87wj8-eth0" May 8 00:44:58.069435 containerd[1546]: 2025-05-08 00:44:58.064 [INFO][4230] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace" HandleID="k8s-pod-network.520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace" Workload="localhost-k8s-calico--apiserver--79cbdfc48f--87wj8-eth0" May 8 00:44:58.069435 containerd[1546]: 2025-05-08 00:44:58.065 [INFO][4230] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:44:58.069435 containerd[1546]: 2025-05-08 00:44:58.067 [INFO][4222] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace" May 8 00:44:58.072330 containerd[1546]: time="2025-05-08T00:44:58.071754086Z" level=info msg="TearDown network for sandbox \"520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace\" successfully" May 8 00:44:58.072330 containerd[1546]: time="2025-05-08T00:44:58.071789605Z" level=info msg="StopPodSandbox for \"520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace\" returns successfully" May 8 00:44:58.072067 systemd[1]: run-netns-cni\x2dd0ed82a5\x2ddc1d\x2dedaa\x2d0031\x2d4fdbad36a394.mount: Deactivated successfully. 
May 8 00:44:58.072746 containerd[1546]: time="2025-05-08T00:44:58.072411641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79cbdfc48f-87wj8,Uid:1270cb28-66ea-435f-99a8-a51538f5c1d9,Namespace:calico-apiserver,Attempt:1,}" May 8 00:44:58.198847 systemd-networkd[1231]: cali77276882db2: Link UP May 8 00:44:58.199067 systemd-networkd[1231]: cali77276882db2: Gained carrier May 8 00:44:58.214237 containerd[1546]: 2025-05-08 00:44:58.119 [INFO][4241] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--79cbdfc48f--87wj8-eth0 calico-apiserver-79cbdfc48f- calico-apiserver 1270cb28-66ea-435f-99a8-a51538f5c1d9 836 0 2025-05-08 00:44:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:79cbdfc48f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-79cbdfc48f-87wj8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali77276882db2 [] []}} ContainerID="9ec2e7ab6833e59362d8b6042b9bad2a29c1a7f0fbf060fddd9b12d53d9bec38" Namespace="calico-apiserver" Pod="calico-apiserver-79cbdfc48f-87wj8" WorkloadEndpoint="localhost-k8s-calico--apiserver--79cbdfc48f--87wj8-" May 8 00:44:58.214237 containerd[1546]: 2025-05-08 00:44:58.119 [INFO][4241] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9ec2e7ab6833e59362d8b6042b9bad2a29c1a7f0fbf060fddd9b12d53d9bec38" Namespace="calico-apiserver" Pod="calico-apiserver-79cbdfc48f-87wj8" WorkloadEndpoint="localhost-k8s-calico--apiserver--79cbdfc48f--87wj8-eth0" May 8 00:44:58.214237 containerd[1546]: 2025-05-08 00:44:58.148 [INFO][4255] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9ec2e7ab6833e59362d8b6042b9bad2a29c1a7f0fbf060fddd9b12d53d9bec38" HandleID="k8s-pod-network.9ec2e7ab6833e59362d8b6042b9bad2a29c1a7f0fbf060fddd9b12d53d9bec38" Workload="localhost-k8s-calico--apiserver--79cbdfc48f--87wj8-eth0" May 8 00:44:58.214237 containerd[1546]: 2025-05-08 00:44:58.166 [INFO][4255] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9ec2e7ab6833e59362d8b6042b9bad2a29c1a7f0fbf060fddd9b12d53d9bec38" HandleID="k8s-pod-network.9ec2e7ab6833e59362d8b6042b9bad2a29c1a7f0fbf060fddd9b12d53d9bec38" Workload="localhost-k8s-calico--apiserver--79cbdfc48f--87wj8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000305810), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-79cbdfc48f-87wj8", "timestamp":"2025-05-08 00:44:58.148323801 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:44:58.214237 containerd[1546]: 2025-05-08 00:44:58.166 [INFO][4255] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:44:58.214237 containerd[1546]: 2025-05-08 00:44:58.166 [INFO][4255] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:44:58.214237 containerd[1546]: 2025-05-08 00:44:58.166 [INFO][4255] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:44:58.214237 containerd[1546]: 2025-05-08 00:44:58.168 [INFO][4255] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9ec2e7ab6833e59362d8b6042b9bad2a29c1a7f0fbf060fddd9b12d53d9bec38" host="localhost" May 8 00:44:58.214237 containerd[1546]: 2025-05-08 00:44:58.173 [INFO][4255] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:44:58.214237 containerd[1546]: 2025-05-08 00:44:58.177 [INFO][4255] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:44:58.214237 containerd[1546]: 2025-05-08 00:44:58.179 [INFO][4255] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:44:58.214237 containerd[1546]: 2025-05-08 00:44:58.181 [INFO][4255] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:44:58.214237 containerd[1546]: 2025-05-08 00:44:58.182 [INFO][4255] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9ec2e7ab6833e59362d8b6042b9bad2a29c1a7f0fbf060fddd9b12d53d9bec38" host="localhost" May 8 00:44:58.214237 containerd[1546]: 2025-05-08 00:44:58.183 [INFO][4255] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9ec2e7ab6833e59362d8b6042b9bad2a29c1a7f0fbf060fddd9b12d53d9bec38 May 8 00:44:58.214237 containerd[1546]: 2025-05-08 00:44:58.188 [INFO][4255] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9ec2e7ab6833e59362d8b6042b9bad2a29c1a7f0fbf060fddd9b12d53d9bec38" host="localhost" May 8 00:44:58.214237 containerd[1546]: 2025-05-08 00:44:58.193 [INFO][4255] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.9ec2e7ab6833e59362d8b6042b9bad2a29c1a7f0fbf060fddd9b12d53d9bec38" host="localhost" May 8 00:44:58.214237 containerd[1546]: 2025-05-08 00:44:58.193 [INFO][4255] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.9ec2e7ab6833e59362d8b6042b9bad2a29c1a7f0fbf060fddd9b12d53d9bec38" host="localhost" May 8 00:44:58.214237 containerd[1546]: 2025-05-08 00:44:58.194 [INFO][4255] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 8 00:44:58.214237 containerd[1546]: 2025-05-08 00:44:58.194 [INFO][4255] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="9ec2e7ab6833e59362d8b6042b9bad2a29c1a7f0fbf060fddd9b12d53d9bec38" HandleID="k8s-pod-network.9ec2e7ab6833e59362d8b6042b9bad2a29c1a7f0fbf060fddd9b12d53d9bec38" Workload="localhost-k8s-calico--apiserver--79cbdfc48f--87wj8-eth0" May 8 00:44:58.214990 containerd[1546]: 2025-05-08 00:44:58.196 [INFO][4241] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9ec2e7ab6833e59362d8b6042b9bad2a29c1a7f0fbf060fddd9b12d53d9bec38" Namespace="calico-apiserver" Pod="calico-apiserver-79cbdfc48f-87wj8" WorkloadEndpoint="localhost-k8s-calico--apiserver--79cbdfc48f--87wj8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79cbdfc48f--87wj8-eth0", GenerateName:"calico-apiserver-79cbdfc48f-", Namespace:"calico-apiserver", SelfLink:"", UID:"1270cb28-66ea-435f-99a8-a51538f5c1d9", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 44, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79cbdfc48f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-79cbdfc48f-87wj8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali77276882db2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:44:58.214990 containerd[1546]: 2025-05-08 00:44:58.196 [INFO][4241] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="9ec2e7ab6833e59362d8b6042b9bad2a29c1a7f0fbf060fddd9b12d53d9bec38" Namespace="calico-apiserver" Pod="calico-apiserver-79cbdfc48f-87wj8" WorkloadEndpoint="localhost-k8s-calico--apiserver--79cbdfc48f--87wj8-eth0" May 8 00:44:58.214990 containerd[1546]: 2025-05-08 00:44:58.196 [INFO][4241] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali77276882db2 ContainerID="9ec2e7ab6833e59362d8b6042b9bad2a29c1a7f0fbf060fddd9b12d53d9bec38" Namespace="calico-apiserver" Pod="calico-apiserver-79cbdfc48f-87wj8" WorkloadEndpoint="localhost-k8s-calico--apiserver--79cbdfc48f--87wj8-eth0" May 8 00:44:58.214990 containerd[1546]: 2025-05-08 00:44:58.199 [INFO][4241] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9ec2e7ab6833e59362d8b6042b9bad2a29c1a7f0fbf060fddd9b12d53d9bec38" Namespace="calico-apiserver" Pod="calico-apiserver-79cbdfc48f-87wj8" WorkloadEndpoint="localhost-k8s-calico--apiserver--79cbdfc48f--87wj8-eth0" May 8 00:44:58.214990 containerd[1546]: 2025-05-08 00:44:58.200 [INFO][4241] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9ec2e7ab6833e59362d8b6042b9bad2a29c1a7f0fbf060fddd9b12d53d9bec38" 
Namespace="calico-apiserver" Pod="calico-apiserver-79cbdfc48f-87wj8" WorkloadEndpoint="localhost-k8s-calico--apiserver--79cbdfc48f--87wj8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79cbdfc48f--87wj8-eth0", GenerateName:"calico-apiserver-79cbdfc48f-", Namespace:"calico-apiserver", SelfLink:"", UID:"1270cb28-66ea-435f-99a8-a51538f5c1d9", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 44, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79cbdfc48f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9ec2e7ab6833e59362d8b6042b9bad2a29c1a7f0fbf060fddd9b12d53d9bec38", Pod:"calico-apiserver-79cbdfc48f-87wj8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali77276882db2", MAC:"2e:e4:0b:f8:1a:39", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:44:58.214990 containerd[1546]: 2025-05-08 00:44:58.208 [INFO][4241] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9ec2e7ab6833e59362d8b6042b9bad2a29c1a7f0fbf060fddd9b12d53d9bec38" Namespace="calico-apiserver" Pod="calico-apiserver-79cbdfc48f-87wj8" WorkloadEndpoint="localhost-k8s-calico--apiserver--79cbdfc48f--87wj8-eth0" May 8 00:44:58.234668 containerd[1546]: time="2025-05-08T00:44:58.234432095Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:44:58.234668 containerd[1546]: time="2025-05-08T00:44:58.234506975Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:44:58.234668 containerd[1546]: time="2025-05-08T00:44:58.234517815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:44:58.234838 containerd[1546]: time="2025-05-08T00:44:58.234609134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:44:58.255420 systemd-resolved[1438]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:44:58.276196 containerd[1546]: time="2025-05-08T00:44:58.276159351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79cbdfc48f-87wj8,Uid:1270cb28-66ea-435f-99a8-a51538f5c1d9,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"9ec2e7ab6833e59362d8b6042b9bad2a29c1a7f0fbf060fddd9b12d53d9bec38\"" May 8 00:44:58.278314 containerd[1546]: time="2025-05-08T00:44:58.277880900Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 8 00:44:58.545705 systemd[1]: Started sshd@9-10.0.0.155:22-10.0.0.1:48086.service - OpenSSH per-connection server daemon (10.0.0.1:48086). May 8 00:44:58.584967 sshd[4320]: Accepted publickey for core from 10.0.0.1 port 48086 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:44:58.586551 sshd[4320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:44:58.590514 systemd-logind[1522]: New session 10 of user core. May 8 00:44:58.596768 systemd[1]: Started session-10.scope - Session 10 of User core. May 8 00:44:58.724874 sshd[4320]: pam_unix(sshd:session): session closed for user core May 8 00:44:58.730684 systemd[1]: Started sshd@10-10.0.0.155:22-10.0.0.1:48090.service - OpenSSH per-connection server daemon (10.0.0.1:48090). May 8 00:44:58.731050 systemd[1]: sshd@9-10.0.0.155:22-10.0.0.1:48086.service: Deactivated successfully. May 8 00:44:58.734548 systemd-logind[1522]: Session 10 logged out. Waiting for processes to exit. May 8 00:44:58.734873 systemd[1]: session-10.scope: Deactivated successfully. May 8 00:44:58.736025 systemd-logind[1522]: Removed session 10. May 8 00:44:58.759681 sshd[4333]: Accepted publickey for core from 10.0.0.1 port 48090 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:44:58.760932 sshd[4333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:44:58.764680 systemd-logind[1522]: New session 11 of user core. May 8 00:44:58.776695 systemd[1]: Started session-11.scope - Session 11 of User core. May 8 00:44:58.940212 sshd[4333]: pam_unix(sshd:session): session closed for user core May 8 00:44:58.951590 systemd[1]: Started sshd@11-10.0.0.155:22-10.0.0.1:48096.service - OpenSSH per-connection server daemon (10.0.0.1:48096). May 8 00:44:58.952951 systemd[1]: sshd@10-10.0.0.155:22-10.0.0.1:48090.service: Deactivated successfully. May 8 00:44:58.969558 systemd[1]: session-11.scope: Deactivated successfully. May 8 00:44:58.971008 systemd-logind[1522]: Session 11 logged out. Waiting for processes to exit. May 8 00:44:58.982517 systemd-logind[1522]: Removed session 11. May 8 00:44:59.013007 sshd[4346]: Accepted publickey for core from 10.0.0.1 port 48096 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:44:59.014391 sshd[4346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:44:59.019681 systemd-logind[1522]: New session 12 of user core. May 8 00:44:59.026760 systemd[1]: Started session-12.scope - Session 12 of User core. May 8 00:44:59.141409 sshd[4346]: pam_unix(sshd:session): session closed for user core May 8 00:44:59.147583 systemd[1]: sshd@11-10.0.0.155:22-10.0.0.1:48096.service: Deactivated successfully. May 8 00:44:59.150554 systemd[1]: session-12.scope: Deactivated successfully. 
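[annotation] The [4255] ipam trace above is a compact walkthrough of Calico's block-affinity allocation: look up the blocks affine to this host, try 192.168.88.128/26, load and confirm the block, assign one address from it, create a handle for the sandbox, then write the block back to claim the IP, all under the host-wide IPAM lock, yielding 192.168.88.129/26. Below is a minimal in-memory sketch of just the assignment step (illustrative Go, not Calico's allocator: the real code persists blocks and handles in the datastore, and the seeding of .128 as unavailable is assumed for illustration):

    package main

    import (
        "fmt"
        "net"
    )

    // block models one affine IPAM block: a CIDR plus a record of which
    // addresses are already claimed and by which handle.
    type block struct {
        cidr net.IPNet
        used map[string]string // address -> IPAM handle that claimed it
    }

    // assign claims the first free address for handleID, mirroring
    // "Attempting to assign 1 addresses from block" followed by
    // "Writing block in order to claim IPs".
    func (b *block) assign(handleID string) (net.IP, bool) {
        for ip := b.cidr.IP.Mask(b.cidr.Mask); b.cidr.Contains(ip); ip = next(ip) {
            if _, taken := b.used[ip.String()]; !taken {
                b.used[ip.String()] = handleID
                return ip, true
            }
        }
        return nil, false
    }

    // next returns ip + 1, carrying across octets.
    func next(ip net.IP) net.IP {
        out := make(net.IP, len(ip))
        copy(out, ip)
        for i := len(out) - 1; i >= 0; i-- {
            out[i]++
            if out[i] != 0 {
                break
            }
        }
        return out
    }

    func main() {
        _, cidr, err := net.ParseCIDR("192.168.88.128/26") // the host's affine block
        if err != nil {
            panic(err)
        }
        b := &block{cidr: *cidr, used: map[string]string{
            "192.168.88.128": "reserved", // assumed: network address left unassigned
        }}
        if ip, ok := b.assign("k8s-pod-network.9ec2e7ab6833e59362d8b6042b9bad2a29c1a7f0fbf060fddd9b12d53d9bec38"); ok {
            fmt.Println("claimed", ip) // prints 192.168.88.129, matching the log
        }
    }
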
May 8 00:44:59.153468 systemd-logind[1522]: Session 12 logged out. Waiting for processes to exit. May 8 00:44:59.155409 systemd-logind[1522]: Removed session 12. May 8 00:44:59.798798 systemd-networkd[1231]: cali77276882db2: Gained IPv6LL May 8 00:44:59.889949 containerd[1546]: time="2025-05-08T00:44:59.889636241Z" level=info msg="StopPodSandbox for \"cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb\"" May 8 00:44:59.895353 containerd[1546]: time="2025-05-08T00:44:59.895314966Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:44:59.895777 containerd[1546]: time="2025-05-08T00:44:59.895744603Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" May 8 00:44:59.896880 containerd[1546]: time="2025-05-08T00:44:59.896852957Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:44:59.898694 containerd[1546]: time="2025-05-08T00:44:59.898666985Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:44:59.899681 containerd[1546]: time="2025-05-08T00:44:59.899648979Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 1.621715519s" May 8 00:44:59.899737 containerd[1546]: time="2025-05-08T00:44:59.899685659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 8 00:44:59.902046 containerd[1546]: time="2025-05-08T00:44:59.901955525Z" level=info msg="CreateContainer within sandbox \"9ec2e7ab6833e59362d8b6042b9bad2a29c1a7f0fbf060fddd9b12d53d9bec38\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 8 00:44:59.914488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2520936376.mount: Deactivated successfully. May 8 00:44:59.917495 containerd[1546]: time="2025-05-08T00:44:59.917357230Z" level=info msg="CreateContainer within sandbox \"9ec2e7ab6833e59362d8b6042b9bad2a29c1a7f0fbf060fddd9b12d53d9bec38\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c04cf5fb6ad50daa6e48e9fc46e31481371fabfb1f5239d982eb5139f333bbd7\"" May 8 00:44:59.919445 containerd[1546]: time="2025-05-08T00:44:59.919416777Z" level=info msg="StartContainer for \"c04cf5fb6ad50daa6e48e9fc46e31481371fabfb1f5239d982eb5139f333bbd7\"" May 8 00:44:59.990427 containerd[1546]: time="2025-05-08T00:44:59.988289513Z" level=info msg="StartContainer for \"c04cf5fb6ad50daa6e48e9fc46e31481371fabfb1f5239d982eb5139f333bbd7\" returns successfully" May 8 00:45:00.003049 containerd[1546]: 2025-05-08 00:44:59.952 [INFO][4388] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb" May 8 00:45:00.003049 containerd[1546]: 2025-05-08 00:44:59.952 [INFO][4388] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb" iface="eth0" netns="/var/run/netns/cni-5a3c4192-3050-d0e4-0ecf-e7fb215ef44b" May 8 00:45:00.003049 containerd[1546]: 2025-05-08 00:44:59.952 [INFO][4388] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb" iface="eth0" netns="/var/run/netns/cni-5a3c4192-3050-d0e4-0ecf-e7fb215ef44b" May 8 00:45:00.003049 containerd[1546]: 2025-05-08 00:44:59.952 [INFO][4388] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb" iface="eth0" netns="/var/run/netns/cni-5a3c4192-3050-d0e4-0ecf-e7fb215ef44b" May 8 00:45:00.003049 containerd[1546]: 2025-05-08 00:44:59.953 [INFO][4388] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb" May 8 00:45:00.003049 containerd[1546]: 2025-05-08 00:44:59.953 [INFO][4388] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb" May 8 00:45:00.003049 containerd[1546]: 2025-05-08 00:44:59.983 [INFO][4420] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb" HandleID="k8s-pod-network.cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb" Workload="localhost-k8s-csi--node--driver--ljmwb-eth0" May 8 00:45:00.003049 containerd[1546]: 2025-05-08 00:44:59.983 [INFO][4420] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:45:00.003049 containerd[1546]: 2025-05-08 00:44:59.983 [INFO][4420] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:45:00.003049 containerd[1546]: 2025-05-08 00:44:59.993 [WARNING][4420] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb" HandleID="k8s-pod-network.cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb" Workload="localhost-k8s-csi--node--driver--ljmwb-eth0" May 8 00:45:00.003049 containerd[1546]: 2025-05-08 00:44:59.993 [INFO][4420] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb" HandleID="k8s-pod-network.cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb" Workload="localhost-k8s-csi--node--driver--ljmwb-eth0" May 8 00:45:00.003049 containerd[1546]: 2025-05-08 00:44:59.996 [INFO][4420] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:45:00.003049 containerd[1546]: 2025-05-08 00:44:59.999 [INFO][4388] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb" May 8 00:45:00.003607 containerd[1546]: time="2025-05-08T00:45:00.003468780Z" level=info msg="TearDown network for sandbox \"cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb\" successfully" May 8 00:45:00.003607 containerd[1546]: time="2025-05-08T00:45:00.003501340Z" level=info msg="StopPodSandbox for \"cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb\" returns successfully" May 8 00:45:00.004205 containerd[1546]: time="2025-05-08T00:45:00.004177216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ljmwb,Uid:2de83a08-9bcf-4ba2-8674-79ec949e7e5f,Namespace:calico-system,Attempt:1,}" May 8 00:45:00.006380 systemd[1]: run-netns-cni\x2d5a3c4192\x2d3050\x2dd0e4\x2d0ecf\x2de7fb215ef44b.mount: Deactivated successfully. May 8 00:45:00.113498 kubelet[2720]: I0508 00:45:00.113354 2720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-79cbdfc48f-87wj8" podStartSLOduration=21.490354609 podStartE2EDuration="23.113337121s" podCreationTimestamp="2025-05-08 00:44:37 +0000 UTC" firstStartedPulling="2025-05-08 00:44:58.277600342 +0000 UTC m=+40.469179815" lastFinishedPulling="2025-05-08 00:44:59.900582814 +0000 UTC m=+42.092162327" observedRunningTime="2025-05-08 00:45:00.113053322 +0000 UTC m=+42.304632875" watchObservedRunningTime="2025-05-08 00:45:00.113337121 +0000 UTC m=+42.304916634" May 8 00:45:00.160977 systemd-networkd[1231]: cali61a5a46cbe6: Link UP May 8 00:45:00.162081 systemd-networkd[1231]: cali61a5a46cbe6: Gained carrier May 8 00:45:00.174376 containerd[1546]: 2025-05-08 00:45:00.058 [INFO][4440] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--ljmwb-eth0 csi-node-driver- calico-system 2de83a08-9bcf-4ba2-8674-79ec949e7e5f 865 0 2025-05-08 00:44:38 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-ljmwb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali61a5a46cbe6 [] []}} ContainerID="8dd2d74101bec4b8b46a30227229a55e3c7f70655db91c1c7c817de6006fe4a5" Namespace="calico-system" Pod="csi-node-driver-ljmwb" WorkloadEndpoint="localhost-k8s-csi--node--driver--ljmwb-" May 8 00:45:00.174376 containerd[1546]: 2025-05-08 00:45:00.059 [INFO][4440] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8dd2d74101bec4b8b46a30227229a55e3c7f70655db91c1c7c817de6006fe4a5" Namespace="calico-system" Pod="csi-node-driver-ljmwb" WorkloadEndpoint="localhost-k8s-csi--node--driver--ljmwb-eth0" May 8 00:45:00.174376 containerd[1546]: 2025-05-08 00:45:00.105 [INFO][4456] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8dd2d74101bec4b8b46a30227229a55e3c7f70655db91c1c7c817de6006fe4a5" HandleID="k8s-pod-network.8dd2d74101bec4b8b46a30227229a55e3c7f70655db91c1c7c817de6006fe4a5" Workload="localhost-k8s-csi--node--driver--ljmwb-eth0" May 8 00:45:00.174376 containerd[1546]: 2025-05-08 00:45:00.129 [INFO][4456] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8dd2d74101bec4b8b46a30227229a55e3c7f70655db91c1c7c817de6006fe4a5" 
HandleID="k8s-pod-network.8dd2d74101bec4b8b46a30227229a55e3c7f70655db91c1c7c817de6006fe4a5" Workload="localhost-k8s-csi--node--driver--ljmwb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001330b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-ljmwb", "timestamp":"2025-05-08 00:45:00.105024171 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:45:00.174376 containerd[1546]: 2025-05-08 00:45:00.129 [INFO][4456] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:45:00.174376 containerd[1546]: 2025-05-08 00:45:00.129 [INFO][4456] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:45:00.174376 containerd[1546]: 2025-05-08 00:45:00.129 [INFO][4456] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:45:00.174376 containerd[1546]: 2025-05-08 00:45:00.131 [INFO][4456] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8dd2d74101bec4b8b46a30227229a55e3c7f70655db91c1c7c817de6006fe4a5" host="localhost" May 8 00:45:00.174376 containerd[1546]: 2025-05-08 00:45:00.136 [INFO][4456] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:45:00.174376 containerd[1546]: 2025-05-08 00:45:00.140 [INFO][4456] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:45:00.174376 containerd[1546]: 2025-05-08 00:45:00.142 [INFO][4456] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:45:00.174376 containerd[1546]: 2025-05-08 00:45:00.144 [INFO][4456] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:45:00.174376 containerd[1546]: 2025-05-08 00:45:00.144 [INFO][4456] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8dd2d74101bec4b8b46a30227229a55e3c7f70655db91c1c7c817de6006fe4a5" host="localhost" May 8 00:45:00.174376 containerd[1546]: 2025-05-08 00:45:00.145 [INFO][4456] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8dd2d74101bec4b8b46a30227229a55e3c7f70655db91c1c7c817de6006fe4a5 May 8 00:45:00.174376 containerd[1546]: 2025-05-08 00:45:00.148 [INFO][4456] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8dd2d74101bec4b8b46a30227229a55e3c7f70655db91c1c7c817de6006fe4a5" host="localhost" May 8 00:45:00.174376 containerd[1546]: 2025-05-08 00:45:00.154 [INFO][4456] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.8dd2d74101bec4b8b46a30227229a55e3c7f70655db91c1c7c817de6006fe4a5" host="localhost" May 8 00:45:00.174376 containerd[1546]: 2025-05-08 00:45:00.154 [INFO][4456] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.8dd2d74101bec4b8b46a30227229a55e3c7f70655db91c1c7c817de6006fe4a5" host="localhost" May 8 00:45:00.174376 containerd[1546]: 2025-05-08 00:45:00.154 [INFO][4456] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 8 00:45:00.174376 containerd[1546]: 2025-05-08 00:45:00.154 [INFO][4456] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="8dd2d74101bec4b8b46a30227229a55e3c7f70655db91c1c7c817de6006fe4a5" HandleID="k8s-pod-network.8dd2d74101bec4b8b46a30227229a55e3c7f70655db91c1c7c817de6006fe4a5" Workload="localhost-k8s-csi--node--driver--ljmwb-eth0" May 8 00:45:00.175168 containerd[1546]: 2025-05-08 00:45:00.157 [INFO][4440] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8dd2d74101bec4b8b46a30227229a55e3c7f70655db91c1c7c817de6006fe4a5" Namespace="calico-system" Pod="csi-node-driver-ljmwb" WorkloadEndpoint="localhost-k8s-csi--node--driver--ljmwb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ljmwb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2de83a08-9bcf-4ba2-8674-79ec949e7e5f", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 44, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-ljmwb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali61a5a46cbe6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:45:00.175168 containerd[1546]: 2025-05-08 00:45:00.158 [INFO][4440] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="8dd2d74101bec4b8b46a30227229a55e3c7f70655db91c1c7c817de6006fe4a5" Namespace="calico-system" Pod="csi-node-driver-ljmwb" WorkloadEndpoint="localhost-k8s-csi--node--driver--ljmwb-eth0" May 8 00:45:00.175168 containerd[1546]: 2025-05-08 00:45:00.158 [INFO][4440] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali61a5a46cbe6 ContainerID="8dd2d74101bec4b8b46a30227229a55e3c7f70655db91c1c7c817de6006fe4a5" Namespace="calico-system" Pod="csi-node-driver-ljmwb" WorkloadEndpoint="localhost-k8s-csi--node--driver--ljmwb-eth0" May 8 00:45:00.175168 containerd[1546]: 2025-05-08 00:45:00.161 [INFO][4440] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8dd2d74101bec4b8b46a30227229a55e3c7f70655db91c1c7c817de6006fe4a5" Namespace="calico-system" Pod="csi-node-driver-ljmwb" WorkloadEndpoint="localhost-k8s-csi--node--driver--ljmwb-eth0" May 8 00:45:00.175168 containerd[1546]: 2025-05-08 00:45:00.162 [INFO][4440] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8dd2d74101bec4b8b46a30227229a55e3c7f70655db91c1c7c817de6006fe4a5" Namespace="calico-system" Pod="csi-node-driver-ljmwb" WorkloadEndpoint="localhost-k8s-csi--node--driver--ljmwb-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ljmwb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2de83a08-9bcf-4ba2-8674-79ec949e7e5f", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 44, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8dd2d74101bec4b8b46a30227229a55e3c7f70655db91c1c7c817de6006fe4a5", Pod:"csi-node-driver-ljmwb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali61a5a46cbe6", MAC:"2a:98:f6:94:c4:14", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:45:00.175168 containerd[1546]: 2025-05-08 00:45:00.172 [INFO][4440] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8dd2d74101bec4b8b46a30227229a55e3c7f70655db91c1c7c817de6006fe4a5" Namespace="calico-system" Pod="csi-node-driver-ljmwb" WorkloadEndpoint="localhost-k8s-csi--node--driver--ljmwb-eth0" May 8 00:45:00.261352 containerd[1546]: time="2025-05-08T00:45:00.261021435Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:45:00.261547 containerd[1546]: time="2025-05-08T00:45:00.261352553Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:45:00.261547 containerd[1546]: time="2025-05-08T00:45:00.261376472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:45:00.262116 containerd[1546]: time="2025-05-08T00:45:00.262081468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:45:00.283082 systemd-resolved[1438]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:45:00.298062 containerd[1546]: time="2025-05-08T00:45:00.298028493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ljmwb,Uid:2de83a08-9bcf-4ba2-8674-79ec949e7e5f,Namespace:calico-system,Attempt:1,} returns sandbox id \"8dd2d74101bec4b8b46a30227229a55e3c7f70655db91c1c7c817de6006fe4a5\"" May 8 00:45:00.299438 containerd[1546]: time="2025-05-08T00:45:00.299419084Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 8 00:45:00.888107 containerd[1546]: time="2025-05-08T00:45:00.888061833Z" level=info msg="StopPodSandbox for \"109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e\"" May 8 00:45:00.888255 containerd[1546]: time="2025-05-08T00:45:00.888077393Z" level=info msg="StopPodSandbox for \"f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb\"" May 8 00:45:01.003760 containerd[1546]: 2025-05-08 00:45:00.942 [INFO][4557] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb" May 8 00:45:01.003760 containerd[1546]: 2025-05-08 00:45:00.942 [INFO][4557] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb" iface="eth0" netns="/var/run/netns/cni-06479d74-308e-23d1-e558-160ad66ae958" May 8 00:45:01.003760 containerd[1546]: 2025-05-08 00:45:00.942 [INFO][4557] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb" iface="eth0" netns="/var/run/netns/cni-06479d74-308e-23d1-e558-160ad66ae958" May 8 00:45:01.003760 containerd[1546]: 2025-05-08 00:45:00.942 [INFO][4557] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb" iface="eth0" netns="/var/run/netns/cni-06479d74-308e-23d1-e558-160ad66ae958" May 8 00:45:01.003760 containerd[1546]: 2025-05-08 00:45:00.942 [INFO][4557] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb" May 8 00:45:01.003760 containerd[1546]: 2025-05-08 00:45:00.942 [INFO][4557] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb" May 8 00:45:01.003760 containerd[1546]: 2025-05-08 00:45:00.974 [INFO][4573] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb" HandleID="k8s-pod-network.f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb" Workload="localhost-k8s-coredns--7db6d8ff4d--d7mds-eth0" May 8 00:45:01.003760 containerd[1546]: 2025-05-08 00:45:00.974 [INFO][4573] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:45:01.003760 containerd[1546]: 2025-05-08 00:45:00.974 [INFO][4573] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:45:01.003760 containerd[1546]: 2025-05-08 00:45:00.985 [WARNING][4573] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb" HandleID="k8s-pod-network.f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb" Workload="localhost-k8s-coredns--7db6d8ff4d--d7mds-eth0" May 8 00:45:01.003760 containerd[1546]: 2025-05-08 00:45:00.985 [INFO][4573] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb" HandleID="k8s-pod-network.f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb" Workload="localhost-k8s-coredns--7db6d8ff4d--d7mds-eth0" May 8 00:45:01.003760 containerd[1546]: 2025-05-08 00:45:00.997 [INFO][4573] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:45:01.003760 containerd[1546]: 2025-05-08 00:45:01.000 [INFO][4557] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb" May 8 00:45:01.006521 systemd[1]: run-netns-cni\x2d06479d74\x2d308e\x2d23d1\x2de558\x2d160ad66ae958.mount: Deactivated successfully. May 8 00:45:01.006940 containerd[1546]: time="2025-05-08T00:45:01.006513043Z" level=info msg="TearDown network for sandbox \"f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb\" successfully" May 8 00:45:01.006940 containerd[1546]: time="2025-05-08T00:45:01.006548203Z" level=info msg="StopPodSandbox for \"f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb\" returns successfully" May 8 00:45:01.008324 kubelet[2720]: E0508 00:45:01.008184 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:45:01.011137 containerd[1546]: time="2025-05-08T00:45:01.009990263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-d7mds,Uid:d8549c49-1445-45ca-b3dc-fb39b68b2e91,Namespace:kube-system,Attempt:1,}" May 8 00:45:01.017614 containerd[1546]: 2025-05-08 00:45:00.956 [INFO][4558] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e" May 8 00:45:01.017614 containerd[1546]: 2025-05-08 00:45:00.956 [INFO][4558] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e" iface="eth0" netns="/var/run/netns/cni-037b5625-05c9-4b1f-02bf-adb11186ff1d" May 8 00:45:01.017614 containerd[1546]: 2025-05-08 00:45:00.956 [INFO][4558] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e" iface="eth0" netns="/var/run/netns/cni-037b5625-05c9-4b1f-02bf-adb11186ff1d" May 8 00:45:01.017614 containerd[1546]: 2025-05-08 00:45:00.956 [INFO][4558] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e" iface="eth0" netns="/var/run/netns/cni-037b5625-05c9-4b1f-02bf-adb11186ff1d" May 8 00:45:01.017614 containerd[1546]: 2025-05-08 00:45:00.956 [INFO][4558] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e" May 8 00:45:01.017614 containerd[1546]: 2025-05-08 00:45:00.956 [INFO][4558] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e" May 8 00:45:01.017614 containerd[1546]: 2025-05-08 00:45:01.000 [INFO][4579] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e" HandleID="k8s-pod-network.109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e" Workload="localhost-k8s-coredns--7db6d8ff4d--rsrbh-eth0" May 8 00:45:01.017614 containerd[1546]: 2025-05-08 00:45:01.000 [INFO][4579] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:45:01.017614 containerd[1546]: 2025-05-08 00:45:01.000 [INFO][4579] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:45:01.017614 containerd[1546]: 2025-05-08 00:45:01.010 [WARNING][4579] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e" HandleID="k8s-pod-network.109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e" Workload="localhost-k8s-coredns--7db6d8ff4d--rsrbh-eth0" May 8 00:45:01.017614 containerd[1546]: 2025-05-08 00:45:01.010 [INFO][4579] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e" HandleID="k8s-pod-network.109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e" Workload="localhost-k8s-coredns--7db6d8ff4d--rsrbh-eth0" May 8 00:45:01.017614 containerd[1546]: 2025-05-08 00:45:01.013 [INFO][4579] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:45:01.017614 containerd[1546]: 2025-05-08 00:45:01.015 [INFO][4558] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e" May 8 00:45:01.018570 containerd[1546]: time="2025-05-08T00:45:01.018539093Z" level=info msg="TearDown network for sandbox \"109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e\" successfully" May 8 00:45:01.018570 containerd[1546]: time="2025-05-08T00:45:01.018569613Z" level=info msg="StopPodSandbox for \"109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e\" returns successfully" May 8 00:45:01.018927 kubelet[2720]: E0508 00:45:01.018896 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:45:01.019895 containerd[1546]: time="2025-05-08T00:45:01.019872485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rsrbh,Uid:997a723a-9d1c-45c2-917e-79affe9ca191,Namespace:kube-system,Attempt:1,}" May 8 00:45:01.020244 systemd[1]: run-netns-cni\x2d037b5625\x2d05c9\x2d4b1f\x2d02bf\x2dadb11186ff1d.mount: Deactivated successfully. 
May 8 00:45:01.086647 kubelet[2720]: I0508 00:45:01.086620 2720 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:45:01.149600 systemd-networkd[1231]: cali83062af07fe: Link UP May 8 00:45:01.150836 systemd-networkd[1231]: cali83062af07fe: Gained carrier May 8 00:45:01.167256 containerd[1546]: 2025-05-08 00:45:01.065 [INFO][4590] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--d7mds-eth0 coredns-7db6d8ff4d- kube-system d8549c49-1445-45ca-b3dc-fb39b68b2e91 891 0 2025-05-08 00:44:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-d7mds eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali83062af07fe [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="45dad554dcdc41757d56a3d73ff812b51dad4a60444537ae96d367a1860f78ad" Namespace="kube-system" Pod="coredns-7db6d8ff4d-d7mds" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--d7mds-" May 8 00:45:01.167256 containerd[1546]: 2025-05-08 00:45:01.065 [INFO][4590] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="45dad554dcdc41757d56a3d73ff812b51dad4a60444537ae96d367a1860f78ad" Namespace="kube-system" Pod="coredns-7db6d8ff4d-d7mds" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--d7mds-eth0" May 8 00:45:01.167256 containerd[1546]: 2025-05-08 00:45:01.101 [INFO][4618] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="45dad554dcdc41757d56a3d73ff812b51dad4a60444537ae96d367a1860f78ad" HandleID="k8s-pod-network.45dad554dcdc41757d56a3d73ff812b51dad4a60444537ae96d367a1860f78ad" Workload="localhost-k8s-coredns--7db6d8ff4d--d7mds-eth0" May 8 00:45:01.167256 containerd[1546]: 2025-05-08 00:45:01.113 [INFO][4618] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="45dad554dcdc41757d56a3d73ff812b51dad4a60444537ae96d367a1860f78ad" HandleID="k8s-pod-network.45dad554dcdc41757d56a3d73ff812b51dad4a60444537ae96d367a1860f78ad" Workload="localhost-k8s-coredns--7db6d8ff4d--d7mds-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400042baf0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-d7mds", "timestamp":"2025-05-08 00:45:01.101116931 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:45:01.167256 containerd[1546]: 2025-05-08 00:45:01.113 [INFO][4618] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:45:01.167256 containerd[1546]: 2025-05-08 00:45:01.113 [INFO][4618] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:45:01.167256 containerd[1546]: 2025-05-08 00:45:01.113 [INFO][4618] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:45:01.167256 containerd[1546]: 2025-05-08 00:45:01.119 [INFO][4618] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.45dad554dcdc41757d56a3d73ff812b51dad4a60444537ae96d367a1860f78ad" host="localhost" May 8 00:45:01.167256 containerd[1546]: 2025-05-08 00:45:01.123 [INFO][4618] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:45:01.167256 containerd[1546]: 2025-05-08 00:45:01.127 [INFO][4618] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:45:01.167256 containerd[1546]: 2025-05-08 00:45:01.128 [INFO][4618] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:45:01.167256 containerd[1546]: 2025-05-08 00:45:01.130 [INFO][4618] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:45:01.167256 containerd[1546]: 2025-05-08 00:45:01.130 [INFO][4618] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.45dad554dcdc41757d56a3d73ff812b51dad4a60444537ae96d367a1860f78ad" host="localhost" May 8 00:45:01.167256 containerd[1546]: 2025-05-08 00:45:01.132 [INFO][4618] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.45dad554dcdc41757d56a3d73ff812b51dad4a60444537ae96d367a1860f78ad May 8 00:45:01.167256 containerd[1546]: 2025-05-08 00:45:01.135 [INFO][4618] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.45dad554dcdc41757d56a3d73ff812b51dad4a60444537ae96d367a1860f78ad" host="localhost" May 8 00:45:01.167256 containerd[1546]: 2025-05-08 00:45:01.140 [INFO][4618] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.45dad554dcdc41757d56a3d73ff812b51dad4a60444537ae96d367a1860f78ad" host="localhost" May 8 00:45:01.167256 containerd[1546]: 2025-05-08 00:45:01.140 [INFO][4618] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.45dad554dcdc41757d56a3d73ff812b51dad4a60444537ae96d367a1860f78ad" host="localhost" May 8 00:45:01.167256 containerd[1546]: 2025-05-08 00:45:01.140 [INFO][4618] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
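[Annotation] The [4618] entries above trace one complete Calico IPAM assignment under the host-wide lock: look up this host's block affinities, confirm the affinity for 192.168.88.128/26, load the block, claim the first free address (192.168.88.131), create a handle for the container, and write the block back before releasing the lock. A rough in-memory sketch of the claim step, under the stated assumption that the datastore compare-and-swap on the block is omitted and the handle strings are illustrative:

```go
// Minimal sketch of block-affinity IP assignment as traced in the log.
// A real implementation writes the block back with compare-and-swap
// against the datastore revision ("Writing block in order to claim IPs");
// this in-memory version only shows the claim/handle bookkeeping.
package main

import (
	"fmt"
	"net/netip"
)

type block struct {
	cidr      netip.Prefix          // e.g. 192.168.88.128/26, affine to this host
	allocated map[netip.Addr]string // addr -> handle ID
}

// assign claims the first free address in the block and records the handle.
func (b *block) assign(handleID string) (netip.Addr, error) {
	for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
		if _, taken := b.allocated[a]; !taken {
			b.allocated[a] = handleID // "Creating new handle" in the log
			return a, nil
		}
	}
	return netip.Addr{}, fmt.Errorf("block %s exhausted", b.cidr)
}

func main() {
	b := &block{
		cidr: netip.MustParsePrefix("192.168.88.128/26"),
		allocated: map[netip.Addr]string{ // earlier occupants, handles illustrative
			netip.MustParseAddr("192.168.88.128"): "reserved",
			netip.MustParseAddr("192.168.88.129"): "k8s-pod-network.…",
			netip.MustParseAddr("192.168.88.130"): "k8s-pod-network.…",
		},
	}
	addr, err := b.assign("k8s-pod-network.45dad554…")
	fmt.Println(addr, err) // 192.168.88.131 <nil>, matching the log
}
```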
May 8 00:45:01.167256 containerd[1546]: 2025-05-08 00:45:01.140 [INFO][4618] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="45dad554dcdc41757d56a3d73ff812b51dad4a60444537ae96d367a1860f78ad" HandleID="k8s-pod-network.45dad554dcdc41757d56a3d73ff812b51dad4a60444537ae96d367a1860f78ad" Workload="localhost-k8s-coredns--7db6d8ff4d--d7mds-eth0" May 8 00:45:01.167860 containerd[1546]: 2025-05-08 00:45:01.143 [INFO][4590] cni-plugin/k8s.go 386: Populated endpoint ContainerID="45dad554dcdc41757d56a3d73ff812b51dad4a60444537ae96d367a1860f78ad" Namespace="kube-system" Pod="coredns-7db6d8ff4d-d7mds" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--d7mds-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--d7mds-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d8549c49-1445-45ca-b3dc-fb39b68b2e91", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 44, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-d7mds", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali83062af07fe", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:45:01.167860 containerd[1546]: 2025-05-08 00:45:01.143 [INFO][4590] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="45dad554dcdc41757d56a3d73ff812b51dad4a60444537ae96d367a1860f78ad" Namespace="kube-system" Pod="coredns-7db6d8ff4d-d7mds" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--d7mds-eth0" May 8 00:45:01.167860 containerd[1546]: 2025-05-08 00:45:01.143 [INFO][4590] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali83062af07fe ContainerID="45dad554dcdc41757d56a3d73ff812b51dad4a60444537ae96d367a1860f78ad" Namespace="kube-system" Pod="coredns-7db6d8ff4d-d7mds" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--d7mds-eth0" May 8 00:45:01.167860 containerd[1546]: 2025-05-08 00:45:01.150 [INFO][4590] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="45dad554dcdc41757d56a3d73ff812b51dad4a60444537ae96d367a1860f78ad" Namespace="kube-system" Pod="coredns-7db6d8ff4d-d7mds" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--d7mds-eth0" May 8 00:45:01.167860 containerd[1546]: 2025-05-08 00:45:01.153 [INFO][4590] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="45dad554dcdc41757d56a3d73ff812b51dad4a60444537ae96d367a1860f78ad" Namespace="kube-system" Pod="coredns-7db6d8ff4d-d7mds" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--d7mds-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--d7mds-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d8549c49-1445-45ca-b3dc-fb39b68b2e91", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 44, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"45dad554dcdc41757d56a3d73ff812b51dad4a60444537ae96d367a1860f78ad", Pod:"coredns-7db6d8ff4d-d7mds", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali83062af07fe", MAC:"86:9e:c9:2f:b6:43", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:45:01.167860 containerd[1546]: 2025-05-08 00:45:01.162 [INFO][4590] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="45dad554dcdc41757d56a3d73ff812b51dad4a60444537ae96d367a1860f78ad" Namespace="kube-system" Pod="coredns-7db6d8ff4d-d7mds" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--d7mds-eth0" May 8 00:45:01.190783 systemd-networkd[1231]: calie2b1cbda0f8: Link UP May 8 00:45:01.194176 containerd[1546]: time="2025-05-08T00:45:01.194102228Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:45:01.194176 containerd[1546]: time="2025-05-08T00:45:01.194148747Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:45:01.194176 containerd[1546]: time="2025-05-08T00:45:01.194159067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:45:01.194878 systemd-networkd[1231]: calie2b1cbda0f8: Gained carrier May 8 00:45:01.195859 containerd[1546]: time="2025-05-08T00:45:01.195615739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:45:01.206504 containerd[1546]: 2025-05-08 00:45:01.077 [INFO][4601] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--rsrbh-eth0 coredns-7db6d8ff4d- kube-system 997a723a-9d1c-45c2-917e-79affe9ca191 892 0 2025-05-08 00:44:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-rsrbh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie2b1cbda0f8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="e40ec2c55197d17a2a907d93c8d810cfa37962436f93443acd33c00d913116da" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rsrbh" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rsrbh-" May 8 00:45:01.206504 containerd[1546]: 2025-05-08 00:45:01.077 [INFO][4601] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e40ec2c55197d17a2a907d93c8d810cfa37962436f93443acd33c00d913116da" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rsrbh" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rsrbh-eth0" May 8 00:45:01.206504 containerd[1546]: 2025-05-08 00:45:01.113 [INFO][4624] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e40ec2c55197d17a2a907d93c8d810cfa37962436f93443acd33c00d913116da" HandleID="k8s-pod-network.e40ec2c55197d17a2a907d93c8d810cfa37962436f93443acd33c00d913116da" Workload="localhost-k8s-coredns--7db6d8ff4d--rsrbh-eth0" May 8 00:45:01.206504 containerd[1546]: 2025-05-08 00:45:01.123 [INFO][4624] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e40ec2c55197d17a2a907d93c8d810cfa37962436f93443acd33c00d913116da" HandleID="k8s-pod-network.e40ec2c55197d17a2a907d93c8d810cfa37962436f93443acd33c00d913116da" Workload="localhost-k8s-coredns--7db6d8ff4d--rsrbh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fd1c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-rsrbh", "timestamp":"2025-05-08 00:45:01.113322659 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:45:01.206504 containerd[1546]: 2025-05-08 00:45:01.123 [INFO][4624] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:45:01.206504 containerd[1546]: 2025-05-08 00:45:01.140 [INFO][4624] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
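[Annotation] Note the timestamps: request [4624] reports "About to acquire host-wide IPAM lock" at 00:45:01.123 but only acquires it at 00:45:01.140, the same instant [4618] releases it, so the two concurrent coredns assignments are serialized rather than racing on the shared block. An in-process stand-in for that serialization (real CNI invocations are separate processes and would need a node-level file lock; a sync.Mutex is used here purely for illustration):

```go
// Illustration of the host-wide IPAM lock serializing two concurrent
// pod setups, as the interleaved [4618]/[4624] timestamps show above.
package main

import (
	"fmt"
	"sync"
)

var hostWideIPAM sync.Mutex // one lock per node, shared by every invocation

// assignWithLock brackets an assignment with the lock, mirroring the
// "About to acquire" / "Acquired" / "Released" lines in the log.
func assignWithLock(pod string, assign func() string) {
	hostWideIPAM.Lock()         // "Acquired host-wide IPAM lock."
	defer hostWideIPAM.Unlock() // "Released host-wide IPAM lock."
	fmt.Printf("%s -> %s\n", pod, assign()) // block load/claim/write happens here
}

func main() {
	var wg sync.WaitGroup
	for _, pod := range []string{"coredns-7db6d8ff4d-d7mds", "coredns-7db6d8ff4d-rsrbh"} {
		pod := pod
		wg.Add(1)
		go func() {
			defer wg.Done()
			assignWithLock(pod, func() string { return "next free addr in 192.168.88.128/26" })
		}()
	}
	wg.Wait()
}
```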
May 8 00:45:01.206504 containerd[1546]: 2025-05-08 00:45:01.140 [INFO][4624] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:45:01.206504 containerd[1546]: 2025-05-08 00:45:01.143 [INFO][4624] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e40ec2c55197d17a2a907d93c8d810cfa37962436f93443acd33c00d913116da" host="localhost" May 8 00:45:01.206504 containerd[1546]: 2025-05-08 00:45:01.147 [INFO][4624] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:45:01.206504 containerd[1546]: 2025-05-08 00:45:01.155 [INFO][4624] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:45:01.206504 containerd[1546]: 2025-05-08 00:45:01.158 [INFO][4624] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:45:01.206504 containerd[1546]: 2025-05-08 00:45:01.160 [INFO][4624] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:45:01.206504 containerd[1546]: 2025-05-08 00:45:01.160 [INFO][4624] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e40ec2c55197d17a2a907d93c8d810cfa37962436f93443acd33c00d913116da" host="localhost" May 8 00:45:01.206504 containerd[1546]: 2025-05-08 00:45:01.164 [INFO][4624] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e40ec2c55197d17a2a907d93c8d810cfa37962436f93443acd33c00d913116da May 8 00:45:01.206504 containerd[1546]: 2025-05-08 00:45:01.171 [INFO][4624] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e40ec2c55197d17a2a907d93c8d810cfa37962436f93443acd33c00d913116da" host="localhost" May 8 00:45:01.206504 containerd[1546]: 2025-05-08 00:45:01.179 [INFO][4624] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.e40ec2c55197d17a2a907d93c8d810cfa37962436f93443acd33c00d913116da" host="localhost" May 8 00:45:01.206504 containerd[1546]: 2025-05-08 00:45:01.179 [INFO][4624] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.e40ec2c55197d17a2a907d93c8d810cfa37962436f93443acd33c00d913116da" host="localhost" May 8 00:45:01.206504 containerd[1546]: 2025-05-08 00:45:01.179 [INFO][4624] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
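[Annotation] Both coredns pods land in the same affine block because a /26 leaves plenty of room: 192.168.88.128/26 spans 192.168.88.128 through 192.168.88.191, 64 addresses in all, and the allocator hands them out in order, .131 then .132 here. A quick worked check of that range using only the standard library:

```go
// Worked example: the span of the affine block 192.168.88.128/26.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	p := netip.MustParsePrefix("192.168.88.128/26")
	first := p.Addr()
	last := first
	for a := first; p.Contains(a); a = a.Next() {
		last = a
	}
	// host bits = 32 - 26 = 6, so 2^6 = 64 addresses per block
	fmt.Printf("block %s: %s .. %s (%d addrs)\n", p, first, last, 1<<(32-p.Bits()))
	// block 192.168.88.128/26: 192.168.88.128 .. 192.168.88.191 (64 addrs)
}
```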
May 8 00:45:01.206504 containerd[1546]: 2025-05-08 00:45:01.179 [INFO][4624] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="e40ec2c55197d17a2a907d93c8d810cfa37962436f93443acd33c00d913116da" HandleID="k8s-pod-network.e40ec2c55197d17a2a907d93c8d810cfa37962436f93443acd33c00d913116da" Workload="localhost-k8s-coredns--7db6d8ff4d--rsrbh-eth0" May 8 00:45:01.207120 containerd[1546]: 2025-05-08 00:45:01.184 [INFO][4601] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e40ec2c55197d17a2a907d93c8d810cfa37962436f93443acd33c00d913116da" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rsrbh" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rsrbh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--rsrbh-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"997a723a-9d1c-45c2-917e-79affe9ca191", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 44, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-rsrbh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie2b1cbda0f8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:45:01.207120 containerd[1546]: 2025-05-08 00:45:01.185 [INFO][4601] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="e40ec2c55197d17a2a907d93c8d810cfa37962436f93443acd33c00d913116da" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rsrbh" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rsrbh-eth0" May 8 00:45:01.207120 containerd[1546]: 2025-05-08 00:45:01.185 [INFO][4601] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie2b1cbda0f8 ContainerID="e40ec2c55197d17a2a907d93c8d810cfa37962436f93443acd33c00d913116da" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rsrbh" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rsrbh-eth0" May 8 00:45:01.207120 containerd[1546]: 2025-05-08 00:45:01.194 [INFO][4601] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e40ec2c55197d17a2a907d93c8d810cfa37962436f93443acd33c00d913116da" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rsrbh" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rsrbh-eth0" May 8 00:45:01.207120 containerd[1546]: 2025-05-08 00:45:01.194 [INFO][4601] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e40ec2c55197d17a2a907d93c8d810cfa37962436f93443acd33c00d913116da" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rsrbh" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rsrbh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--rsrbh-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"997a723a-9d1c-45c2-917e-79affe9ca191", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 44, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e40ec2c55197d17a2a907d93c8d810cfa37962436f93443acd33c00d913116da", Pod:"coredns-7db6d8ff4d-rsrbh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie2b1cbda0f8", MAC:"66:77:71:3d:40:41", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:45:01.207120 containerd[1546]: 2025-05-08 00:45:01.203 [INFO][4601] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e40ec2c55197d17a2a907d93c8d810cfa37962436f93443acd33c00d913116da" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rsrbh" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rsrbh-eth0" May 8 00:45:01.225170 systemd-resolved[1438]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:45:01.234119 containerd[1546]: time="2025-05-08T00:45:01.233213439Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:45:01.234119 containerd[1546]: time="2025-05-08T00:45:01.233270559Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:45:01.234119 containerd[1546]: time="2025-05-08T00:45:01.233287959Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:45:01.234119 containerd[1546]: time="2025-05-08T00:45:01.233379638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:45:01.247498 containerd[1546]: time="2025-05-08T00:45:01.247181278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-d7mds,Uid:d8549c49-1445-45ca-b3dc-fb39b68b2e91,Namespace:kube-system,Attempt:1,} returns sandbox id \"45dad554dcdc41757d56a3d73ff812b51dad4a60444537ae96d367a1860f78ad\"" May 8 00:45:01.248143 kubelet[2720]: E0508 00:45:01.247828 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:45:01.250879 containerd[1546]: time="2025-05-08T00:45:01.250778137Z" level=info msg="CreateContainer within sandbox \"45dad554dcdc41757d56a3d73ff812b51dad4a60444537ae96d367a1860f78ad\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:45:01.273409 systemd-resolved[1438]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:45:01.293428 containerd[1546]: time="2025-05-08T00:45:01.293391888Z" level=info msg="CreateContainer within sandbox \"45dad554dcdc41757d56a3d73ff812b51dad4a60444537ae96d367a1860f78ad\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4f2e2410b46da0b4d4b1c4c53aefa63fe57cbda6099802eaf1a01fd7f7f78808\"" May 8 00:45:01.293743 containerd[1546]: time="2025-05-08T00:45:01.293724766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rsrbh,Uid:997a723a-9d1c-45c2-917e-79affe9ca191,Namespace:kube-system,Attempt:1,} returns sandbox id \"e40ec2c55197d17a2a907d93c8d810cfa37962436f93443acd33c00d913116da\"" May 8 00:45:01.295141 containerd[1546]: time="2025-05-08T00:45:01.295118278Z" level=info msg="StartContainer for \"4f2e2410b46da0b4d4b1c4c53aefa63fe57cbda6099802eaf1a01fd7f7f78808\"" May 8 00:45:01.295199 kubelet[2720]: E0508 00:45:01.295146 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:45:01.298193 containerd[1546]: time="2025-05-08T00:45:01.298165540Z" level=info msg="CreateContainer within sandbox \"e40ec2c55197d17a2a907d93c8d810cfa37962436f93443acd33c00d913116da\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:45:01.309603 containerd[1546]: time="2025-05-08T00:45:01.309560354Z" level=info msg="CreateContainer within sandbox \"e40ec2c55197d17a2a907d93c8d810cfa37962436f93443acd33c00d913116da\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"eb240fc50bf270b91fbb68606b8c0ee117ef86cd18aa9e8c9edb5d04c3d58bc4\"" May 8 00:45:01.310247 containerd[1546]: time="2025-05-08T00:45:01.310105830Z" level=info msg="StartContainer for \"eb240fc50bf270b91fbb68606b8c0ee117ef86cd18aa9e8c9edb5d04c3d58bc4\"" May 8 00:45:01.355143 containerd[1546]: time="2025-05-08T00:45:01.355104248Z" level=info msg="StartContainer for \"4f2e2410b46da0b4d4b1c4c53aefa63fe57cbda6099802eaf1a01fd7f7f78808\" returns successfully" May 8 00:45:01.404178 containerd[1546]: time="2025-05-08T00:45:01.403638604Z" level=info msg="StartContainer for \"eb240fc50bf270b91fbb68606b8c0ee117ef86cd18aa9e8c9edb5d04c3d58bc4\" returns successfully" May 8 00:45:01.488506 containerd[1546]: time="2025-05-08T00:45:01.488444189Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:45:01.489265 containerd[1546]: 
time="2025-05-08T00:45:01.489216105Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" May 8 00:45:01.490472 containerd[1546]: time="2025-05-08T00:45:01.490353498Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:45:01.493691 containerd[1546]: time="2025-05-08T00:45:01.493649439Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:45:01.494255 containerd[1546]: time="2025-05-08T00:45:01.494220835Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 1.194774191s" May 8 00:45:01.494418 containerd[1546]: time="2025-05-08T00:45:01.494347075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" May 8 00:45:01.497697 containerd[1546]: time="2025-05-08T00:45:01.497662175Z" level=info msg="CreateContainer within sandbox \"8dd2d74101bec4b8b46a30227229a55e3c7f70655db91c1c7c817de6006fe4a5\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 8 00:45:01.541849 containerd[1546]: time="2025-05-08T00:45:01.541706478Z" level=info msg="CreateContainer within sandbox \"8dd2d74101bec4b8b46a30227229a55e3c7f70655db91c1c7c817de6006fe4a5\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"4d49fe7029ed51d4f7037b61baae95e349b5453fce7427085ccce7fda6aca1c6\"" May 8 00:45:01.546996 containerd[1546]: time="2025-05-08T00:45:01.546927448Z" level=info msg="StartContainer for \"4d49fe7029ed51d4f7037b61baae95e349b5453fce7427085ccce7fda6aca1c6\"" May 8 00:45:01.639822 containerd[1546]: time="2025-05-08T00:45:01.639785025Z" level=info msg="StartContainer for \"4d49fe7029ed51d4f7037b61baae95e349b5453fce7427085ccce7fda6aca1c6\" returns successfully" May 8 00:45:01.649408 containerd[1546]: time="2025-05-08T00:45:01.649162011Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 8 00:45:01.656474 systemd-networkd[1231]: cali61a5a46cbe6: Gained IPv6LL May 8 00:45:01.888707 containerd[1546]: time="2025-05-08T00:45:01.888430333Z" level=info msg="StopPodSandbox for \"52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c\"" May 8 00:45:01.971288 containerd[1546]: 2025-05-08 00:45:01.936 [INFO][4889] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c" May 8 00:45:01.971288 containerd[1546]: 2025-05-08 00:45:01.936 [INFO][4889] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c" iface="eth0" netns="/var/run/netns/cni-6dfdb547-4f65-4485-0285-901c1e5ab077" May 8 00:45:01.971288 containerd[1546]: 2025-05-08 00:45:01.936 [INFO][4889] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c" iface="eth0" netns="/var/run/netns/cni-6dfdb547-4f65-4485-0285-901c1e5ab077" May 8 00:45:01.971288 containerd[1546]: 2025-05-08 00:45:01.936 [INFO][4889] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c" iface="eth0" netns="/var/run/netns/cni-6dfdb547-4f65-4485-0285-901c1e5ab077" May 8 00:45:01.971288 containerd[1546]: 2025-05-08 00:45:01.936 [INFO][4889] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c" May 8 00:45:01.971288 containerd[1546]: 2025-05-08 00:45:01.936 [INFO][4889] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c" May 8 00:45:01.971288 containerd[1546]: 2025-05-08 00:45:01.958 [INFO][4897] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c" HandleID="k8s-pod-network.52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c" Workload="localhost-k8s-calico--kube--controllers--657c88d8bb--lsvlt-eth0" May 8 00:45:01.971288 containerd[1546]: 2025-05-08 00:45:01.958 [INFO][4897] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:45:01.971288 containerd[1546]: 2025-05-08 00:45:01.958 [INFO][4897] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:45:01.971288 containerd[1546]: 2025-05-08 00:45:01.966 [WARNING][4897] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c" HandleID="k8s-pod-network.52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c" Workload="localhost-k8s-calico--kube--controllers--657c88d8bb--lsvlt-eth0" May 8 00:45:01.971288 containerd[1546]: 2025-05-08 00:45:01.966 [INFO][4897] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c" HandleID="k8s-pod-network.52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c" Workload="localhost-k8s-calico--kube--controllers--657c88d8bb--lsvlt-eth0" May 8 00:45:01.971288 containerd[1546]: 2025-05-08 00:45:01.967 [INFO][4897] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:45:01.971288 containerd[1546]: 2025-05-08 00:45:01.969 [INFO][4889] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c" May 8 00:45:01.971944 containerd[1546]: time="2025-05-08T00:45:01.971827046Z" level=info msg="TearDown network for sandbox \"52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c\" successfully" May 8 00:45:01.971944 containerd[1546]: time="2025-05-08T00:45:01.971858326Z" level=info msg="StopPodSandbox for \"52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c\" returns successfully" May 8 00:45:01.973730 containerd[1546]: time="2025-05-08T00:45:01.973695315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-657c88d8bb-lsvlt,Uid:b66dddbc-0758-478a-8371-6dfe5c35d89f,Namespace:calico-system,Attempt:1,}" May 8 00:45:01.974807 systemd[1]: run-netns-cni\x2d6dfdb547\x2d4f65\x2d4485\x2d0285\x2d901c1e5ab077.mount: Deactivated successfully. 
May 8 00:45:02.087318 systemd-networkd[1231]: calia9d33364fae: Link UP May 8 00:45:02.088169 systemd-networkd[1231]: calia9d33364fae: Gained carrier May 8 00:45:02.102000 kubelet[2720]: E0508 00:45:02.101203 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:45:02.109385 kubelet[2720]: E0508 00:45:02.108254 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:45:02.110654 containerd[1546]: 2025-05-08 00:45:02.022 [INFO][4906] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--657c88d8bb--lsvlt-eth0 calico-kube-controllers-657c88d8bb- calico-system b66dddbc-0758-478a-8371-6dfe5c35d89f 916 0 2025-05-08 00:44:39 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:657c88d8bb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-657c88d8bb-lsvlt eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia9d33364fae [] []}} ContainerID="e5172026546364e1e561a13bc0890616f86d786ff9d559c05f07c49ea5f667be" Namespace="calico-system" Pod="calico-kube-controllers-657c88d8bb-lsvlt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--657c88d8bb--lsvlt-" May 8 00:45:02.110654 containerd[1546]: 2025-05-08 00:45:02.022 [INFO][4906] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e5172026546364e1e561a13bc0890616f86d786ff9d559c05f07c49ea5f667be" Namespace="calico-system" Pod="calico-kube-controllers-657c88d8bb-lsvlt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--657c88d8bb--lsvlt-eth0" May 8 00:45:02.110654 containerd[1546]: 2025-05-08 00:45:02.048 [INFO][4919] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e5172026546364e1e561a13bc0890616f86d786ff9d559c05f07c49ea5f667be" HandleID="k8s-pod-network.e5172026546364e1e561a13bc0890616f86d786ff9d559c05f07c49ea5f667be" Workload="localhost-k8s-calico--kube--controllers--657c88d8bb--lsvlt-eth0" May 8 00:45:02.110654 containerd[1546]: 2025-05-08 00:45:02.058 [INFO][4919] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e5172026546364e1e561a13bc0890616f86d786ff9d559c05f07c49ea5f667be" HandleID="k8s-pod-network.e5172026546364e1e561a13bc0890616f86d786ff9d559c05f07c49ea5f667be" Workload="localhost-k8s-calico--kube--controllers--657c88d8bb--lsvlt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000295300), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-657c88d8bb-lsvlt", "timestamp":"2025-05-08 00:45:02.048399247 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:45:02.110654 containerd[1546]: 2025-05-08 00:45:02.058 [INFO][4919] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:45:02.110654 containerd[1546]: 2025-05-08 00:45:02.058 [INFO][4919] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:45:02.110654 containerd[1546]: 2025-05-08 00:45:02.058 [INFO][4919] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:45:02.110654 containerd[1546]: 2025-05-08 00:45:02.060 [INFO][4919] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e5172026546364e1e561a13bc0890616f86d786ff9d559c05f07c49ea5f667be" host="localhost" May 8 00:45:02.110654 containerd[1546]: 2025-05-08 00:45:02.063 [INFO][4919] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:45:02.110654 containerd[1546]: 2025-05-08 00:45:02.067 [INFO][4919] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:45:02.110654 containerd[1546]: 2025-05-08 00:45:02.069 [INFO][4919] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:45:02.110654 containerd[1546]: 2025-05-08 00:45:02.071 [INFO][4919] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:45:02.110654 containerd[1546]: 2025-05-08 00:45:02.071 [INFO][4919] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e5172026546364e1e561a13bc0890616f86d786ff9d559c05f07c49ea5f667be" host="localhost" May 8 00:45:02.110654 containerd[1546]: 2025-05-08 00:45:02.072 [INFO][4919] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e5172026546364e1e561a13bc0890616f86d786ff9d559c05f07c49ea5f667be May 8 00:45:02.110654 containerd[1546]: 2025-05-08 00:45:02.076 [INFO][4919] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e5172026546364e1e561a13bc0890616f86d786ff9d559c05f07c49ea5f667be" host="localhost" May 8 00:45:02.110654 containerd[1546]: 2025-05-08 00:45:02.082 [INFO][4919] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.e5172026546364e1e561a13bc0890616f86d786ff9d559c05f07c49ea5f667be" host="localhost" May 8 00:45:02.110654 containerd[1546]: 2025-05-08 00:45:02.082 [INFO][4919] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.e5172026546364e1e561a13bc0890616f86d786ff9d559c05f07c49ea5f667be" host="localhost" May 8 00:45:02.110654 containerd[1546]: 2025-05-08 00:45:02.082 [INFO][4919] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
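[Annotation] The kubelet warning that keeps repeating through this sequence ("Nameserver limits exceeded", dns.go:153) means the node's resolv.conf lists more nameservers than kubelet will copy into pod resolv.conf files: glibc-based resolvers only consult the first three entries (MAXNS), so kubelet truncates the list, keeping 1.1.1.1, 1.0.0.1 and 8.8.8.8 here. A sketch of that truncation; the fourth nameserver in the example input is hypothetical, since the omitted entries never appear in the log:

```go
// Illustrative sketch of the nameserver cap behind the repeated
// "Nameserver limits exceeded" kubelet warning: resolvers built on
// glibc only use the first three entries, so extras are dropped.
package main

import "fmt"

const maxNameservers = 3 // glibc's MAXNS

func applyNameserverLimit(ns []string) (applied, omitted []string) {
	if len(ns) <= maxNameservers {
		return ns, nil
	}
	return ns[:maxNameservers], ns[maxNameservers:]
}

func main() {
	applied, omitted := applyNameserverLimit(
		[]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}) // last entry hypothetical
	fmt.Println("applied:", applied, "omitted:", omitted)
}
```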
May 8 00:45:02.110654 containerd[1546]: 2025-05-08 00:45:02.082 [INFO][4919] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="e5172026546364e1e561a13bc0890616f86d786ff9d559c05f07c49ea5f667be" HandleID="k8s-pod-network.e5172026546364e1e561a13bc0890616f86d786ff9d559c05f07c49ea5f667be" Workload="localhost-k8s-calico--kube--controllers--657c88d8bb--lsvlt-eth0" May 8 00:45:02.111442 containerd[1546]: 2025-05-08 00:45:02.084 [INFO][4906] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e5172026546364e1e561a13bc0890616f86d786ff9d559c05f07c49ea5f667be" Namespace="calico-system" Pod="calico-kube-controllers-657c88d8bb-lsvlt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--657c88d8bb--lsvlt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--657c88d8bb--lsvlt-eth0", GenerateName:"calico-kube-controllers-657c88d8bb-", Namespace:"calico-system", SelfLink:"", UID:"b66dddbc-0758-478a-8371-6dfe5c35d89f", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 44, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"657c88d8bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-657c88d8bb-lsvlt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia9d33364fae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:45:02.111442 containerd[1546]: 2025-05-08 00:45:02.085 [INFO][4906] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="e5172026546364e1e561a13bc0890616f86d786ff9d559c05f07c49ea5f667be" Namespace="calico-system" Pod="calico-kube-controllers-657c88d8bb-lsvlt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--657c88d8bb--lsvlt-eth0" May 8 00:45:02.111442 containerd[1546]: 2025-05-08 00:45:02.085 [INFO][4906] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia9d33364fae ContainerID="e5172026546364e1e561a13bc0890616f86d786ff9d559c05f07c49ea5f667be" Namespace="calico-system" Pod="calico-kube-controllers-657c88d8bb-lsvlt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--657c88d8bb--lsvlt-eth0" May 8 00:45:02.111442 containerd[1546]: 2025-05-08 00:45:02.088 [INFO][4906] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e5172026546364e1e561a13bc0890616f86d786ff9d559c05f07c49ea5f667be" Namespace="calico-system" Pod="calico-kube-controllers-657c88d8bb-lsvlt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--657c88d8bb--lsvlt-eth0" May 8 00:45:02.111442 containerd[1546]: 2025-05-08 00:45:02.089 [INFO][4906] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="e5172026546364e1e561a13bc0890616f86d786ff9d559c05f07c49ea5f667be" Namespace="calico-system" Pod="calico-kube-controllers-657c88d8bb-lsvlt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--657c88d8bb--lsvlt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--657c88d8bb--lsvlt-eth0", GenerateName:"calico-kube-controllers-657c88d8bb-", Namespace:"calico-system", SelfLink:"", UID:"b66dddbc-0758-478a-8371-6dfe5c35d89f", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 44, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"657c88d8bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e5172026546364e1e561a13bc0890616f86d786ff9d559c05f07c49ea5f667be", Pod:"calico-kube-controllers-657c88d8bb-lsvlt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia9d33364fae", MAC:"c6:e6:1f:c2:88:c3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:45:02.111442 containerd[1546]: 2025-05-08 00:45:02.098 [INFO][4906] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e5172026546364e1e561a13bc0890616f86d786ff9d559c05f07c49ea5f667be" Namespace="calico-system" Pod="calico-kube-controllers-657c88d8bb-lsvlt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--657c88d8bb--lsvlt-eth0" May 8 00:45:02.116445 kubelet[2720]: I0508 00:45:02.116383 2720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-rsrbh" podStartSLOduration=30.11634426 podStartE2EDuration="30.11634426s" podCreationTimestamp="2025-05-08 00:44:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:45:02.115931383 +0000 UTC m=+44.307510936" watchObservedRunningTime="2025-05-08 00:45:02.11634426 +0000 UTC m=+44.307923773" May 8 00:45:02.134967 kubelet[2720]: I0508 00:45:02.134716 2720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-d7mds" podStartSLOduration=30.134698436 podStartE2EDuration="30.134698436s" podCreationTimestamp="2025-05-08 00:44:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:45:02.129616145 +0000 UTC m=+44.321195738" watchObservedRunningTime="2025-05-08 00:45:02.134698436 +0000 UTC m=+44.326277949" May 8 00:45:02.161192 containerd[1546]: time="2025-05-08T00:45:02.146292890Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:45:02.161192 containerd[1546]: time="2025-05-08T00:45:02.146350770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:45:02.161192 containerd[1546]: time="2025-05-08T00:45:02.146373850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:45:02.161192 containerd[1546]: time="2025-05-08T00:45:02.148338799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:45:02.179313 systemd-resolved[1438]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 8 00:45:02.203858 containerd[1546]: time="2025-05-08T00:45:02.203820363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-657c88d8bb-lsvlt,Uid:b66dddbc-0758-478a-8371-6dfe5c35d89f,Namespace:calico-system,Attempt:1,} returns sandbox id \"e5172026546364e1e561a13bc0890616f86d786ff9d559c05f07c49ea5f667be\""
May 8 00:45:02.230811 systemd-networkd[1231]: calie2b1cbda0f8: Gained IPv6LL
May 8 00:45:02.295434 systemd-networkd[1231]: cali83062af07fe: Gained IPv6LL
May 8 00:45:02.722506 containerd[1546]: time="2025-05-08T00:45:02.722448295Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:45:02.723545 containerd[1546]: time="2025-05-08T00:45:02.723516569Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299"
May 8 00:45:02.724114 containerd[1546]: time="2025-05-08T00:45:02.724090286Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:45:02.726618 containerd[1546]: time="2025-05-08T00:45:02.726584032Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:45:02.727703 containerd[1546]: time="2025-05-08T00:45:02.727668386Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 1.078466376s"
May 8 00:45:02.727750 containerd[1546]: time="2025-05-08T00:45:02.727703545Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\""
May 8 00:45:02.728581 containerd[1546]: time="2025-05-08T00:45:02.728549301Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\""
May 8 00:45:02.730687 containerd[1546]: time="2025-05-08T00:45:02.729676974Z" level=info msg="CreateContainer within sandbox \"8dd2d74101bec4b8b46a30227229a55e3c7f70655db91c1c7c817de6006fe4a5\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
May 8 00:45:02.739148 containerd[1546]: time="2025-05-08T00:45:02.739114080Z" level=info msg="CreateContainer within sandbox \"8dd2d74101bec4b8b46a30227229a55e3c7f70655db91c1c7c817de6006fe4a5\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f42fb33084bc7faa4f36e0655f9bd25c111974cc2ad347653930f1ac25e6b655\""
May 8 00:45:02.741477 containerd[1546]: time="2025-05-08T00:45:02.739535598Z" level=info msg="StartContainer for \"f42fb33084bc7faa4f36e0655f9bd25c111974cc2ad347653930f1ac25e6b655\""
May 8 00:45:02.786395 containerd[1546]: time="2025-05-08T00:45:02.785854695Z" level=info msg="StartContainer for \"f42fb33084bc7faa4f36e0655f9bd25c111974cc2ad347653930f1ac25e6b655\" returns successfully"
May 8 00:45:02.888577 containerd[1546]: time="2025-05-08T00:45:02.888473911Z" level=info msg="StopPodSandbox for \"e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52\""
May 8 00:45:02.977575 containerd[1546]: 2025-05-08 00:45:02.939 [INFO][5042] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52"
May 8 00:45:02.977575 containerd[1546]: 2025-05-08 00:45:02.939 [INFO][5042] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52" iface="eth0" netns="/var/run/netns/cni-6dcab13e-e155-f295-b96f-3e0afb5cdc28"
May 8 00:45:02.977575 containerd[1546]: 2025-05-08 00:45:02.940 [INFO][5042] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52" iface="eth0" netns="/var/run/netns/cni-6dcab13e-e155-f295-b96f-3e0afb5cdc28"
May 8 00:45:02.977575 containerd[1546]: 2025-05-08 00:45:02.940 [INFO][5042] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52" iface="eth0" netns="/var/run/netns/cni-6dcab13e-e155-f295-b96f-3e0afb5cdc28"
May 8 00:45:02.977575 containerd[1546]: 2025-05-08 00:45:02.940 [INFO][5042] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52"
May 8 00:45:02.977575 containerd[1546]: 2025-05-08 00:45:02.940 [INFO][5042] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52"
May 8 00:45:02.977575 containerd[1546]: 2025-05-08 00:45:02.960 [INFO][5051] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52" HandleID="k8s-pod-network.e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52" Workload="localhost-k8s-calico--apiserver--79cbdfc48f--gn4vl-eth0"
May 8 00:45:02.977575 containerd[1546]: 2025-05-08 00:45:02.960 [INFO][5051] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 8 00:45:02.977575 containerd[1546]: 2025-05-08 00:45:02.960 [INFO][5051] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 8 00:45:02.977575 containerd[1546]: 2025-05-08 00:45:02.969 [WARNING][5051] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52" HandleID="k8s-pod-network.e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52" Workload="localhost-k8s-calico--apiserver--79cbdfc48f--gn4vl-eth0"
May 8 00:45:02.977575 containerd[1546]: 2025-05-08 00:45:02.969 [INFO][5051] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52" HandleID="k8s-pod-network.e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52" Workload="localhost-k8s-calico--apiserver--79cbdfc48f--gn4vl-eth0"
May 8 00:45:02.977575 containerd[1546]: 2025-05-08 00:45:02.970 [INFO][5051] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 8 00:45:02.977575 containerd[1546]: 2025-05-08 00:45:02.975 [INFO][5042] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52"
May 8 00:45:02.980827 containerd[1546]: time="2025-05-08T00:45:02.977687244Z" level=info msg="TearDown network for sandbox \"e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52\" successfully"
May 8 00:45:02.980827 containerd[1546]: time="2025-05-08T00:45:02.977716684Z" level=info msg="StopPodSandbox for \"e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52\" returns successfully"
May 8 00:45:02.980464 systemd[1]: run-netns-cni\x2d6dcab13e\x2de155\x2df295\x2db96f\x2d3e0afb5cdc28.mount: Deactivated successfully.
May 8 00:45:02.981696 containerd[1546]: time="2025-05-08T00:45:02.981660222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79cbdfc48f-gn4vl,Uid:94c7629a-4f26-4590-8950-ecb623581f56,Namespace:calico-apiserver,Attempt:1,}"
May 8 00:45:02.988137 kubelet[2720]: I0508 00:45:02.988098 2720 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
May 8 00:45:02.998290 kubelet[2720]: I0508 00:45:02.998263 2720 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
May 8 00:45:03.101661 systemd-networkd[1231]: cali3cc5c4107e4: Link UP
May 8 00:45:03.102482 systemd-networkd[1231]: cali3cc5c4107e4: Gained carrier
May 8 00:45:03.117109 kubelet[2720]: E0508 00:45:03.117063 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:03.119585 kubelet[2720]: E0508 00:45:03.119547 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:03.122840 containerd[1546]: 2025-05-08 00:45:03.034 [INFO][5058] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--79cbdfc48f--gn4vl-eth0 calico-apiserver-79cbdfc48f- calico-apiserver 94c7629a-4f26-4590-8950-ecb623581f56 944 0 2025-05-08 00:44:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:79cbdfc48f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-79cbdfc48f-gn4vl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3cc5c4107e4 [] []}} ContainerID="59b13fe18930c70b82eb18087d6343e09234ffd05d3b3f3a2cd63ef7f80d9c32" Namespace="calico-apiserver" Pod="calico-apiserver-79cbdfc48f-gn4vl" WorkloadEndpoint="localhost-k8s-calico--apiserver--79cbdfc48f--gn4vl-"
May 8 00:45:03.122840 containerd[1546]: 2025-05-08 00:45:03.034 [INFO][5058] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="59b13fe18930c70b82eb18087d6343e09234ffd05d3b3f3a2cd63ef7f80d9c32" Namespace="calico-apiserver" Pod="calico-apiserver-79cbdfc48f-gn4vl" WorkloadEndpoint="localhost-k8s-calico--apiserver--79cbdfc48f--gn4vl-eth0"
May 8 00:45:03.122840 containerd[1546]: 2025-05-08 00:45:03.062 [INFO][5075] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="59b13fe18930c70b82eb18087d6343e09234ffd05d3b3f3a2cd63ef7f80d9c32" HandleID="k8s-pod-network.59b13fe18930c70b82eb18087d6343e09234ffd05d3b3f3a2cd63ef7f80d9c32" Workload="localhost-k8s-calico--apiserver--79cbdfc48f--gn4vl-eth0"
May 8 00:45:03.122840 containerd[1546]: 2025-05-08 00:45:03.075 [INFO][5075] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="59b13fe18930c70b82eb18087d6343e09234ffd05d3b3f3a2cd63ef7f80d9c32" HandleID="k8s-pod-network.59b13fe18930c70b82eb18087d6343e09234ffd05d3b3f3a2cd63ef7f80d9c32" Workload="localhost-k8s-calico--apiserver--79cbdfc48f--gn4vl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004e9040), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-79cbdfc48f-gn4vl", "timestamp":"2025-05-08 00:45:03.062024214 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
May 8 00:45:03.122840 containerd[1546]: 2025-05-08 00:45:03.075 [INFO][5075] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 8 00:45:03.122840 containerd[1546]: 2025-05-08 00:45:03.075 [INFO][5075] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 8 00:45:03.122840 containerd[1546]: 2025-05-08 00:45:03.075 [INFO][5075] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
May 8 00:45:03.122840 containerd[1546]: 2025-05-08 00:45:03.076 [INFO][5075] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.59b13fe18930c70b82eb18087d6343e09234ffd05d3b3f3a2cd63ef7f80d9c32" host="localhost"
May 8 00:45:03.122840 containerd[1546]: 2025-05-08 00:45:03.080 [INFO][5075] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
May 8 00:45:03.122840 containerd[1546]: 2025-05-08 00:45:03.083 [INFO][5075] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
May 8 00:45:03.122840 containerd[1546]: 2025-05-08 00:45:03.085 [INFO][5075] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
May 8 00:45:03.122840 containerd[1546]: 2025-05-08 00:45:03.087 [INFO][5075] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
May 8 00:45:03.122840 containerd[1546]: 2025-05-08 00:45:03.087 [INFO][5075] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.59b13fe18930c70b82eb18087d6343e09234ffd05d3b3f3a2cd63ef7f80d9c32" host="localhost"
May 8 00:45:03.122840 containerd[1546]: 2025-05-08 00:45:03.088 [INFO][5075] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.59b13fe18930c70b82eb18087d6343e09234ffd05d3b3f3a2cd63ef7f80d9c32
May 8 00:45:03.122840 containerd[1546]: 2025-05-08 00:45:03.092 [INFO][5075] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.59b13fe18930c70b82eb18087d6343e09234ffd05d3b3f3a2cd63ef7f80d9c32" host="localhost"
May 8 00:45:03.122840 containerd[1546]: 2025-05-08 00:45:03.097 [INFO][5075] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.59b13fe18930c70b82eb18087d6343e09234ffd05d3b3f3a2cd63ef7f80d9c32" host="localhost"
May 8 00:45:03.122840 containerd[1546]: 2025-05-08 00:45:03.097 [INFO][5075] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.59b13fe18930c70b82eb18087d6343e09234ffd05d3b3f3a2cd63ef7f80d9c32" host="localhost"
May 8 00:45:03.122840 containerd[1546]: 2025-05-08 00:45:03.097 [INFO][5075] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 8 00:45:03.122840 containerd[1546]: 2025-05-08 00:45:03.097 [INFO][5075] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="59b13fe18930c70b82eb18087d6343e09234ffd05d3b3f3a2cd63ef7f80d9c32" HandleID="k8s-pod-network.59b13fe18930c70b82eb18087d6343e09234ffd05d3b3f3a2cd63ef7f80d9c32" Workload="localhost-k8s-calico--apiserver--79cbdfc48f--gn4vl-eth0"
May 8 00:45:03.125954 containerd[1546]: 2025-05-08 00:45:03.099 [INFO][5058] cni-plugin/k8s.go 386: Populated endpoint ContainerID="59b13fe18930c70b82eb18087d6343e09234ffd05d3b3f3a2cd63ef7f80d9c32" Namespace="calico-apiserver" Pod="calico-apiserver-79cbdfc48f-gn4vl" WorkloadEndpoint="localhost-k8s-calico--apiserver--79cbdfc48f--gn4vl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79cbdfc48f--gn4vl-eth0", GenerateName:"calico-apiserver-79cbdfc48f-", Namespace:"calico-apiserver", SelfLink:"", UID:"94c7629a-4f26-4590-8950-ecb623581f56", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 44, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79cbdfc48f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-79cbdfc48f-gn4vl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3cc5c4107e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 8 00:45:03.125954 containerd[1546]: 2025-05-08 00:45:03.099 [INFO][5058] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="59b13fe18930c70b82eb18087d6343e09234ffd05d3b3f3a2cd63ef7f80d9c32" Namespace="calico-apiserver" Pod="calico-apiserver-79cbdfc48f-gn4vl" WorkloadEndpoint="localhost-k8s-calico--apiserver--79cbdfc48f--gn4vl-eth0"
May 8 00:45:03.125954 containerd[1546]: 2025-05-08 00:45:03.099 [INFO][5058] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3cc5c4107e4 ContainerID="59b13fe18930c70b82eb18087d6343e09234ffd05d3b3f3a2cd63ef7f80d9c32" Namespace="calico-apiserver" Pod="calico-apiserver-79cbdfc48f-gn4vl" WorkloadEndpoint="localhost-k8s-calico--apiserver--79cbdfc48f--gn4vl-eth0"
May 8 00:45:03.125954 containerd[1546]: 2025-05-08 00:45:03.102 [INFO][5058] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="59b13fe18930c70b82eb18087d6343e09234ffd05d3b3f3a2cd63ef7f80d9c32" Namespace="calico-apiserver" Pod="calico-apiserver-79cbdfc48f-gn4vl" WorkloadEndpoint="localhost-k8s-calico--apiserver--79cbdfc48f--gn4vl-eth0"
May 8 00:45:03.125954 containerd[1546]: 2025-05-08 00:45:03.102 [INFO][5058] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="59b13fe18930c70b82eb18087d6343e09234ffd05d3b3f3a2cd63ef7f80d9c32" Namespace="calico-apiserver" Pod="calico-apiserver-79cbdfc48f-gn4vl" WorkloadEndpoint="localhost-k8s-calico--apiserver--79cbdfc48f--gn4vl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79cbdfc48f--gn4vl-eth0", GenerateName:"calico-apiserver-79cbdfc48f-", Namespace:"calico-apiserver", SelfLink:"", UID:"94c7629a-4f26-4590-8950-ecb623581f56", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 44, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79cbdfc48f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"59b13fe18930c70b82eb18087d6343e09234ffd05d3b3f3a2cd63ef7f80d9c32", Pod:"calico-apiserver-79cbdfc48f-gn4vl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3cc5c4107e4", MAC:"46:19:af:93:32:f0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 8 00:45:03.125954 containerd[1546]: 2025-05-08 00:45:03.115 [INFO][5058] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="59b13fe18930c70b82eb18087d6343e09234ffd05d3b3f3a2cd63ef7f80d9c32" Namespace="calico-apiserver" Pod="calico-apiserver-79cbdfc48f-gn4vl" WorkloadEndpoint="localhost-k8s-calico--apiserver--79cbdfc48f--gn4vl-eth0"
May 8 00:45:03.127421 kubelet[2720]: I0508 00:45:03.125205 2720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-ljmwb" podStartSLOduration=22.696053009 podStartE2EDuration="25.125189665s" podCreationTimestamp="2025-05-08 00:44:38 +0000 UTC" firstStartedPulling="2025-05-08 00:45:00.299129566 +0000 UTC m=+42.490709079" lastFinishedPulling="2025-05-08 00:45:02.728266222 +0000 UTC m=+44.919845735" observedRunningTime="2025-05-08 00:45:03.124831347 +0000 UTC m=+45.316410860" watchObservedRunningTime="2025-05-08 00:45:03.125189665 +0000 UTC m=+45.316769178"
May 8 00:45:03.147504 containerd[1546]: time="2025-05-08T00:45:03.147420702Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:45:03.147504 containerd[1546]: time="2025-05-08T00:45:03.147480501Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:45:03.147504 containerd[1546]: time="2025-05-08T00:45:03.147491901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:45:03.147693 containerd[1546]: time="2025-05-08T00:45:03.147589341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:45:03.168991 systemd-resolved[1438]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 8 00:45:03.196462 containerd[1546]: time="2025-05-08T00:45:03.196412950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79cbdfc48f-gn4vl,Uid:94c7629a-4f26-4590-8950-ecb623581f56,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"59b13fe18930c70b82eb18087d6343e09234ffd05d3b3f3a2cd63ef7f80d9c32\""
May 8 00:45:03.200700 containerd[1546]: time="2025-05-08T00:45:03.200433728Z" level=info msg="CreateContainer within sandbox \"59b13fe18930c70b82eb18087d6343e09234ffd05d3b3f3a2cd63ef7f80d9c32\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
May 8 00:45:03.209702 containerd[1546]: time="2025-05-08T00:45:03.209653837Z" level=info msg="CreateContainer within sandbox \"59b13fe18930c70b82eb18087d6343e09234ffd05d3b3f3a2cd63ef7f80d9c32\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a60085e1dcec874a3f0ca682beaa893a3777b77f42775cfe3d36fe6bf093d79a\""
May 8 00:45:03.210178 containerd[1546]: time="2025-05-08T00:45:03.210124075Z" level=info msg="StartContainer for \"a60085e1dcec874a3f0ca682beaa893a3777b77f42775cfe3d36fe6bf093d79a\""
May 8 00:45:03.258881 containerd[1546]: time="2025-05-08T00:45:03.258668246Z" level=info msg="StartContainer for \"a60085e1dcec874a3f0ca682beaa893a3777b77f42775cfe3d36fe6bf093d79a\" returns successfully"
May 8 00:45:03.959610 systemd-networkd[1231]: calia9d33364fae: Gained IPv6LL
May 8 00:45:04.127475 kubelet[2720]: E0508 00:45:04.124896 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:04.129661 kubelet[2720]: E0508 00:45:04.129636 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:04.149531 kubelet[2720]: I0508 00:45:04.149466 2720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-79cbdfc48f-gn4vl" podStartSLOduration=27.149440858 podStartE2EDuration="27.149440858s" podCreationTimestamp="2025-05-08 00:44:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:45:04.14726231 +0000 UTC m=+46.338841823" watchObservedRunningTime="2025-05-08 00:45:04.149440858 +0000 UTC m=+46.341020371"
May 8 00:45:04.151548 systemd[1]: Started sshd@12-10.0.0.155:22-10.0.0.1:53046.service - OpenSSH per-connection server daemon (10.0.0.1:53046).
May 8 00:45:04.200828 sshd[5184]: Accepted publickey for core from 10.0.0.1 port 53046 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE
May 8 00:45:04.202804 sshd[5184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:45:04.208524 systemd-logind[1522]: New session 13 of user core.
May 8 00:45:04.215793 systemd[1]: Started session-13.scope - Session 13 of User core.
May 8 00:45:04.342709 systemd-networkd[1231]: cali3cc5c4107e4: Gained IPv6LL
May 8 00:45:04.426467 containerd[1546]: time="2025-05-08T00:45:04.426415165Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:45:04.427407 containerd[1546]: time="2025-05-08T00:45:04.427252161Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116"
May 8 00:45:04.428500 containerd[1546]: time="2025-05-08T00:45:04.428085436Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:45:04.432612 containerd[1546]: time="2025-05-08T00:45:04.432568212Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:45:04.433282 containerd[1546]: time="2025-05-08T00:45:04.433246049Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 1.704653109s"
May 8 00:45:04.433315 containerd[1546]: time="2025-05-08T00:45:04.433282368Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\""
May 8 00:45:04.448202 containerd[1546]: time="2025-05-08T00:45:04.448150648Z" level=info msg="CreateContainer within sandbox \"e5172026546364e1e561a13bc0890616f86d786ff9d559c05f07c49ea5f667be\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
May 8 00:45:04.464593 containerd[1546]: time="2025-05-08T00:45:04.464556640Z" level=info msg="CreateContainer within sandbox \"e5172026546364e1e561a13bc0890616f86d786ff9d559c05f07c49ea5f667be\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"9267bd872d9a1ffcb5d04aea0698abc8947890cd35ee187e641d9bf0515bf5d7\""
May 8 00:45:04.465510 containerd[1546]: time="2025-05-08T00:45:04.465173477Z" level=info msg="StartContainer for \"9267bd872d9a1ffcb5d04aea0698abc8947890cd35ee187e641d9bf0515bf5d7\""
May 8 00:45:04.482659 sshd[5184]: pam_unix(sshd:session): session closed for user core
May 8 00:45:04.487574 systemd-logind[1522]: Session 13 logged out. Waiting for processes to exit.
May 8 00:45:04.487839 systemd[1]: sshd@12-10.0.0.155:22-10.0.0.1:53046.service: Deactivated successfully.
May 8 00:45:04.491961 systemd[1]: session-13.scope: Deactivated successfully.
May 8 00:45:04.496226 systemd-logind[1522]: Removed session 13.
May 8 00:45:04.523521 containerd[1546]: time="2025-05-08T00:45:04.523480882Z" level=info msg="StartContainer for \"9267bd872d9a1ffcb5d04aea0698abc8947890cd35ee187e641d9bf0515bf5d7\" returns successfully"
May 8 00:45:05.127569 kubelet[2720]: I0508 00:45:05.127524 2720 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 8 00:45:05.139064 kubelet[2720]: I0508 00:45:05.138714 2720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-657c88d8bb-lsvlt" podStartSLOduration=23.909219021 podStartE2EDuration="26.138696427s" podCreationTimestamp="2025-05-08 00:44:39 +0000 UTC" firstStartedPulling="2025-05-08 00:45:02.205078636 +0000 UTC m=+44.396658149" lastFinishedPulling="2025-05-08 00:45:04.434556042 +0000 UTC m=+46.626135555" observedRunningTime="2025-05-08 00:45:05.137787311 +0000 UTC m=+47.329366824" watchObservedRunningTime="2025-05-08 00:45:05.138696427 +0000 UTC m=+47.330275940"
May 8 00:45:06.129759 kubelet[2720]: I0508 00:45:06.129719 2720 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 8 00:45:07.777341 kubelet[2720]: I0508 00:45:07.777291 2720 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 8 00:45:09.492692 systemd[1]: Started sshd@13-10.0.0.155:22-10.0.0.1:53062.service - OpenSSH per-connection server daemon (10.0.0.1:53062).
May 8 00:45:09.528357 sshd[5280]: Accepted publickey for core from 10.0.0.1 port 53062 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE
May 8 00:45:09.547495 sshd[5280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:45:09.551334 systemd-logind[1522]: New session 14 of user core.
May 8 00:45:09.559852 systemd[1]: Started session-14.scope - Session 14 of User core.
May 8 00:45:09.787654 sshd[5280]: pam_unix(sshd:session): session closed for user core
May 8 00:45:09.790912 systemd[1]: sshd@13-10.0.0.155:22-10.0.0.1:53062.service: Deactivated successfully.
May 8 00:45:09.793816 systemd-logind[1522]: Session 14 logged out. Waiting for processes to exit.
May 8 00:45:09.793979 systemd[1]: session-14.scope: Deactivated successfully.
May 8 00:45:09.794874 systemd-logind[1522]: Removed session 14.
May 8 00:45:10.879082 kubelet[2720]: E0508 00:45:10.879054 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:14.800754 systemd[1]: Started sshd@14-10.0.0.155:22-10.0.0.1:60128.service - OpenSSH per-connection server daemon (10.0.0.1:60128).
May 8 00:45:14.861719 sshd[5327]: Accepted publickey for core from 10.0.0.1 port 60128 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE
May 8 00:45:14.864664 sshd[5327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:45:14.871196 systemd-logind[1522]: New session 15 of user core.
May 8 00:45:14.886846 systemd[1]: Started session-15.scope - Session 15 of User core.
May 8 00:45:15.078031 sshd[5327]: pam_unix(sshd:session): session closed for user core
May 8 00:45:15.083362 systemd[1]: sshd@14-10.0.0.155:22-10.0.0.1:60128.service: Deactivated successfully.
May 8 00:45:15.085357 systemd-logind[1522]: Session 15 logged out. Waiting for processes to exit.
May 8 00:45:15.085394 systemd[1]: session-15.scope: Deactivated successfully.
May 8 00:45:15.088753 systemd-logind[1522]: Removed session 15.
May 8 00:45:17.644630 kubelet[2720]: I0508 00:45:17.644571 2720 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 8 00:45:17.882700 containerd[1546]: time="2025-05-08T00:45:17.882662985Z" level=info msg="StopPodSandbox for \"109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e\""
May 8 00:45:17.979976 containerd[1546]: 2025-05-08 00:45:17.928 [WARNING][5360] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--rsrbh-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"997a723a-9d1c-45c2-917e-79affe9ca191", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 44, 32, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e40ec2c55197d17a2a907d93c8d810cfa37962436f93443acd33c00d913116da", Pod:"coredns-7db6d8ff4d-rsrbh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie2b1cbda0f8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
May 8 00:45:17.979976 containerd[1546]: 2025-05-08 00:45:17.929 [INFO][5360] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e"
May 8 00:45:17.979976 containerd[1546]: 2025-05-08 00:45:17.929 [INFO][5360] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e" iface="eth0" netns=""
May 8 00:45:17.979976 containerd[1546]: 2025-05-08 00:45:17.929 [INFO][5360] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e"
May 8 00:45:17.979976 containerd[1546]: 2025-05-08 00:45:17.929 [INFO][5360] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e"
May 8 00:45:17.979976 containerd[1546]: 2025-05-08 00:45:17.957 [INFO][5371] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e" HandleID="k8s-pod-network.109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e" Workload="localhost-k8s-coredns--7db6d8ff4d--rsrbh-eth0"
May 8 00:45:17.979976 containerd[1546]: 2025-05-08 00:45:17.957 [INFO][5371] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 8 00:45:17.979976 containerd[1546]: 2025-05-08 00:45:17.957 [INFO][5371] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 8 00:45:17.979976 containerd[1546]: 2025-05-08 00:45:17.966 [WARNING][5371] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e" HandleID="k8s-pod-network.109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e" Workload="localhost-k8s-coredns--7db6d8ff4d--rsrbh-eth0"
May 8 00:45:17.979976 containerd[1546]: 2025-05-08 00:45:17.966 [INFO][5371] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e" HandleID="k8s-pod-network.109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e" Workload="localhost-k8s-coredns--7db6d8ff4d--rsrbh-eth0"
May 8 00:45:17.979976 containerd[1546]: 2025-05-08 00:45:17.969 [INFO][5371] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 8 00:45:17.979976 containerd[1546]: 2025-05-08 00:45:17.975 [INFO][5360] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e"
May 8 00:45:17.979976 containerd[1546]: time="2025-05-08T00:45:17.979636650Z" level=info msg="TearDown network for sandbox \"109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e\" successfully"
May 8 00:45:17.979976 containerd[1546]: time="2025-05-08T00:45:17.979659290Z" level=info msg="StopPodSandbox for \"109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e\" returns successfully"
May 8 00:45:17.982873 containerd[1546]: time="2025-05-08T00:45:17.982420799Z" level=info msg="RemovePodSandbox for \"109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e\""
May 8 00:45:17.993733 containerd[1546]: time="2025-05-08T00:45:17.993183598Z" level=info msg="Forcibly stopping sandbox \"109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e\""
May 8 00:45:18.089961 containerd[1546]: 2025-05-08 00:45:18.056 [WARNING][5392] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--rsrbh-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"997a723a-9d1c-45c2-917e-79affe9ca191", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 44, 32, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e40ec2c55197d17a2a907d93c8d810cfa37962436f93443acd33c00d913116da", Pod:"coredns-7db6d8ff4d-rsrbh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie2b1cbda0f8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
May 8 00:45:18.089961 containerd[1546]: 2025-05-08 00:45:18.057 [INFO][5392] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e"
May 8 00:45:18.089961 containerd[1546]: 2025-05-08 00:45:18.057 [INFO][5392] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e" iface="eth0" netns=""
May 8 00:45:18.089961 containerd[1546]: 2025-05-08 00:45:18.057 [INFO][5392] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e"
May 8 00:45:18.089961 containerd[1546]: 2025-05-08 00:45:18.057 [INFO][5392] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e"
May 8 00:45:18.089961 containerd[1546]: 2025-05-08 00:45:18.076 [INFO][5401] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e" HandleID="k8s-pod-network.109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e" Workload="localhost-k8s-coredns--7db6d8ff4d--rsrbh-eth0"
May 8 00:45:18.089961 containerd[1546]: 2025-05-08 00:45:18.076 [INFO][5401] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 8 00:45:18.089961 containerd[1546]: 2025-05-08 00:45:18.076 [INFO][5401] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 8 00:45:18.089961 containerd[1546]: 2025-05-08 00:45:18.085 [WARNING][5401] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e" HandleID="k8s-pod-network.109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e" Workload="localhost-k8s-coredns--7db6d8ff4d--rsrbh-eth0"
May 8 00:45:18.089961 containerd[1546]: 2025-05-08 00:45:18.085 [INFO][5401] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e" HandleID="k8s-pod-network.109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e" Workload="localhost-k8s-coredns--7db6d8ff4d--rsrbh-eth0"
May 8 00:45:18.089961 containerd[1546]: 2025-05-08 00:45:18.086 [INFO][5401] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 8 00:45:18.089961 containerd[1546]: 2025-05-08 00:45:18.088 [INFO][5392] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e"
May 8 00:45:18.090912 containerd[1546]: time="2025-05-08T00:45:18.090440350Z" level=info msg="TearDown network for sandbox \"109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e\" successfully"
May 8 00:45:18.096418 containerd[1546]: time="2025-05-08T00:45:18.096385928Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 8 00:45:18.096601 containerd[1546]: time="2025-05-08T00:45:18.096582967Z" level=info msg="RemovePodSandbox \"109dba82423112f5cb64b579c47a22ea103bb7157cab902d5ea26adedc02613e\" returns successfully"
May 8 00:45:18.097138 containerd[1546]: time="2025-05-08T00:45:18.097118085Z" level=info msg="StopPodSandbox for \"cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb\""
May 8 00:45:18.165849 containerd[1546]: 2025-05-08 00:45:18.133 [WARNING][5424] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ljmwb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2de83a08-9bcf-4ba2-8674-79ec949e7e5f", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 44, 38, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8dd2d74101bec4b8b46a30227229a55e3c7f70655db91c1c7c817de6006fe4a5", Pod:"csi-node-driver-ljmwb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali61a5a46cbe6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 8 00:45:18.165849 containerd[1546]: 2025-05-08 00:45:18.133 [INFO][5424] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb"
May 8 00:45:18.165849 containerd[1546]: 2025-05-08 00:45:18.133 [INFO][5424] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb" iface="eth0" netns=""
May 8 00:45:18.165849 containerd[1546]: 2025-05-08 00:45:18.133 [INFO][5424] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb"
May 8 00:45:18.165849 containerd[1546]: 2025-05-08 00:45:18.133 [INFO][5424] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb"
May 8 00:45:18.165849 containerd[1546]: 2025-05-08 00:45:18.153 [INFO][5433] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb" HandleID="k8s-pod-network.cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb" Workload="localhost-k8s-csi--node--driver--ljmwb-eth0"
May 8 00:45:18.165849 containerd[1546]: 2025-05-08 00:45:18.153 [INFO][5433] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 8 00:45:18.165849 containerd[1546]: 2025-05-08 00:45:18.153 [INFO][5433] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 8 00:45:18.165849 containerd[1546]: 2025-05-08 00:45:18.161 [WARNING][5433] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb" HandleID="k8s-pod-network.cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb" Workload="localhost-k8s-csi--node--driver--ljmwb-eth0"
May 8 00:45:18.165849 containerd[1546]: 2025-05-08 00:45:18.161 [INFO][5433] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb" HandleID="k8s-pod-network.cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb" Workload="localhost-k8s-csi--node--driver--ljmwb-eth0"
May 8 00:45:18.165849 containerd[1546]: 2025-05-08 00:45:18.162 [INFO][5433] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 8 00:45:18.165849 containerd[1546]: 2025-05-08 00:45:18.163 [INFO][5424] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb"
May 8 00:45:18.165849 containerd[1546]: time="2025-05-08T00:45:18.165756186Z" level=info msg="TearDown network for sandbox \"cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb\" successfully"
May 8 00:45:18.165849 containerd[1546]: time="2025-05-08T00:45:18.165777186Z" level=info msg="StopPodSandbox for \"cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb\" returns successfully"
May 8 00:45:18.166568 containerd[1546]: time="2025-05-08T00:45:18.166267945Z" level=info msg="RemovePodSandbox for \"cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb\""
May 8 00:45:18.166568 containerd[1546]: time="2025-05-08T00:45:18.166294464Z" level=info msg="Forcibly stopping sandbox \"cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb\""
May 8 00:45:18.245500 containerd[1546]: 2025-05-08 00:45:18.206 [WARNING][5455] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ljmwb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2de83a08-9bcf-4ba2-8674-79ec949e7e5f", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 44, 38, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8dd2d74101bec4b8b46a30227229a55e3c7f70655db91c1c7c817de6006fe4a5", Pod:"csi-node-driver-ljmwb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali61a5a46cbe6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 8 00:45:18.245500 containerd[1546]: 2025-05-08 00:45:18.207 [INFO][5455] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb"
May 8 00:45:18.245500 containerd[1546]: 2025-05-08 00:45:18.207 [INFO][5455] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb" iface="eth0" netns=""
May 8 00:45:18.245500 containerd[1546]: 2025-05-08 00:45:18.207 [INFO][5455] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb"
May 8 00:45:18.245500 containerd[1546]: 2025-05-08 00:45:18.207 [INFO][5455] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb"
May 8 00:45:18.245500 containerd[1546]: 2025-05-08 00:45:18.229 [INFO][5464] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb" HandleID="k8s-pod-network.cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb" Workload="localhost-k8s-csi--node--driver--ljmwb-eth0"
May 8 00:45:18.245500 containerd[1546]: 2025-05-08 00:45:18.229 [INFO][5464] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 8 00:45:18.245500 containerd[1546]: 2025-05-08 00:45:18.229 [INFO][5464] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 8 00:45:18.245500 containerd[1546]: 2025-05-08 00:45:18.239 [WARNING][5464] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb" HandleID="k8s-pod-network.cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb" Workload="localhost-k8s-csi--node--driver--ljmwb-eth0"
May 8 00:45:18.245500 containerd[1546]: 2025-05-08 00:45:18.240 [INFO][5464] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb" HandleID="k8s-pod-network.cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb" Workload="localhost-k8s-csi--node--driver--ljmwb-eth0"
May 8 00:45:18.245500 containerd[1546]: 2025-05-08 00:45:18.241 [INFO][5464] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 8 00:45:18.245500 containerd[1546]: 2025-05-08 00:45:18.242 [INFO][5455] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb"
May 8 00:45:18.245500 containerd[1546]: time="2025-05-08T00:45:18.244622409Z" level=info msg="TearDown network for sandbox \"cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb\" successfully"
May 8 00:45:18.247559 containerd[1546]: time="2025-05-08T00:45:18.247530478Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 8 00:45:18.247613 containerd[1546]: time="2025-05-08T00:45:18.247587238Z" level=info msg="RemovePodSandbox \"cad6ce28e69d71ef195e5939e02aeed56717a5e9b4ddb5b5bd54d74591c098cb\" returns successfully"
May 8 00:45:18.248054 containerd[1546]: time="2025-05-08T00:45:18.248029436Z" level=info msg="StopPodSandbox for \"520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace\""
May 8 00:45:18.315084 containerd[1546]: 2025-05-08 00:45:18.283 [WARNING][5487] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79cbdfc48f--87wj8-eth0", GenerateName:"calico-apiserver-79cbdfc48f-", Namespace:"calico-apiserver", SelfLink:"", UID:"1270cb28-66ea-435f-99a8-a51538f5c1d9", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 44, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79cbdfc48f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9ec2e7ab6833e59362d8b6042b9bad2a29c1a7f0fbf060fddd9b12d53d9bec38", Pod:"calico-apiserver-79cbdfc48f-87wj8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali77276882db2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 8 00:45:18.315084 containerd[1546]: 2025-05-08 00:45:18.283 [INFO][5487] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace"
May 8 00:45:18.315084 containerd[1546]: 2025-05-08 00:45:18.283 [INFO][5487] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace" iface="eth0" netns=""
May 8 00:45:18.315084 containerd[1546]: 2025-05-08 00:45:18.283 [INFO][5487] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace"
May 8 00:45:18.315084 containerd[1546]: 2025-05-08 00:45:18.283 [INFO][5487] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace"
May 8 00:45:18.315084 containerd[1546]: 2025-05-08 00:45:18.302 [INFO][5495] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace" HandleID="k8s-pod-network.520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace" Workload="localhost-k8s-calico--apiserver--79cbdfc48f--87wj8-eth0"
May 8 00:45:18.315084 containerd[1546]: 2025-05-08 00:45:18.302 [INFO][5495] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 8 00:45:18.315084 containerd[1546]: 2025-05-08 00:45:18.302 [INFO][5495] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 8 00:45:18.315084 containerd[1546]: 2025-05-08 00:45:18.310 [WARNING][5495] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace" HandleID="k8s-pod-network.520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace" Workload="localhost-k8s-calico--apiserver--79cbdfc48f--87wj8-eth0"
May 8 00:45:18.315084 containerd[1546]: 2025-05-08 00:45:18.310 [INFO][5495] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace" HandleID="k8s-pod-network.520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace" Workload="localhost-k8s-calico--apiserver--79cbdfc48f--87wj8-eth0"
May 8 00:45:18.315084 containerd[1546]: 2025-05-08 00:45:18.311 [INFO][5495] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 8 00:45:18.315084 containerd[1546]: 2025-05-08 00:45:18.313 [INFO][5487] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace"
May 8 00:45:18.315498 containerd[1546]: time="2025-05-08T00:45:18.315130423Z" level=info msg="TearDown network for sandbox \"520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace\" successfully"
May 8 00:45:18.315498 containerd[1546]: time="2025-05-08T00:45:18.315156103Z" level=info msg="StopPodSandbox for \"520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace\" returns successfully"
May 8 00:45:18.315943 containerd[1546]: time="2025-05-08T00:45:18.315687501Z" level=info msg="RemovePodSandbox for \"520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace\""
May 8 00:45:18.315943 containerd[1546]: time="2025-05-08T00:45:18.315721781Z" level=info msg="Forcibly stopping sandbox \"520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace\""
May 8 00:45:18.395073 containerd[1546]: 2025-05-08 00:45:18.360 [WARNING][5518] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79cbdfc48f--87wj8-eth0", GenerateName:"calico-apiserver-79cbdfc48f-", Namespace:"calico-apiserver", SelfLink:"", UID:"1270cb28-66ea-435f-99a8-a51538f5c1d9", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 44, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79cbdfc48f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9ec2e7ab6833e59362d8b6042b9bad2a29c1a7f0fbf060fddd9b12d53d9bec38", Pod:"calico-apiserver-79cbdfc48f-87wj8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali77276882db2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 8 00:45:18.395073 containerd[1546]: 2025-05-08 00:45:18.361 [INFO][5518] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace"
May 8 00:45:18.395073 containerd[1546]: 2025-05-08 00:45:18.361 [INFO][5518] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace" iface="eth0" netns=""
May 8 00:45:18.395073 containerd[1546]: 2025-05-08 00:45:18.361 [INFO][5518] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace"
May 8 00:45:18.395073 containerd[1546]: 2025-05-08 00:45:18.361 [INFO][5518] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace"
May 8 00:45:18.395073 containerd[1546]: 2025-05-08 00:45:18.382 [INFO][5528] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace" HandleID="k8s-pod-network.520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace" Workload="localhost-k8s-calico--apiserver--79cbdfc48f--87wj8-eth0"
May 8 00:45:18.395073 containerd[1546]: 2025-05-08 00:45:18.382 [INFO][5528] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 8 00:45:18.395073 containerd[1546]: 2025-05-08 00:45:18.382 [INFO][5528] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 8 00:45:18.395073 containerd[1546]: 2025-05-08 00:45:18.390 [WARNING][5528] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace" HandleID="k8s-pod-network.520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace" Workload="localhost-k8s-calico--apiserver--79cbdfc48f--87wj8-eth0"
May 8 00:45:18.395073 containerd[1546]: 2025-05-08 00:45:18.390 [INFO][5528] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace" HandleID="k8s-pod-network.520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace" Workload="localhost-k8s-calico--apiserver--79cbdfc48f--87wj8-eth0"
May 8 00:45:18.395073 containerd[1546]: 2025-05-08 00:45:18.391 [INFO][5528] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 8 00:45:18.395073 containerd[1546]: 2025-05-08 00:45:18.393 [INFO][5518] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace"
May 8 00:45:18.396213 containerd[1546]: time="2025-05-08T00:45:18.395534880Z" level=info msg="TearDown network for sandbox \"520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace\" successfully"
May 8 00:45:18.398007 containerd[1546]: time="2025-05-08T00:45:18.397971511Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 8 00:45:18.398053 containerd[1546]: time="2025-05-08T00:45:18.398027831Z" level=info msg="RemovePodSandbox \"520ab02e076fcbff22f2a1039ad9f8b5a63baeb9ca23b280b101d72d64711ace\" returns successfully"
May 8 00:45:18.398772 containerd[1546]: time="2025-05-08T00:45:18.398530229Z" level=info msg="StopPodSandbox for \"f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb\""
May 8 00:45:18.472172 containerd[1546]: 2025-05-08 00:45:18.432 [WARNING][5551] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--d7mds-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d8549c49-1445-45ca-b3dc-fb39b68b2e91", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 44, 32, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"45dad554dcdc41757d56a3d73ff812b51dad4a60444537ae96d367a1860f78ad", Pod:"coredns-7db6d8ff4d-d7mds", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali83062af07fe", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
May 8 00:45:18.472172 containerd[1546]: 2025-05-08 00:45:18.433 [INFO][5551] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb"
May 8 00:45:18.472172 containerd[1546]: 2025-05-08 00:45:18.433 [INFO][5551] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb" iface="eth0" netns=""
May 8 00:45:18.472172 containerd[1546]: 2025-05-08 00:45:18.433 [INFO][5551] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb"
May 8 00:45:18.472172 containerd[1546]: 2025-05-08 00:45:18.433 [INFO][5551] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb"
May 8 00:45:18.472172 containerd[1546]: 2025-05-08 00:45:18.455 [INFO][5559] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb" HandleID="k8s-pod-network.f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb" Workload="localhost-k8s-coredns--7db6d8ff4d--d7mds-eth0"
May 8 00:45:18.472172 containerd[1546]: 2025-05-08 00:45:18.456 [INFO][5559] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 8 00:45:18.472172 containerd[1546]: 2025-05-08 00:45:18.456 [INFO][5559] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 8 00:45:18.472172 containerd[1546]: 2025-05-08 00:45:18.467 [WARNING][5559] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb" HandleID="k8s-pod-network.f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb" Workload="localhost-k8s-coredns--7db6d8ff4d--d7mds-eth0"
May 8 00:45:18.472172 containerd[1546]: 2025-05-08 00:45:18.467 [INFO][5559] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb" HandleID="k8s-pod-network.f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb" Workload="localhost-k8s-coredns--7db6d8ff4d--d7mds-eth0"
May 8 00:45:18.472172 containerd[1546]: 2025-05-08 00:45:18.468 [INFO][5559] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 8 00:45:18.472172 containerd[1546]: 2025-05-08 00:45:18.470 [INFO][5551] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb"
May 8 00:45:18.472795 containerd[1546]: time="2025-05-08T00:45:18.472655989Z" level=info msg="TearDown network for sandbox \"f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb\" successfully"
May 8 00:45:18.472795 containerd[1546]: time="2025-05-08T00:45:18.472686989Z" level=info msg="StopPodSandbox for \"f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb\" returns successfully"
May 8 00:45:18.473512 containerd[1546]: time="2025-05-08T00:45:18.473291067Z" level=info msg="RemovePodSandbox for \"f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb\""
May 8 00:45:18.473512 containerd[1546]: time="2025-05-08T00:45:18.473320027Z" level=info msg="Forcibly stopping sandbox \"f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb\""
May 8 00:45:18.543553 containerd[1546]: 2025-05-08 00:45:18.508 [WARNING][5583] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP.
ContainerID="f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--d7mds-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d8549c49-1445-45ca-b3dc-fb39b68b2e91", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 44, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"45dad554dcdc41757d56a3d73ff812b51dad4a60444537ae96d367a1860f78ad", Pod:"coredns-7db6d8ff4d-d7mds", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali83062af07fe", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:45:18.543553 containerd[1546]: 2025-05-08 00:45:18.508 [INFO][5583] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb" May 8 00:45:18.543553 containerd[1546]: 2025-05-08 00:45:18.508 [INFO][5583] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb" iface="eth0" netns="" May 8 00:45:18.543553 containerd[1546]: 2025-05-08 00:45:18.508 [INFO][5583] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb" May 8 00:45:18.543553 containerd[1546]: 2025-05-08 00:45:18.508 [INFO][5583] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb" May 8 00:45:18.543553 containerd[1546]: 2025-05-08 00:45:18.529 [INFO][5592] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb" HandleID="k8s-pod-network.f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb" Workload="localhost-k8s-coredns--7db6d8ff4d--d7mds-eth0" May 8 00:45:18.543553 containerd[1546]: 2025-05-08 00:45:18.529 [INFO][5592] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:45:18.543553 containerd[1546]: 2025-05-08 00:45:18.530 [INFO][5592] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:45:18.543553 containerd[1546]: 2025-05-08 00:45:18.538 [WARNING][5592] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb" HandleID="k8s-pod-network.f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb" Workload="localhost-k8s-coredns--7db6d8ff4d--d7mds-eth0" May 8 00:45:18.543553 containerd[1546]: 2025-05-08 00:45:18.538 [INFO][5592] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb" HandleID="k8s-pod-network.f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb" Workload="localhost-k8s-coredns--7db6d8ff4d--d7mds-eth0" May 8 00:45:18.543553 containerd[1546]: 2025-05-08 00:45:18.539 [INFO][5592] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:45:18.543553 containerd[1546]: 2025-05-08 00:45:18.541 [INFO][5583] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb" May 8 00:45:18.543553 containerd[1546]: time="2025-05-08T00:45:18.543089244Z" level=info msg="TearDown network for sandbox \"f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb\" successfully" May 8 00:45:18.547395 containerd[1546]: time="2025-05-08T00:45:18.547361588Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:45:18.547528 containerd[1546]: time="2025-05-08T00:45:18.547429867Z" level=info msg="RemovePodSandbox \"f361e734652f21032afefce7ffac2480365d098d9c31d66271da023ef63a95bb\" returns successfully" May 8 00:45:18.548015 containerd[1546]: time="2025-05-08T00:45:18.547991265Z" level=info msg="StopPodSandbox for \"e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52\"" May 8 00:45:18.616577 containerd[1546]: 2025-05-08 00:45:18.584 [WARNING][5614] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79cbdfc48f--gn4vl-eth0", GenerateName:"calico-apiserver-79cbdfc48f-", Namespace:"calico-apiserver", SelfLink:"", UID:"94c7629a-4f26-4590-8950-ecb623581f56", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 44, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79cbdfc48f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"59b13fe18930c70b82eb18087d6343e09234ffd05d3b3f3a2cd63ef7f80d9c32", Pod:"calico-apiserver-79cbdfc48f-gn4vl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3cc5c4107e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:45:18.616577 containerd[1546]: 2025-05-08 00:45:18.584 [INFO][5614] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52" May 8 00:45:18.616577 containerd[1546]: 2025-05-08 00:45:18.584 [INFO][5614] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52" iface="eth0" netns="" May 8 00:45:18.616577 containerd[1546]: 2025-05-08 00:45:18.584 [INFO][5614] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52" May 8 00:45:18.616577 containerd[1546]: 2025-05-08 00:45:18.584 [INFO][5614] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52" May 8 00:45:18.616577 containerd[1546]: 2025-05-08 00:45:18.603 [INFO][5623] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52" HandleID="k8s-pod-network.e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52" Workload="localhost-k8s-calico--apiserver--79cbdfc48f--gn4vl-eth0" May 8 00:45:18.616577 containerd[1546]: 2025-05-08 00:45:18.603 [INFO][5623] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:45:18.616577 containerd[1546]: 2025-05-08 00:45:18.603 [INFO][5623] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:45:18.616577 containerd[1546]: 2025-05-08 00:45:18.611 [WARNING][5623] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52" HandleID="k8s-pod-network.e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52" Workload="localhost-k8s-calico--apiserver--79cbdfc48f--gn4vl-eth0" May 8 00:45:18.616577 containerd[1546]: 2025-05-08 00:45:18.611 [INFO][5623] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52" HandleID="k8s-pod-network.e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52" Workload="localhost-k8s-calico--apiserver--79cbdfc48f--gn4vl-eth0" May 8 00:45:18.616577 containerd[1546]: 2025-05-08 00:45:18.612 [INFO][5623] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:45:18.616577 containerd[1546]: 2025-05-08 00:45:18.614 [INFO][5614] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52" May 8 00:45:18.616967 containerd[1546]: time="2025-05-08T00:45:18.616607167Z" level=info msg="TearDown network for sandbox \"e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52\" successfully" May 8 00:45:18.616967 containerd[1546]: time="2025-05-08T00:45:18.616633166Z" level=info msg="StopPodSandbox for \"e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52\" returns successfully" May 8 00:45:18.626589 containerd[1546]: time="2025-05-08T00:45:18.626482329Z" level=info msg="RemovePodSandbox for \"e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52\"" May 8 00:45:18.626589 containerd[1546]: time="2025-05-08T00:45:18.626523329Z" level=info msg="Forcibly stopping sandbox \"e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52\"" May 8 00:45:18.697496 containerd[1546]: 2025-05-08 00:45:18.661 [WARNING][5645] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79cbdfc48f--gn4vl-eth0", GenerateName:"calico-apiserver-79cbdfc48f-", Namespace:"calico-apiserver", SelfLink:"", UID:"94c7629a-4f26-4590-8950-ecb623581f56", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 44, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79cbdfc48f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"59b13fe18930c70b82eb18087d6343e09234ffd05d3b3f3a2cd63ef7f80d9c32", Pod:"calico-apiserver-79cbdfc48f-gn4vl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3cc5c4107e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:45:18.697496 containerd[1546]: 2025-05-08 00:45:18.662 [INFO][5645] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52" May 8 00:45:18.697496 containerd[1546]: 2025-05-08 00:45:18.662 [INFO][5645] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52" iface="eth0" netns="" May 8 00:45:18.697496 containerd[1546]: 2025-05-08 00:45:18.662 [INFO][5645] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52" May 8 00:45:18.697496 containerd[1546]: 2025-05-08 00:45:18.662 [INFO][5645] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52" May 8 00:45:18.697496 containerd[1546]: 2025-05-08 00:45:18.682 [INFO][5653] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52" HandleID="k8s-pod-network.e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52" Workload="localhost-k8s-calico--apiserver--79cbdfc48f--gn4vl-eth0" May 8 00:45:18.697496 containerd[1546]: 2025-05-08 00:45:18.683 [INFO][5653] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:45:18.697496 containerd[1546]: 2025-05-08 00:45:18.683 [INFO][5653] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:45:18.697496 containerd[1546]: 2025-05-08 00:45:18.691 [WARNING][5653] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52" HandleID="k8s-pod-network.e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52" Workload="localhost-k8s-calico--apiserver--79cbdfc48f--gn4vl-eth0" May 8 00:45:18.697496 containerd[1546]: 2025-05-08 00:45:18.691 [INFO][5653] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52" HandleID="k8s-pod-network.e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52" Workload="localhost-k8s-calico--apiserver--79cbdfc48f--gn4vl-eth0" May 8 00:45:18.697496 containerd[1546]: 2025-05-08 00:45:18.693 [INFO][5653] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:45:18.697496 containerd[1546]: 2025-05-08 00:45:18.694 [INFO][5645] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52" May 8 00:45:18.697496 containerd[1546]: time="2025-05-08T00:45:18.696729464Z" level=info msg="TearDown network for sandbox \"e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52\" successfully" May 8 00:45:18.699405 containerd[1546]: time="2025-05-08T00:45:18.699356855Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:45:18.699498 containerd[1546]: time="2025-05-08T00:45:18.699416414Z" level=info msg="RemovePodSandbox \"e8e0ff64c1a79cc970f43685c74a2989f6459fa8cbb1e4113b9e9248d734de52\" returns successfully" May 8 00:45:18.699926 containerd[1546]: time="2025-05-08T00:45:18.699888693Z" level=info msg="StopPodSandbox for \"52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c\"" May 8 00:45:18.777492 containerd[1546]: 2025-05-08 00:45:18.743 [WARNING][5676] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--657c88d8bb--lsvlt-eth0", GenerateName:"calico-kube-controllers-657c88d8bb-", Namespace:"calico-system", SelfLink:"", UID:"b66dddbc-0758-478a-8371-6dfe5c35d89f", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 44, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"657c88d8bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e5172026546364e1e561a13bc0890616f86d786ff9d559c05f07c49ea5f667be", Pod:"calico-kube-controllers-657c88d8bb-lsvlt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia9d33364fae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:45:18.777492 containerd[1546]: 2025-05-08 00:45:18.744 [INFO][5676] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c" May 8 00:45:18.777492 containerd[1546]: 2025-05-08 00:45:18.744 [INFO][5676] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c" iface="eth0" netns="" May 8 00:45:18.777492 containerd[1546]: 2025-05-08 00:45:18.744 [INFO][5676] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c" May 8 00:45:18.777492 containerd[1546]: 2025-05-08 00:45:18.744 [INFO][5676] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c" May 8 00:45:18.777492 containerd[1546]: 2025-05-08 00:45:18.764 [INFO][5685] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c" HandleID="k8s-pod-network.52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c" Workload="localhost-k8s-calico--kube--controllers--657c88d8bb--lsvlt-eth0" May 8 00:45:18.777492 containerd[1546]: 2025-05-08 00:45:18.764 [INFO][5685] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:45:18.777492 containerd[1546]: 2025-05-08 00:45:18.764 [INFO][5685] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:45:18.777492 containerd[1546]: 2025-05-08 00:45:18.771 [WARNING][5685] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c" HandleID="k8s-pod-network.52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c" Workload="localhost-k8s-calico--kube--controllers--657c88d8bb--lsvlt-eth0" May 8 00:45:18.777492 containerd[1546]: 2025-05-08 00:45:18.771 [INFO][5685] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c" HandleID="k8s-pod-network.52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c" Workload="localhost-k8s-calico--kube--controllers--657c88d8bb--lsvlt-eth0" May 8 00:45:18.777492 containerd[1546]: 2025-05-08 00:45:18.774 [INFO][5685] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:45:18.777492 containerd[1546]: 2025-05-08 00:45:18.775 [INFO][5676] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c" May 8 00:45:18.777947 containerd[1546]: time="2025-05-08T00:45:18.777531720Z" level=info msg="TearDown network for sandbox \"52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c\" successfully" May 8 00:45:18.777947 containerd[1546]: time="2025-05-08T00:45:18.777567800Z" level=info msg="StopPodSandbox for \"52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c\" returns successfully" May 8 00:45:18.778208 containerd[1546]: time="2025-05-08T00:45:18.778177277Z" level=info msg="RemovePodSandbox for \"52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c\"" May 8 00:45:18.778242 containerd[1546]: time="2025-05-08T00:45:18.778211797Z" level=info msg="Forcibly stopping sandbox \"52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c\"" May 8 00:45:18.850675 containerd[1546]: 2025-05-08 00:45:18.812 [WARNING][5707] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--657c88d8bb--lsvlt-eth0", GenerateName:"calico-kube-controllers-657c88d8bb-", Namespace:"calico-system", SelfLink:"", UID:"b66dddbc-0758-478a-8371-6dfe5c35d89f", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 44, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"657c88d8bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e5172026546364e1e561a13bc0890616f86d786ff9d559c05f07c49ea5f667be", Pod:"calico-kube-controllers-657c88d8bb-lsvlt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia9d33364fae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:45:18.850675 containerd[1546]: 2025-05-08 00:45:18.813 [INFO][5707] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c" May 8 00:45:18.850675 containerd[1546]: 2025-05-08 00:45:18.813 [INFO][5707] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c" iface="eth0" netns="" May 8 00:45:18.850675 containerd[1546]: 2025-05-08 00:45:18.813 [INFO][5707] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c" May 8 00:45:18.850675 containerd[1546]: 2025-05-08 00:45:18.813 [INFO][5707] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c" May 8 00:45:18.850675 containerd[1546]: 2025-05-08 00:45:18.835 [INFO][5716] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c" HandleID="k8s-pod-network.52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c" Workload="localhost-k8s-calico--kube--controllers--657c88d8bb--lsvlt-eth0" May 8 00:45:18.850675 containerd[1546]: 2025-05-08 00:45:18.835 [INFO][5716] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:45:18.850675 containerd[1546]: 2025-05-08 00:45:18.835 [INFO][5716] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:45:18.850675 containerd[1546]: 2025-05-08 00:45:18.843 [WARNING][5716] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c" HandleID="k8s-pod-network.52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c" Workload="localhost-k8s-calico--kube--controllers--657c88d8bb--lsvlt-eth0" May 8 00:45:18.850675 containerd[1546]: 2025-05-08 00:45:18.843 [INFO][5716] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c" HandleID="k8s-pod-network.52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c" Workload="localhost-k8s-calico--kube--controllers--657c88d8bb--lsvlt-eth0" May 8 00:45:18.850675 containerd[1546]: 2025-05-08 00:45:18.845 [INFO][5716] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:45:18.850675 containerd[1546]: 2025-05-08 00:45:18.846 [INFO][5707] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c" May 8 00:45:18.850675 containerd[1546]: time="2025-05-08T00:45:18.850632844Z" level=info msg="TearDown network for sandbox \"52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c\" successfully" May 8 00:45:18.854119 containerd[1546]: time="2025-05-08T00:45:18.854070311Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:45:18.854199 containerd[1546]: time="2025-05-08T00:45:18.854137431Z" level=info msg="RemovePodSandbox \"52c2e935485a11b80f282ca4fccb12931ebbee349abce5514d36b3642866145c\" returns successfully" May 8 00:45:20.090765 systemd[1]: Started sshd@15-10.0.0.155:22-10.0.0.1:60132.service - OpenSSH per-connection server daemon (10.0.0.1:60132). May 8 00:45:20.128846 sshd[5723]: Accepted publickey for core from 10.0.0.1 port 60132 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:45:20.130361 sshd[5723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:45:20.136521 systemd-logind[1522]: New session 16 of user core. May 8 00:45:20.144690 systemd[1]: Started session-16.scope - Session 16 of User core. May 8 00:45:20.340369 sshd[5723]: pam_unix(sshd:session): session closed for user core May 8 00:45:20.349678 systemd[1]: Started sshd@16-10.0.0.155:22-10.0.0.1:60140.service - OpenSSH per-connection server daemon (10.0.0.1:60140). May 8 00:45:20.350120 systemd[1]: sshd@15-10.0.0.155:22-10.0.0.1:60132.service: Deactivated successfully. May 8 00:45:20.354131 systemd[1]: session-16.scope: Deactivated successfully. May 8 00:45:20.355856 systemd-logind[1522]: Session 16 logged out. Waiting for processes to exit. May 8 00:45:20.356914 systemd-logind[1522]: Removed session 16. May 8 00:45:20.380418 sshd[5736]: Accepted publickey for core from 10.0.0.1 port 60140 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:45:20.382122 sshd[5736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:45:20.385752 systemd-logind[1522]: New session 17 of user core. May 8 00:45:20.396766 systemd[1]: Started session-17.scope - Session 17 of User core. May 8 00:45:20.679875 sshd[5736]: pam_unix(sshd:session): session closed for user core May 8 00:45:20.685698 systemd[1]: Started sshd@17-10.0.0.155:22-10.0.0.1:60144.service - OpenSSH per-connection server daemon (10.0.0.1:60144). 
May 8 00:45:20.686095 systemd[1]: sshd@16-10.0.0.155:22-10.0.0.1:60140.service: Deactivated successfully. May 8 00:45:20.688821 systemd[1]: session-17.scope: Deactivated successfully. May 8 00:45:20.689358 systemd-logind[1522]: Session 17 logged out. Waiting for processes to exit. May 8 00:45:20.690447 systemd-logind[1522]: Removed session 17. May 8 00:45:20.738214 sshd[5748]: Accepted publickey for core from 10.0.0.1 port 60144 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:45:20.740172 sshd[5748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:45:20.749012 systemd-logind[1522]: New session 18 of user core. May 8 00:45:20.754751 systemd[1]: Started session-18.scope - Session 18 of User core. May 8 00:45:22.315555 sshd[5748]: pam_unix(sshd:session): session closed for user core May 8 00:45:22.322760 systemd[1]: Started sshd@18-10.0.0.155:22-10.0.0.1:60160.service - OpenSSH per-connection server daemon (10.0.0.1:60160). May 8 00:45:22.323134 systemd[1]: sshd@17-10.0.0.155:22-10.0.0.1:60144.service: Deactivated successfully. May 8 00:45:22.332730 systemd-logind[1522]: Session 18 logged out. Waiting for processes to exit. May 8 00:45:22.335674 systemd[1]: session-18.scope: Deactivated successfully. May 8 00:45:22.339578 systemd-logind[1522]: Removed session 18. May 8 00:45:22.385135 sshd[5766]: Accepted publickey for core from 10.0.0.1 port 60160 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:45:22.387307 sshd[5766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:45:22.393961 systemd-logind[1522]: New session 19 of user core. May 8 00:45:22.402754 systemd[1]: Started session-19.scope - Session 19 of User core. May 8 00:45:22.696641 sshd[5766]: pam_unix(sshd:session): session closed for user core May 8 00:45:22.709167 systemd[1]: Started sshd@19-10.0.0.155:22-10.0.0.1:55062.service - OpenSSH per-connection server daemon (10.0.0.1:55062). May 8 00:45:22.712870 systemd[1]: sshd@18-10.0.0.155:22-10.0.0.1:60160.service: Deactivated successfully. May 8 00:45:22.714334 systemd[1]: session-19.scope: Deactivated successfully. May 8 00:45:22.718119 systemd-logind[1522]: Session 19 logged out. Waiting for processes to exit. May 8 00:45:22.719445 systemd-logind[1522]: Removed session 19. May 8 00:45:22.747931 sshd[5782]: Accepted publickey for core from 10.0.0.1 port 55062 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:45:22.748433 sshd[5782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:45:22.753553 systemd-logind[1522]: New session 20 of user core. May 8 00:45:22.760827 systemd[1]: Started session-20.scope - Session 20 of User core. May 8 00:45:22.912431 sshd[5782]: pam_unix(sshd:session): session closed for user core May 8 00:45:22.915020 systemd[1]: sshd@19-10.0.0.155:22-10.0.0.1:55062.service: Deactivated successfully. May 8 00:45:22.918244 systemd[1]: session-20.scope: Deactivated successfully. May 8 00:45:22.919561 systemd-logind[1522]: Session 20 logged out. Waiting for processes to exit. May 8 00:45:22.920819 systemd-logind[1522]: Removed session 20. May 8 00:45:27.924712 systemd[1]: Started sshd@20-10.0.0.155:22-10.0.0.1:55074.service - OpenSSH per-connection server daemon (10.0.0.1:55074). 
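[Editor's note] The sshd/systemd entries above show the per-connection pattern on this host: a socket-activated sshd@N-<host>:22-<peer>:<port>.service starts for each connection, pam_unix opens a session for user core, systemd-logind creates session-N.scope, and on disconnect the scope and service are deactivated and the session is removed. As a small illustration (not part of the system), the Go sketch below pairs the "New session N" and "Removed session N" logind lines from such a journal dump to recover session lifetimes; the regular expressions are assumptions about the exact line shapes seen here.

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// Matches systemd-logind lines of the form seen in this journal, e.g.
//   "May 8 00:45:20.136521 systemd-logind[1522]: New session 16 of user core."
//   "May 8 00:45:20.356914 systemd-logind[1522]: Removed session 16."
var (
	newRe     = regexp.MustCompile(`^(\S+ \d+ [\d:.]+) systemd-logind\[\d+\]: New session (\d+) of user (\S+)\.`)
	removedRe = regexp.MustCompile(`^(\S+ \d+ [\d:.]+) systemd-logind\[\d+\]: Removed session (\d+)\.`)
)

func main() {
	// A few lines copied from the journal above, one entry per line.
	journal := `May 8 00:45:20.136521 systemd-logind[1522]: New session 16 of user core.
May 8 00:45:20.356914 systemd-logind[1522]: Removed session 16.
May 8 00:45:20.385752 systemd-logind[1522]: New session 17 of user core.
May 8 00:45:20.690447 systemd-logind[1522]: Removed session 17.`

	opened := map[string]string{} // session number -> open timestamp
	for _, line := range strings.Split(journal, "\n") {
		if m := newRe.FindStringSubmatch(line); m != nil {
			opened[m[2]] = m[1]
			fmt.Printf("session %s opened for %s at %s\n", m[2], m[3], m[1])
		} else if m := removedRe.FindStringSubmatch(line); m != nil {
			fmt.Printf("session %s closed at %s (opened %s)\n", m[2], m[1], opened[m[2]])
			delete(opened, m[2])
		}
	}
}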
May 8 00:45:27.956850 sshd[5824]: Accepted publickey for core from 10.0.0.1 port 55074 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:45:27.958222 sshd[5824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:45:27.963731 systemd-logind[1522]: New session 21 of user core. May 8 00:45:27.972841 systemd[1]: Started session-21.scope - Session 21 of User core. May 8 00:45:28.118112 sshd[5824]: pam_unix(sshd:session): session closed for user core May 8 00:45:28.120994 systemd[1]: sshd@20-10.0.0.155:22-10.0.0.1:55074.service: Deactivated successfully. May 8 00:45:28.126031 systemd-logind[1522]: Session 21 logged out. Waiting for processes to exit. May 8 00:45:28.126048 systemd[1]: session-21.scope: Deactivated successfully. May 8 00:45:28.127865 systemd-logind[1522]: Removed session 21. May 8 00:45:33.132738 systemd[1]: Started sshd@21-10.0.0.155:22-10.0.0.1:59962.service - OpenSSH per-connection server daemon (10.0.0.1:59962). May 8 00:45:33.171575 sshd[5845]: Accepted publickey for core from 10.0.0.1 port 59962 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:45:33.173018 sshd[5845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:45:33.179549 systemd-logind[1522]: New session 22 of user core. May 8 00:45:33.185750 systemd[1]: Started session-22.scope - Session 22 of User core. May 8 00:45:33.341217 sshd[5845]: pam_unix(sshd:session): session closed for user core May 8 00:45:33.345248 systemd[1]: sshd@21-10.0.0.155:22-10.0.0.1:59962.service: Deactivated successfully. May 8 00:45:33.347339 systemd-logind[1522]: Session 22 logged out. Waiting for processes to exit. May 8 00:45:33.347422 systemd[1]: session-22.scope: Deactivated successfully. May 8 00:45:33.348603 systemd-logind[1522]: Removed session 22. May 8 00:45:35.889265 kubelet[2720]: E0508 00:45:35.888850 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:45:37.889094 kubelet[2720]: E0508 00:45:37.889011 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:45:38.358724 systemd[1]: Started sshd@22-10.0.0.155:22-10.0.0.1:59978.service - OpenSSH per-connection server daemon (10.0.0.1:59978). May 8 00:45:38.376936 kubelet[2720]: I0508 00:45:38.376850 2720 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:45:38.388843 sshd[5882]: Accepted publickey for core from 10.0.0.1 port 59978 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:45:38.391435 sshd[5882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:45:38.398661 systemd-logind[1522]: New session 23 of user core. May 8 00:45:38.409659 systemd[1]: Started session-23.scope - Session 23 of User core. May 8 00:45:38.606061 sshd[5882]: pam_unix(sshd:session): session closed for user core May 8 00:45:38.608648 systemd[1]: sshd@22-10.0.0.155:22-10.0.0.1:59978.service: Deactivated successfully. May 8 00:45:38.612752 systemd-logind[1522]: Session 23 logged out. Waiting for processes to exit. May 8 00:45:38.613369 systemd[1]: session-23.scope: Deactivated successfully. May 8 00:45:38.614691 systemd-logind[1522]: Removed session 23.
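[Editor's note] The kubelet dns.go:153 warnings above ("Nameserver limits were exceeded...") are kubelet clamping the node's resolv.conf to the classic glibc resolver limit of three nameservers when it composes a pod's DNS configuration; extra entries are dropped and the applied nameserver line is logged, here "1.1.1.1 1.0.0.1 8.8.8.8". The Go sketch below is a minimal paraphrase of that clamping step under an assumed simple resolv.conf format; it is an illustration, not kubelet's code.

package main

import (
	"fmt"
	"strings"
)

// maxNameservers mirrors the traditional resolver limit (MAXNS = 3) that
// kubelet enforces when building pod DNS config on Linux.
const maxNameservers = 3

// clampNameservers collects nameserver entries from resolv.conf content
// and truncates the list to maxNameservers, reporting whether any were dropped.
func clampNameservers(resolvConf string) (applied []string, exceeded bool) {
	for _, line := range strings.Split(resolvConf, "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[0] == "nameserver" {
			applied = append(applied, fields[1])
		}
	}
	if len(applied) > maxNameservers {
		return applied[:maxNameservers], true
	}
	return applied, false
}

func main() {
	// Four nameservers configured; only the first three survive the clamp.
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9"
	applied, exceeded := clampNameservers(conf)
	if exceeded {
		// Same shape as the kubelet warning in the journal above.
		fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
			strings.Join(applied, " "))
	}
}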