May 8 00:21:35.895899 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 8 00:21:35.895919 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed May 7 22:57:52 -00 2025
May 8 00:21:35.895929 kernel: KASLR enabled
May 8 00:21:35.895935 kernel: efi: EFI v2.7 by EDK II
May 8 00:21:35.895941 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
May 8 00:21:35.895947 kernel: random: crng init done
May 8 00:21:35.895954 kernel: ACPI: Early table checksum verification disabled
May 8 00:21:35.895959 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
May 8 00:21:35.895966 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
May 8 00:21:35.895973 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:21:35.895979 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:21:35.895985 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:21:35.895991 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:21:35.895997 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:21:35.896004 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:21:35.896012 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:21:35.896018 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:21:35.896025 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:21:35.896031 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 8 00:21:35.896037 kernel: NUMA: Failed to initialise from firmware
May 8 00:21:35.896043 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 8 00:21:35.896050 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
May 8 00:21:35.896056 kernel: Zone ranges:
May 8 00:21:35.896062 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 8 00:21:35.896068 kernel: DMA32 empty
May 8 00:21:35.896076 kernel: Normal empty
May 8 00:21:35.896082 kernel: Movable zone start for each node
May 8 00:21:35.896088 kernel: Early memory node ranges
May 8 00:21:35.896094 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
May 8 00:21:35.896100 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
May 8 00:21:35.896107 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
May 8 00:21:35.896113 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 8 00:21:35.896119 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 8 00:21:35.896125 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 8 00:21:35.896131 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 8 00:21:35.896138 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 8 00:21:35.896144 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 8 00:21:35.896151 kernel: psci: probing for conduit method from ACPI.
May 8 00:21:35.896157 kernel: psci: PSCIv1.1 detected in firmware.
May 8 00:21:35.896164 kernel: psci: Using standard PSCI v0.2 function IDs
May 8 00:21:35.896173 kernel: psci: Trusted OS migration not required
May 8 00:21:35.896179 kernel: psci: SMC Calling Convention v1.1
May 8 00:21:35.896186 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 8 00:21:35.896194 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
May 8 00:21:35.896200 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
May 8 00:21:35.896207 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 8 00:21:35.896214 kernel: Detected PIPT I-cache on CPU0
May 8 00:21:35.896220 kernel: CPU features: detected: GIC system register CPU interface
May 8 00:21:35.896227 kernel: CPU features: detected: Hardware dirty bit management
May 8 00:21:35.896234 kernel: CPU features: detected: Spectre-v4
May 8 00:21:35.896240 kernel: CPU features: detected: Spectre-BHB
May 8 00:21:35.896247 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 8 00:21:35.896254 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 8 00:21:35.896262 kernel: CPU features: detected: ARM erratum 1418040
May 8 00:21:35.896268 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 8 00:21:35.896275 kernel: alternatives: applying boot alternatives
May 8 00:21:35.896282 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=ed66668e4cab2597a697b6f83cdcbc6a64a98dbc7e2125304191704297c07daf
May 8 00:21:35.896290 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 8 00:21:35.896296 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 8 00:21:35.896303 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 8 00:21:35.896310 kernel: Fallback order for Node 0: 0
May 8 00:21:35.896316 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 8 00:21:35.896323 kernel: Policy zone: DMA
May 8 00:21:35.896330 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 8 00:21:35.896337 kernel: software IO TLB: area num 4.
May 8 00:21:35.896344 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
May 8 00:21:35.896351 kernel: Memory: 2386468K/2572288K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 185820K reserved, 0K cma-reserved)
May 8 00:21:35.896358 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 8 00:21:35.896365 kernel: rcu: Preemptible hierarchical RCU implementation.
May 8 00:21:35.896372 kernel: rcu: RCU event tracing is enabled.
May 8 00:21:35.896379 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 8 00:21:35.896385 kernel: Trampoline variant of Tasks RCU enabled.
May 8 00:21:35.896392 kernel: Tracing variant of Tasks RCU enabled.
May 8 00:21:35.896399 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 8 00:21:35.896406 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 8 00:21:35.896413 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 8 00:21:35.896420 kernel: GICv3: 256 SPIs implemented
May 8 00:21:35.896427 kernel: GICv3: 0 Extended SPIs implemented
May 8 00:21:35.896434 kernel: Root IRQ handler: gic_handle_irq
May 8 00:21:35.896440 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 8 00:21:35.896447 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 8 00:21:35.896453 kernel: ITS [mem 0x08080000-0x0809ffff]
May 8 00:21:35.896460 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
May 8 00:21:35.896467 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
May 8 00:21:35.896474 kernel: GICv3: using LPI property table @0x00000000400f0000
May 8 00:21:35.896481 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
May 8 00:21:35.896487 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 8 00:21:35.896495 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:21:35.896502 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 8 00:21:35.896508 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 8 00:21:35.896515 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 8 00:21:35.896522 kernel: arm-pv: using stolen time PV
May 8 00:21:35.896529 kernel: Console: colour dummy device 80x25
May 8 00:21:35.896536 kernel: ACPI: Core revision 20230628
May 8 00:21:35.896543 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 8 00:21:35.896549 kernel: pid_max: default: 32768 minimum: 301
May 8 00:21:35.896556 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 8 00:21:35.896564 kernel: landlock: Up and running.
May 8 00:21:35.896571 kernel: SELinux: Initializing.
May 8 00:21:35.896578 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 8 00:21:35.896585 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 8 00:21:35.896592 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 8 00:21:35.896599 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 8 00:21:35.896606 kernel: rcu: Hierarchical SRCU implementation.
May 8 00:21:35.896613 kernel: rcu: Max phase no-delay instances is 400.
May 8 00:21:35.896620 kernel: Platform MSI: ITS@0x8080000 domain created
May 8 00:21:35.896628 kernel: PCI/MSI: ITS@0x8080000 domain created
May 8 00:21:35.896634 kernel: Remapping and enabling EFI services.
May 8 00:21:35.896641 kernel: smp: Bringing up secondary CPUs ...
May 8 00:21:35.896648 kernel: Detected PIPT I-cache on CPU1
May 8 00:21:35.896655 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 8 00:21:35.896662 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
May 8 00:21:35.896669 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:21:35.896676 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 8 00:21:35.896683 kernel: Detected PIPT I-cache on CPU2
May 8 00:21:35.896689 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 8 00:21:35.896697 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
May 8 00:21:35.896704 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:21:35.896724 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 8 00:21:35.896733 kernel: Detected PIPT I-cache on CPU3
May 8 00:21:35.896740 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 8 00:21:35.896771 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
May 8 00:21:35.896780 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:21:35.896787 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 8 00:21:35.896795 kernel: smp: Brought up 1 node, 4 CPUs
May 8 00:21:35.896804 kernel: SMP: Total of 4 processors activated.
May 8 00:21:35.896811 kernel: CPU features: detected: 32-bit EL0 Support
May 8 00:21:35.896819 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 8 00:21:35.896826 kernel: CPU features: detected: Common not Private translations
May 8 00:21:35.896833 kernel: CPU features: detected: CRC32 instructions
May 8 00:21:35.896841 kernel: CPU features: detected: Enhanced Virtualization Traps
May 8 00:21:35.896848 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 8 00:21:35.896855 kernel: CPU features: detected: LSE atomic instructions
May 8 00:21:35.896864 kernel: CPU features: detected: Privileged Access Never
May 8 00:21:35.896871 kernel: CPU features: detected: RAS Extension Support
May 8 00:21:35.896878 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 8 00:21:35.896886 kernel: CPU: All CPU(s) started at EL1
May 8 00:21:35.896893 kernel: alternatives: applying system-wide alternatives
May 8 00:21:35.896900 kernel: devtmpfs: initialized
May 8 00:21:35.896907 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 8 00:21:35.896914 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 8 00:21:35.896922 kernel: pinctrl core: initialized pinctrl subsystem
May 8 00:21:35.896930 kernel: SMBIOS 3.0.0 present.
May 8 00:21:35.896937 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
May 8 00:21:35.896945 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 8 00:21:35.896952 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 8 00:21:35.896959 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 8 00:21:35.896967 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 8 00:21:35.896974 kernel: audit: initializing netlink subsys (disabled)
May 8 00:21:35.896981 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
May 8 00:21:35.896988 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 8 00:21:35.896997 kernel: cpuidle: using governor menu
May 8 00:21:35.897004 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 8 00:21:35.897011 kernel: ASID allocator initialised with 32768 entries
May 8 00:21:35.897018 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 8 00:21:35.897025 kernel: Serial: AMBA PL011 UART driver
May 8 00:21:35.897033 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 8 00:21:35.897040 kernel: Modules: 0 pages in range for non-PLT usage
May 8 00:21:35.897048 kernel: Modules: 509024 pages in range for PLT usage
May 8 00:21:35.897055 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 8 00:21:35.897063 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 8 00:21:35.897071 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 8 00:21:35.897078 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 8 00:21:35.897085 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 8 00:21:35.897092 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 8 00:21:35.897099 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 8 00:21:35.897107 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 8 00:21:35.897114 kernel: ACPI: Added _OSI(Module Device)
May 8 00:21:35.897121 kernel: ACPI: Added _OSI(Processor Device)
May 8 00:21:35.897129 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 8 00:21:35.897136 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 8 00:21:35.897144 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 8 00:21:35.897151 kernel: ACPI: Interpreter enabled
May 8 00:21:35.897158 kernel: ACPI: Using GIC for interrupt routing
May 8 00:21:35.897165 kernel: ACPI: MCFG table detected, 1 entries
May 8 00:21:35.897172 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 8 00:21:35.897179 kernel: printk: console [ttyAMA0] enabled
May 8 00:21:35.897187 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 8 00:21:35.897316 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 8 00:21:35.897389 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 8 00:21:35.897453 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 8 00:21:35.897516 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 8 00:21:35.897579 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 8 00:21:35.897588 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 8 00:21:35.897595 kernel: PCI host bridge to bus 0000:00
May 8 00:21:35.897665 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 8 00:21:35.897736 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 8 00:21:35.897822 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 8 00:21:35.897881 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 8 00:21:35.897960 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 8 00:21:35.898034 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 8 00:21:35.898104 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 8 00:21:35.898167 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 8 00:21:35.898231 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 8 00:21:35.898295 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 8 00:21:35.898359 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 8 00:21:35.898424 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 8 00:21:35.898481 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 8 00:21:35.898540 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 8 00:21:35.898596 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 8 00:21:35.898606 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 8 00:21:35.898613 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 8 00:21:35.898620 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 8 00:21:35.898627 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 8 00:21:35.898634 kernel: iommu: Default domain type: Translated
May 8 00:21:35.898641 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 8 00:21:35.898649 kernel: efivars: Registered efivars operations
May 8 00:21:35.898658 kernel: vgaarb: loaded
May 8 00:21:35.898665 kernel: clocksource: Switched to clocksource arch_sys_counter
May 8 00:21:35.898672 kernel: VFS: Disk quotas dquot_6.6.0
May 8 00:21:35.898680 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 8 00:21:35.898687 kernel: pnp: PnP ACPI init
May 8 00:21:35.898783 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 8 00:21:35.898795 kernel: pnp: PnP ACPI: found 1 devices
May 8 00:21:35.898802 kernel: NET: Registered PF_INET protocol family
May 8 00:21:35.898812 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 8 00:21:35.898820 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 8 00:21:35.898827 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 8 00:21:35.898834 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 8 00:21:35.898842 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 8 00:21:35.898849 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 8 00:21:35.898856 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 8 00:21:35.898864 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 8 00:21:35.898871 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 8 00:21:35.898879 kernel: PCI: CLS 0 bytes, default 64
May 8 00:21:35.898886 kernel: kvm [1]: HYP mode not available
May 8 00:21:35.898893 kernel: Initialise system trusted keyrings
May 8 00:21:35.898901 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 8 00:21:35.898908 kernel: Key type asymmetric registered
May 8 00:21:35.898915 kernel: Asymmetric key parser 'x509' registered
May 8 00:21:35.898922 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 8 00:21:35.898929 kernel: io scheduler mq-deadline registered
May 8 00:21:35.898936 kernel: io scheduler kyber registered
May 8 00:21:35.898945 kernel: io scheduler bfq registered
May 8 00:21:35.898952 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 8 00:21:35.898960 kernel: ACPI: button: Power Button [PWRB]
May 8 00:21:35.898967 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 8 00:21:35.899035 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 8 00:21:35.899045 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 8 00:21:35.899052 kernel: thunder_xcv, ver 1.0
May 8 00:21:35.899059 kernel: thunder_bgx, ver 1.0
May 8 00:21:35.899066 kernel: nicpf, ver 1.0
May 8 00:21:35.899075 kernel: nicvf, ver 1.0
May 8 00:21:35.899150 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 8 00:21:35.899211 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-08T00:21:35 UTC (1746663695)
May 8 00:21:35.899221 kernel: hid: raw HID events driver (C) Jiri Kosina
May 8 00:21:35.899228 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 8 00:21:35.899236 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 8 00:21:35.899243 kernel: watchdog: Hard watchdog permanently disabled
May 8 00:21:35.899251 kernel: NET: Registered PF_INET6 protocol family
May 8 00:21:35.899260 kernel: Segment Routing with IPv6
May 8 00:21:35.899267 kernel: In-situ OAM (IOAM) with IPv6
May 8 00:21:35.899274 kernel: NET: Registered PF_PACKET protocol family
May 8 00:21:35.899281 kernel: Key type dns_resolver registered
May 8 00:21:35.899288 kernel: registered taskstats version 1
May 8 00:21:35.899296 kernel: Loading compiled-in X.509 certificates
May 8 00:21:35.899303 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: e350a514a19a92525be490be8fe368f9972240ea'
May 8 00:21:35.899310 kernel: Key type .fscrypt registered
May 8 00:21:35.899317 kernel: Key type fscrypt-provisioning registered
May 8 00:21:35.899326 kernel: ima: No TPM chip found, activating TPM-bypass!
May 8 00:21:35.899333 kernel: ima: Allocated hash algorithm: sha1
May 8 00:21:35.899340 kernel: ima: No architecture policies found
May 8 00:21:35.899348 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 8 00:21:35.899355 kernel: clk: Disabling unused clocks
May 8 00:21:35.899362 kernel: Freeing unused kernel memory: 39424K
May 8 00:21:35.899369 kernel: Run /init as init process
May 8 00:21:35.899376 kernel: with arguments:
May 8 00:21:35.899383 kernel: /init
May 8 00:21:35.899392 kernel: with environment:
May 8 00:21:35.899399 kernel: HOME=/
May 8 00:21:35.899406 kernel: TERM=linux
May 8 00:21:35.899413 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 8 00:21:35.899422 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 8 00:21:35.899431 systemd[1]: Detected virtualization kvm.
May 8 00:21:35.899439 systemd[1]: Detected architecture arm64.
May 8 00:21:35.899447 systemd[1]: Running in initrd.
May 8 00:21:35.899455 systemd[1]: No hostname configured, using default hostname.
May 8 00:21:35.899462 systemd[1]: Hostname set to .
May 8 00:21:35.899470 systemd[1]: Initializing machine ID from VM UUID.
May 8 00:21:35.899478 systemd[1]: Queued start job for default target initrd.target.
May 8 00:21:35.899485 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 00:21:35.899493 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 00:21:35.899501 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 8 00:21:35.899511 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 8 00:21:35.899519 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 8 00:21:35.899527 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 8 00:21:35.899536 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 8 00:21:35.899544 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 8 00:21:35.899551 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 00:21:35.899559 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 8 00:21:35.899568 systemd[1]: Reached target paths.target - Path Units.
May 8 00:21:35.899576 systemd[1]: Reached target slices.target - Slice Units.
May 8 00:21:35.899583 systemd[1]: Reached target swap.target - Swaps.
May 8 00:21:35.899591 systemd[1]: Reached target timers.target - Timer Units.
May 8 00:21:35.899599 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 8 00:21:35.899606 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 8 00:21:35.899614 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 8 00:21:35.899622 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 8 00:21:35.899629 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 8 00:21:35.899638 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 8 00:21:35.899646 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 00:21:35.899654 systemd[1]: Reached target sockets.target - Socket Units.
May 8 00:21:35.899661 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 8 00:21:35.899669 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 8 00:21:35.899677 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 8 00:21:35.899684 systemd[1]: Starting systemd-fsck-usr.service...
May 8 00:21:35.899692 systemd[1]: Starting systemd-journald.service - Journal Service...
May 8 00:21:35.899701 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 8 00:21:35.899709 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:21:35.899725 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 8 00:21:35.899733 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 00:21:35.899741 systemd[1]: Finished systemd-fsck-usr.service.
May 8 00:21:35.899756 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 8 00:21:35.899782 systemd-journald[239]: Collecting audit messages is disabled.
May 8 00:21:35.899801 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:21:35.899809 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 8 00:21:35.899819 systemd-journald[239]: Journal started
May 8 00:21:35.899838 systemd-journald[239]: Runtime Journal (/run/log/journal/de13a5aacc9c43b6911b0d60bd7c3b92) is 5.9M, max 47.3M, 41.4M free.
May 8 00:21:35.891410 systemd-modules-load[240]: Inserted module 'overlay'
May 8 00:21:35.902781 systemd[1]: Started systemd-journald.service - Journal Service.
May 8 00:21:35.902806 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 8 00:21:35.905670 systemd-modules-load[240]: Inserted module 'br_netfilter'
May 8 00:21:35.906561 kernel: Bridge firewalling registered
May 8 00:21:35.907780 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 8 00:21:35.920990 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:21:35.922652 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 8 00:21:35.924585 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 8 00:21:35.927906 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 8 00:21:35.932239 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 8 00:21:35.936924 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 00:21:35.939201 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 00:21:35.952894 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 8 00:21:35.954031 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:21:35.957059 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 8 00:21:35.970052 dracut-cmdline[281]: dracut-dracut-053
May 8 00:21:35.972438 dracut-cmdline[281]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=ed66668e4cab2597a697b6f83cdcbc6a64a98dbc7e2125304191704297c07daf
May 8 00:21:35.983981 systemd-resolved[278]: Positive Trust Anchors:
May 8 00:21:35.983996 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 8 00:21:35.984028 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 8 00:21:35.988635 systemd-resolved[278]: Defaulting to hostname 'linux'.
May 8 00:21:35.989914 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 8 00:21:35.992790 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 8 00:21:36.038772 kernel: SCSI subsystem initialized
May 8 00:21:36.043769 kernel: Loading iSCSI transport class v2.0-870.
May 8 00:21:36.052796 kernel: iscsi: registered transport (tcp)
May 8 00:21:36.065780 kernel: iscsi: registered transport (qla4xxx)
May 8 00:21:36.065805 kernel: QLogic iSCSI HBA Driver
May 8 00:21:36.107931 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 8 00:21:36.115925 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 8 00:21:36.132799 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 8 00:21:36.134021 kernel: device-mapper: uevent: version 1.0.3
May 8 00:21:36.134042 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 8 00:21:36.178776 kernel: raid6: neonx8 gen() 15747 MB/s
May 8 00:21:36.195764 kernel: raid6: neonx4 gen() 15603 MB/s
May 8 00:21:36.212765 kernel: raid6: neonx2 gen() 13193 MB/s
May 8 00:21:36.229783 kernel: raid6: neonx1 gen() 10454 MB/s
May 8 00:21:36.246761 kernel: raid6: int64x8 gen() 6943 MB/s
May 8 00:21:36.263763 kernel: raid6: int64x4 gen() 7344 MB/s
May 8 00:21:36.280766 kernel: raid6: int64x2 gen() 6112 MB/s
May 8 00:21:36.297770 kernel: raid6: int64x1 gen() 5049 MB/s
May 8 00:21:36.297818 kernel: raid6: using algorithm neonx8 gen() 15747 MB/s
May 8 00:21:36.314771 kernel: raid6: .... xor() 11912 MB/s, rmw enabled
May 8 00:21:36.314796 kernel: raid6: using neon recovery algorithm
May 8 00:21:36.322040 kernel: xor: measuring software checksum speed
May 8 00:21:36.322059 kernel: 8regs : 19788 MB/sec
May 8 00:21:36.322068 kernel: 32regs : 19688 MB/sec
May 8 00:21:36.322987 kernel: arm64_neon : 27043 MB/sec
May 8 00:21:36.323003 kernel: xor: using function: arm64_neon (27043 MB/sec)
May 8 00:21:36.374792 kernel: Btrfs loaded, zoned=no, fsverity=no
May 8 00:21:36.385111 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 8 00:21:36.393903 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 00:21:36.404999 systemd-udevd[462]: Using default interface naming scheme 'v255'.
May 8 00:21:36.408116 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 00:21:36.422925 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 8 00:21:36.433816 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation
May 8 00:21:36.460841 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 8 00:21:36.467905 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 8 00:21:36.507929 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 00:21:36.515880 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 8 00:21:36.528786 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 8 00:21:36.530302 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 8 00:21:36.531757 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 00:21:36.533945 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 8 00:21:36.543901 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 8 00:21:36.547781 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 8 00:21:36.567040 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 8 00:21:36.567152 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 8 00:21:36.567164 kernel: GPT:9289727 != 19775487
May 8 00:21:36.567181 kernel: GPT:Alternate GPT header not at the end of the disk.
May 8 00:21:36.567192 kernel: GPT:9289727 != 19775487
May 8 00:21:36.567201 kernel: GPT: Use GNU Parted to correct GPT errors.
May 8 00:21:36.567210 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:21:36.557477 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 8 00:21:36.567073 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 8 00:21:36.567172 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:21:36.569443 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:21:36.570488 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 00:21:36.570617 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:21:36.572450 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:21:36.581996 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:21:36.588782 kernel: BTRFS: device fsid 0be52225-f929-4b89-9354-df54a643ece0 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (521)
May 8 00:21:36.590868 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (512)
May 8 00:21:36.595927 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 8 00:21:36.597342 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:21:36.606605 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 8 00:21:36.610684 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 8 00:21:36.611884 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 8 00:21:36.617399 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 8 00:21:36.636881 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 8 00:21:36.638569 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:21:36.644772 disk-uuid[551]: Primary Header is updated.
May 8 00:21:36.644772 disk-uuid[551]: Secondary Entries is updated.
May 8 00:21:36.644772 disk-uuid[551]: Secondary Header is updated.
May 8 00:21:36.650799 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:21:36.664039 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:21:37.664726 disk-uuid[552]: The operation has completed successfully.
May 8 00:21:37.665836 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:21:37.687956 systemd[1]: disk-uuid.service: Deactivated successfully.
May 8 00:21:37.688056 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 8 00:21:37.705911 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 8 00:21:37.708821 sh[575]: Success
May 8 00:21:37.719813 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 8 00:21:37.749123 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 8 00:21:37.766020 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 8 00:21:37.767654 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 8 00:21:37.777874 kernel: BTRFS info (device dm-0): first mount of filesystem 0be52225-f929-4b89-9354-df54a643ece0
May 8 00:21:37.777912 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 8 00:21:37.777923 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 8 00:21:37.779222 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 8 00:21:37.779237 kernel: BTRFS info (device dm-0): using free space tree
May 8 00:21:37.783297 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 8 00:21:37.784456 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 8 00:21:37.795915 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 8 00:21:37.797251 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 8 00:21:37.805093 kernel: BTRFS info (device vda6): first mount of filesystem a4a0b304-74d7-4600-bc4f-fa8751ae54a8
May 8 00:21:37.805127 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 8 00:21:37.805142 kernel: BTRFS info (device vda6): using free space tree
May 8 00:21:37.806777 kernel: BTRFS info (device vda6): auto enabling async discard
May 8 00:21:37.813308 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 8 00:21:37.814813 kernel: BTRFS info (device vda6): last unmount of filesystem a4a0b304-74d7-4600-bc4f-fa8751ae54a8
May 8 00:21:37.819533 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 8 00:21:37.824909 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 8 00:21:37.888861 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 8 00:21:37.906906 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 8 00:21:37.916326 ignition[665]: Ignition 2.19.0
May 8 00:21:37.916335 ignition[665]: Stage: fetch-offline
May 8 00:21:37.916368 ignition[665]: no configs at "/usr/lib/ignition/base.d"
May 8 00:21:37.916376 ignition[665]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:21:37.916527 ignition[665]: parsed url from cmdline: ""
May 8 00:21:37.916530 ignition[665]: no config URL provided
May 8 00:21:37.916534 ignition[665]: reading system config file "/usr/lib/ignition/user.ign"
May 8 00:21:37.916541 ignition[665]: no config at "/usr/lib/ignition/user.ign"
May 8 00:21:37.916561 ignition[665]: op(1): [started] loading QEMU firmware config module
May 8 00:21:37.916565 ignition[665]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 8 00:21:37.922639 ignition[665]: op(1): [finished] loading QEMU firmware config module
May 8 00:21:37.928859 systemd-networkd[766]: lo: Link UP
May 8 00:21:37.928870 systemd-networkd[766]: lo: Gained carrier
May 8 00:21:37.929499 systemd-networkd[766]: Enumeration completed
May 8 00:21:37.929576 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 8 00:21:37.930072 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 00:21:37.930075 systemd-networkd[766]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 8 00:21:37.930953 systemd[1]: Reached target network.target - Network.
May 8 00:21:37.930965 systemd-networkd[766]: eth0: Link UP
May 8 00:21:37.930968 systemd-networkd[766]: eth0: Gained carrier
May 8 00:21:37.930974 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 00:21:37.951809 systemd-networkd[766]: eth0: DHCPv4 address 10.0.0.58/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 8 00:21:37.972402 ignition[665]: parsing config with SHA512: c9256d7e0860a87ea0af6c44b043b6586bfe3507558cae1ba5dfa8cc90158ec054562910e1aa7368e811a210f308b42e3d97ed6e6a9141b7612e78e7f6b97586
May 8 00:21:37.977499 unknown[665]: fetched base config from "system"
May 8 00:21:37.977518 unknown[665]: fetched user config from "qemu"
May 8 00:21:37.978872 ignition[665]: fetch-offline: fetch-offline passed
May 8 00:21:37.978962 ignition[665]: Ignition finished successfully
May 8 00:21:37.980612 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 8 00:21:37.981743 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 8 00:21:37.989955 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 8 00:21:38.001140 ignition[773]: Ignition 2.19.0
May 8 00:21:38.001150 ignition[773]: Stage: kargs
May 8 00:21:38.001311 ignition[773]: no configs at "/usr/lib/ignition/base.d"
May 8 00:21:38.001321 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:21:38.003956 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 8 00:21:38.002193 ignition[773]: kargs: kargs passed
May 8 00:21:38.002235 ignition[773]: Ignition finished successfully
May 8 00:21:38.013882 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 8 00:21:38.023205 ignition[781]: Ignition 2.19.0
May 8 00:21:38.023214 ignition[781]: Stage: disks
May 8 00:21:38.023374 ignition[781]: no configs at "/usr/lib/ignition/base.d"
May 8 00:21:38.023383 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:21:38.025870 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 8 00:21:38.024231 ignition[781]: disks: disks passed
May 8 00:21:38.024274 ignition[781]: Ignition finished successfully
May 8 00:21:38.029075 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 8 00:21:38.030509 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 8 00:21:38.032489 systemd[1]: Reached target local-fs.target - Local File Systems.
May 8 00:21:38.034511 systemd[1]: Reached target sysinit.target - System Initialization.
May 8 00:21:38.036359 systemd[1]: Reached target basic.target - Basic System.
May 8 00:21:38.048894 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 8 00:21:38.058130 systemd-fsck[792]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 8 00:21:38.062111 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 8 00:21:38.064134 systemd[1]: Mounting sysroot.mount - /sysroot...
May 8 00:21:38.106608 systemd[1]: Mounted sysroot.mount - /sysroot.
May 8 00:21:38.108131 kernel: EXT4-fs (vda9): mounted filesystem f1546e2a-34df-485a-a644-37e10cd925e0 r/w with ordered data mode. Quota mode: none.
May 8 00:21:38.107882 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 8 00:21:38.117851 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 8 00:21:38.120017 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 8 00:21:38.120988 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 8 00:21:38.121024 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 8 00:21:38.121044 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 8 00:21:38.126910 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 8 00:21:38.129203 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 8 00:21:38.134147 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (800)
May 8 00:21:38.134169 kernel: BTRFS info (device vda6): first mount of filesystem a4a0b304-74d7-4600-bc4f-fa8751ae54a8
May 8 00:21:38.134179 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 8 00:21:38.134195 kernel: BTRFS info (device vda6): using free space tree
May 8 00:21:38.134205 kernel: BTRFS info (device vda6): auto enabling async discard
May 8 00:21:38.136244 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 8 00:21:38.171066 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory
May 8 00:21:38.174557 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory
May 8 00:21:38.178595 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory
May 8 00:21:38.182377 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory
May 8 00:21:38.254272 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 8 00:21:38.263870 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 8 00:21:38.266154 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 8 00:21:38.270785 kernel: BTRFS info (device vda6): last unmount of filesystem a4a0b304-74d7-4600-bc4f-fa8751ae54a8
May 8 00:21:38.284946 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 8 00:21:38.286797 ignition[913]: INFO : Ignition 2.19.0
May 8 00:21:38.286797 ignition[913]: INFO : Stage: mount
May 8 00:21:38.288313 ignition[913]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:21:38.288313 ignition[913]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:21:38.288313 ignition[913]: INFO : mount: mount passed
May 8 00:21:38.288313 ignition[913]: INFO : Ignition finished successfully
May 8 00:21:38.291832 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 8 00:21:38.297852 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 8 00:21:38.777347 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 8 00:21:38.786932 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 8 00:21:38.791766 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (926)
May 8 00:21:38.793952 kernel: BTRFS info (device vda6): first mount of filesystem a4a0b304-74d7-4600-bc4f-fa8751ae54a8
May 8 00:21:38.793977 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 8 00:21:38.793988 kernel: BTRFS info (device vda6): using free space tree
May 8 00:21:38.795764 kernel: BTRFS info (device vda6): auto enabling async discard
May 8 00:21:38.796887 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 8 00:21:38.811568 ignition[943]: INFO : Ignition 2.19.0
May 8 00:21:38.811568 ignition[943]: INFO : Stage: files
May 8 00:21:38.813196 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:21:38.813196 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:21:38.813196 ignition[943]: DEBUG : files: compiled without relabeling support, skipping
May 8 00:21:38.816639 ignition[943]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 8 00:21:38.816639 ignition[943]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 8 00:21:38.816639 ignition[943]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 8 00:21:38.816639 ignition[943]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 8 00:21:38.816639 ignition[943]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 8 00:21:38.815575 unknown[943]: wrote ssh authorized keys file for user: core
May 8 00:21:38.823839 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 8 00:21:38.823839 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 8 00:21:38.912030 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 8 00:21:39.099138 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 8 00:21:39.099138 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 8 00:21:39.102888 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 8 00:21:39.102888 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 8 00:21:39.102888 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 8 00:21:39.102888 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 00:21:39.102888 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 00:21:39.102888 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 8 00:21:39.102888 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 8 00:21:39.102888 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 8 00:21:39.102888 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 8 00:21:39.102888 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 8 00:21:39.102888 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 8 00:21:39.102888 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 8 00:21:39.102888 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
May 8 00:21:39.413294 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 8 00:21:39.741818 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 8 00:21:39.741818 ignition[943]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 8 00:21:39.745476 ignition[943]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 00:21:39.745476 ignition[943]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 00:21:39.745476 ignition[943]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 8 00:21:39.745476 ignition[943]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
May 8 00:21:39.745476 ignition[943]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 8 00:21:39.745476 ignition[943]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 8 00:21:39.745476 ignition[943]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
May 8 00:21:39.745476 ignition[943]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
May 8 00:21:39.770488 ignition[943]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 8 00:21:39.773937 ignition[943]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 8 00:21:39.776634 ignition[943]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
May 8 00:21:39.776634 ignition[943]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
May 8 00:21:39.776634 ignition[943]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
May 8 00:21:39.776634 ignition[943]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
May 8 00:21:39.776634 ignition[943]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 8 00:21:39.776634 ignition[943]: INFO : files: files passed
May 8 00:21:39.776634 ignition[943]: INFO : Ignition finished successfully
May 8 00:21:39.776505 systemd[1]: Finished ignition-files.service - Ignition (files).
May 8 00:21:39.787896 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 8 00:21:39.790417 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 8 00:21:39.792542 systemd[1]: ignition-quench.service: Deactivated successfully.
May 8 00:21:39.794035 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 8 00:21:39.798470 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory
May 8 00:21:39.801821 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 8 00:21:39.801821 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 8 00:21:39.805145 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 8 00:21:39.804343 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 8 00:21:39.806637 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 8 00:21:39.819885 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 8 00:21:39.838653 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 8 00:21:39.838782 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 8 00:21:39.841031 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 8 00:21:39.842831 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 8 00:21:39.844641 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 8 00:21:39.846937 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 8 00:21:39.861010 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 8 00:21:39.870916 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 8 00:21:39.878263 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 8 00:21:39.879542 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 00:21:39.881710 systemd[1]: Stopped target timers.target - Timer Units. May 8 00:21:39.883502 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 8 00:21:39.883626 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 8 00:21:39.886157 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 8 00:21:39.888157 systemd[1]: Stopped target basic.target - Basic System. May 8 00:21:39.889791 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 8 00:21:39.891604 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 8 00:21:39.893649 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 8 00:21:39.893915 systemd-networkd[766]: eth0: Gained IPv6LL May 8 00:21:39.895812 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 8 00:21:39.897735 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 8 00:21:39.899483 systemd[1]: Stopped target sysinit.target - System Initialization. May 8 00:21:39.901248 systemd[1]: Stopped target local-fs.target - Local File Systems. May 8 00:21:39.903228 systemd[1]: Stopped target swap.target - Swaps. May 8 00:21:39.904865 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 8 00:21:39.904983 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 8 00:21:39.907410 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 8 00:21:39.908635 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:21:39.910642 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 8 00:21:39.913809 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 00:21:39.915130 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 8 00:21:39.915251 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 8 00:21:39.918148 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 8 00:21:39.918267 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 8 00:21:39.920265 systemd[1]: Stopped target paths.target - Path Units. May 8 00:21:39.921879 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 8 00:21:39.922861 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 00:21:39.924977 systemd[1]: Stopped target slices.target - Slice Units. May 8 00:21:39.926550 systemd[1]: Stopped target sockets.target - Socket Units. May 8 00:21:39.928263 systemd[1]: iscsid.socket: Deactivated successfully. May 8 00:21:39.928382 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 8 00:21:39.930408 systemd[1]: iscsiuio.socket: Deactivated successfully. May 8 00:21:39.930524 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 8 00:21:39.932002 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 8 00:21:39.932141 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 8 00:21:39.933907 systemd[1]: ignition-files.service: Deactivated successfully. May 8 00:21:39.934044 systemd[1]: Stopped ignition-files.service - Ignition (files). May 8 00:21:39.947999 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
May 8 00:21:39.949702 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 8 00:21:39.949890 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 8 00:21:39.954988 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 8 00:21:39.955826 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 8 00:21:39.956000 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:21:39.961712 ignition[998]: INFO : Ignition 2.19.0 May 8 00:21:39.961712 ignition[998]: INFO : Stage: umount May 8 00:21:39.961712 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:21:39.961712 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:21:39.961712 ignition[998]: INFO : umount: umount passed May 8 00:21:39.961712 ignition[998]: INFO : Ignition finished successfully May 8 00:21:39.958629 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 8 00:21:39.958797 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 8 00:21:39.963161 systemd[1]: ignition-mount.service: Deactivated successfully. May 8 00:21:39.963246 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 8 00:21:39.967119 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 8 00:21:39.967280 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 8 00:21:39.969351 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 8 00:21:39.970054 systemd[1]: Stopped target network.target - Network. May 8 00:21:39.971079 systemd[1]: ignition-disks.service: Deactivated successfully. May 8 00:21:39.971146 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 8 00:21:39.972242 systemd[1]: ignition-kargs.service: Deactivated successfully. May 8 00:21:39.972287 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 8 00:21:39.974788 systemd[1]: ignition-setup.service: Deactivated successfully. May 8 00:21:39.974838 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 8 00:21:39.976664 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 8 00:21:39.976725 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 8 00:21:39.978717 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 8 00:21:39.980474 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 8 00:21:39.987337 systemd[1]: systemd-resolved.service: Deactivated successfully. May 8 00:21:39.987445 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 8 00:21:39.990171 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 8 00:21:39.990224 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 00:21:39.992097 systemd-networkd[766]: eth0: DHCPv6 lease lost May 8 00:21:39.994285 systemd[1]: systemd-networkd.service: Deactivated successfully. May 8 00:21:39.994420 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 8 00:21:39.996150 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 8 00:21:39.996180 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 8 00:21:40.006883 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 8 00:21:40.008512 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
May 8 00:21:40.008571 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 8 00:21:40.010604 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 8 00:21:40.010648 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 8 00:21:40.012477 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 8 00:21:40.012518 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 8 00:21:40.014848 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 00:21:40.025906 systemd[1]: network-cleanup.service: Deactivated successfully. May 8 00:21:40.026018 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 8 00:21:40.029814 systemd[1]: systemd-udevd.service: Deactivated successfully. May 8 00:21:40.029954 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 00:21:40.032283 systemd[1]: sysroot-boot.service: Deactivated successfully. May 8 00:21:40.032362 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 8 00:21:40.034435 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 8 00:21:40.034491 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 8 00:21:40.035833 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 8 00:21:40.035867 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 8 00:21:40.037562 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 8 00:21:40.037608 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 8 00:21:40.040405 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 8 00:21:40.040449 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 8 00:21:40.043117 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 8 00:21:40.043166 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:21:40.046065 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 8 00:21:40.046110 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 8 00:21:40.057952 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 8 00:21:40.059062 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 8 00:21:40.059120 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 00:21:40.061290 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 00:21:40.061334 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:21:40.063475 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 8 00:21:40.063567 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 8 00:21:40.065652 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 8 00:21:40.067821 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 8 00:21:40.076823 systemd[1]: Switching root. May 8 00:21:40.110409 systemd-journald[239]: Journal stopped May 8 00:21:40.816976 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). 
May 8 00:21:40.817032 kernel: SELinux: policy capability network_peer_controls=1 May 8 00:21:40.817045 kernel: SELinux: policy capability open_perms=1 May 8 00:21:40.819202 kernel: SELinux: policy capability extended_socket_class=1 May 8 00:21:40.819240 kernel: SELinux: policy capability always_check_network=0 May 8 00:21:40.819250 kernel: SELinux: policy capability cgroup_seclabel=1 May 8 00:21:40.819263 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 8 00:21:40.819273 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 8 00:21:40.819286 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 8 00:21:40.819295 kernel: audit: type=1403 audit(1746663700.240:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 8 00:21:40.819307 systemd[1]: Successfully loaded SELinux policy in 29.409ms. May 8 00:21:40.819320 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.980ms. May 8 00:21:40.819331 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 8 00:21:40.819345 systemd[1]: Detected virtualization kvm. May 8 00:21:40.819355 systemd[1]: Detected architecture arm64. May 8 00:21:40.819365 systemd[1]: Detected first boot. May 8 00:21:40.819375 systemd[1]: Initializing machine ID from VM UUID. May 8 00:21:40.819386 zram_generator::config[1043]: No configuration found. May 8 00:21:40.819397 systemd[1]: Populated /etc with preset unit settings. May 8 00:21:40.819407 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 8 00:21:40.819418 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 8 00:21:40.819433 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 8 00:21:40.819445 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 8 00:21:40.819455 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 8 00:21:40.819466 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 8 00:21:40.819476 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 8 00:21:40.819487 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 8 00:21:40.819499 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 8 00:21:40.819509 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 8 00:21:40.819521 systemd[1]: Created slice user.slice - User and Session Slice. May 8 00:21:40.819531 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 00:21:40.819542 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 00:21:40.819552 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 8 00:21:40.819563 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 8 00:21:40.819573 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 8 00:21:40.819584 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
May 8 00:21:40.819594 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 8 00:21:40.819605 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:21:40.819615 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 8 00:21:40.819626 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 8 00:21:40.819637 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 8 00:21:40.819647 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 8 00:21:40.819657 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 00:21:40.819668 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 8 00:21:40.819678 systemd[1]: Reached target slices.target - Slice Units. May 8 00:21:40.819700 systemd[1]: Reached target swap.target - Swaps. May 8 00:21:40.819715 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 8 00:21:40.819726 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 8 00:21:40.819737 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 8 00:21:40.819766 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 8 00:21:40.819780 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 8 00:21:40.819790 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 8 00:21:40.819801 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 8 00:21:40.819811 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 8 00:21:40.819821 systemd[1]: Mounting media.mount - External Media Directory... May 8 00:21:40.819834 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 8 00:21:40.819844 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 8 00:21:40.819854 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 8 00:21:40.819865 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 8 00:21:40.819876 systemd[1]: Reached target machines.target - Containers. May 8 00:21:40.819886 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 8 00:21:40.819896 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 00:21:40.819907 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 8 00:21:40.819917 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 8 00:21:40.819929 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:21:40.819940 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 8 00:21:40.819950 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:21:40.819960 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 8 00:21:40.819970 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:21:40.819982 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
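
A reading aid for the long systemd feature line above: each +/- token is a compile-time option enabled or disabled in this build. Plain string handling, nothing Flatcar-specific:

    # Feature tokens copied from the "systemd 255 running in system mode" entry
    # above (the trailing default-hierarchy=unified key is omitted).
    features = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT "
                "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN "
                "+IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT "
                "-QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK "
                "-XKBCOMMON +UTMP -SYSVINIT")
    enabled = [f[1:] for f in features.split() if f.startswith("+")]
    disabled = [f[1:] for f in features.split() if f.startswith("-")]
    print(f"{len(enabled)} enabled; disabled: {', '.join(disabled)}")
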
May 8 00:21:40.819992 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 8 00:21:40.820003 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 8 00:21:40.820014 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 8 00:21:40.820025 systemd[1]: Stopped systemd-fsck-usr.service. May 8 00:21:40.820035 kernel: fuse: init (API version 7.39) May 8 00:21:40.820046 systemd[1]: Starting systemd-journald.service - Journal Service... May 8 00:21:40.820055 kernel: loop: module loaded May 8 00:21:40.820065 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 8 00:21:40.820076 kernel: ACPI: bus type drm_connector registered May 8 00:21:40.820085 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 8 00:21:40.820096 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 8 00:21:40.820108 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 8 00:21:40.820118 systemd[1]: verity-setup.service: Deactivated successfully. May 8 00:21:40.820129 systemd[1]: Stopped verity-setup.service. May 8 00:21:40.820139 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 8 00:21:40.820149 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 8 00:21:40.820159 systemd[1]: Mounted media.mount - External Media Directory. May 8 00:21:40.820170 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 8 00:21:40.820203 systemd-journald[1103]: Collecting audit messages is disabled. May 8 00:21:40.820226 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 8 00:21:40.820238 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 8 00:21:40.820248 systemd-journald[1103]: Journal started May 8 00:21:40.820270 systemd-journald[1103]: Runtime Journal (/run/log/journal/de13a5aacc9c43b6911b0d60bd7c3b92) is 5.9M, max 47.3M, 41.4M free. May 8 00:21:40.619881 systemd[1]: Queued start job for default target multi-user.target. May 8 00:21:40.644391 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 8 00:21:40.644738 systemd[1]: systemd-journald.service: Deactivated successfully. May 8 00:21:40.825183 systemd[1]: Started systemd-journald.service - Journal Service. May 8 00:21:40.825966 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 8 00:21:40.827450 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 8 00:21:40.828891 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 8 00:21:40.829025 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 8 00:21:40.830429 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:21:40.830593 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:21:40.832124 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 00:21:40.832258 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 8 00:21:40.833676 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:21:40.833831 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:21:40.835262 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 8 00:21:40.835401 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
May 8 00:21:40.836698 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:21:40.836859 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:21:40.838247 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 8 00:21:40.839645 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 8 00:21:40.843145 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 8 00:21:40.855921 systemd[1]: Reached target network-pre.target - Preparation for Network. May 8 00:21:40.862854 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 8 00:21:40.864936 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 8 00:21:40.866124 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 8 00:21:40.866178 systemd[1]: Reached target local-fs.target - Local File Systems. May 8 00:21:40.868210 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 8 00:21:40.870494 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 8 00:21:40.872618 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 8 00:21:40.873807 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:21:40.875265 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 8 00:21:40.879513 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 8 00:21:40.880714 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:21:40.881934 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 8 00:21:40.883142 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 8 00:21:40.885788 systemd-journald[1103]: Time spent on flushing to /var/log/journal/de13a5aacc9c43b6911b0d60bd7c3b92 is 19.711ms for 851 entries. May 8 00:21:40.885788 systemd-journald[1103]: System Journal (/var/log/journal/de13a5aacc9c43b6911b0d60bd7c3b92) is 8.0M, max 195.6M, 187.6M free. May 8 00:21:40.910398 systemd-journald[1103]: Received client request to flush runtime journal. May 8 00:21:40.910442 kernel: loop0: detected capacity change from 0 to 194096 May 8 00:21:40.886134 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 8 00:21:40.891929 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 8 00:21:40.897961 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 8 00:21:40.901592 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:21:40.903131 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 8 00:21:40.905001 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 8 00:21:40.908151 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 8 00:21:40.909663 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 8 00:21:40.911321 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
May 8 00:21:40.917494 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 8 00:21:40.920183 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 8 00:21:40.922767 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 8 00:21:40.931962 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 8 00:21:40.934337 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 8 00:21:40.946896 udevadm[1170]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 8 00:21:40.953253 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 8 00:21:40.958776 kernel: loop1: detected capacity change from 0 to 114432 May 8 00:21:40.965181 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 8 00:21:40.970872 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 8 00:21:40.971481 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 8 00:21:40.994917 systemd-tmpfiles[1172]: ACLs are not supported, ignoring. May 8 00:21:40.995778 kernel: loop2: detected capacity change from 0 to 114328 May 8 00:21:40.994933 systemd-tmpfiles[1172]: ACLs are not supported, ignoring. May 8 00:21:40.998666 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 00:21:41.023768 kernel: loop3: detected capacity change from 0 to 194096 May 8 00:21:41.029846 kernel: loop4: detected capacity change from 0 to 114432 May 8 00:21:41.033770 kernel: loop5: detected capacity change from 0 to 114328 May 8 00:21:41.036488 (sd-merge)[1179]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 8 00:21:41.036871 (sd-merge)[1179]: Merged extensions into '/usr'. May 8 00:21:41.043020 systemd[1]: Reloading requested from client PID 1154 ('systemd-sysext') (unit systemd-sysext.service)... May 8 00:21:41.043041 systemd[1]: Reloading... May 8 00:21:41.109778 zram_generator::config[1209]: No configuration found. May 8 00:21:41.193849 ldconfig[1149]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 8 00:21:41.207359 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:21:41.252274 systemd[1]: Reloading finished in 208 ms. May 8 00:21:41.281835 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 8 00:21:41.285131 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 8 00:21:41.298947 systemd[1]: Starting ensure-sysext.service... May 8 00:21:41.300903 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 8 00:21:41.317507 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 8 00:21:41.317786 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 8 00:21:41.317803 systemd[1]: Reloading requested from client PID 1240 ('systemctl') (unit ensure-sysext.service)... May 8 00:21:41.317814 systemd[1]: Reloading... 
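
The (sd-merge) entries just above are systemd-sysext activating the extension images staged earlier in this log: the containerd-flatcar, docker-flatcar, and kubernetes images are overlaid onto /usr, and the "Reloading" that follows picks up the unit files they ship. One way to see which extensions ended up active on a running host (path layout per the sysext convention; purely illustrative):

    import os

    # Each merged sysext image ships an extension-release file; after the
    # merge these become visible under the /usr hierarchy.
    release_dir = "/usr/lib/extension-release.d"
    if os.path.isdir(release_dir):
        for name in sorted(os.listdir(release_dir)):
            print(name)  # e.g. extension-release.kubernetes
    else:
        print("no merged extensions")
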
May 8 00:21:41.318395 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 8 00:21:41.318592 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. May 8 00:21:41.318636 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. May 8 00:21:41.323422 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot. May 8 00:21:41.323432 systemd-tmpfiles[1242]: Skipping /boot May 8 00:21:41.330040 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot. May 8 00:21:41.330055 systemd-tmpfiles[1242]: Skipping /boot May 8 00:21:41.365801 zram_generator::config[1269]: No configuration found. May 8 00:21:41.458101 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:21:41.503093 systemd[1]: Reloading finished in 184 ms. May 8 00:21:41.516954 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 8 00:21:41.526198 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 00:21:41.534058 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 8 00:21:41.539266 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 8 00:21:41.541484 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 8 00:21:41.547676 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 8 00:21:41.552220 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 00:21:41.555523 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 8 00:21:41.562316 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 8 00:21:41.565249 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 00:21:41.571022 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:21:41.573254 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:21:41.578472 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:21:41.580530 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:21:41.586950 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 8 00:21:41.590399 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 8 00:21:41.593959 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:21:41.594120 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:21:41.595816 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:21:41.595947 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:21:41.597553 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:21:41.597694 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:21:41.603565 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
May 8 00:21:41.605868 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:21:41.611362 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:21:41.616214 systemd-udevd[1316]: Using default interface naming scheme 'v255'. May 8 00:21:41.617521 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:21:41.619424 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:21:41.621973 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 8 00:21:41.624793 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:21:41.638474 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:21:41.640352 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 8 00:21:41.642031 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 00:21:41.645723 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 8 00:21:41.648043 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:21:41.648241 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:21:41.650168 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:21:41.650312 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:21:41.658396 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 8 00:21:41.668247 systemd[1]: Finished ensure-sysext.service. May 8 00:21:41.671985 augenrules[1368]: No rules May 8 00:21:41.677110 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 8 00:21:41.682333 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 8 00:21:41.682611 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 00:21:41.690923 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:21:41.693924 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 8 00:21:41.698898 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:21:41.701912 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:21:41.703020 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:21:41.708009 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 8 00:21:41.713620 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 8 00:21:41.715263 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 8 00:21:41.715676 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:21:41.716816 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:21:41.717421 systemd-resolved[1310]: Positive Trust Anchors: May 8 00:21:41.717640 systemd-resolved[1310]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 8 00:21:41.717676 systemd-resolved[1310]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 8 00:21:41.728533 systemd-resolved[1310]: Defaulting to hostname 'linux'. May 8 00:21:41.732499 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1351) May 8 00:21:41.734642 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:21:41.734842 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:21:41.737925 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 00:21:41.738060 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 8 00:21:41.739204 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:21:41.740783 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:21:41.741900 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 8 00:21:41.763197 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 8 00:21:41.764951 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 8 00:21:41.775076 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 8 00:21:41.775986 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:21:41.776040 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 8 00:21:41.776174 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 8 00:21:41.777203 systemd[1]: Reached target time-set.target - System Time Set. May 8 00:21:41.793791 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 8 00:21:41.811159 systemd-networkd[1383]: lo: Link UP May 8 00:21:41.811169 systemd-networkd[1383]: lo: Gained carrier May 8 00:21:41.813652 systemd-networkd[1383]: Enumeration completed May 8 00:21:41.815032 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:21:41.816059 systemd[1]: Started systemd-networkd.service - Network Configuration. May 8 00:21:41.817329 systemd-networkd[1383]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 00:21:41.817338 systemd-networkd[1383]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 8 00:21:41.817953 systemd-networkd[1383]: eth0: Link UP May 8 00:21:41.817961 systemd-networkd[1383]: eth0: Gained carrier May 8 00:21:41.817974 systemd-networkd[1383]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 00:21:41.819636 systemd[1]: Reached target network.target - Network. 
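
The resolved trust-anchor entry spanning the two lines above (". IN DS 20326 8 2 e06d44…") is the standard built-in DNSSEC root trust anchor rather than anything machine-specific. Its fields decode per the DS record layout of RFC 4034; the split below is only a reading aid:

    # Root trust anchor as logged by systemd-resolved above.
    ds = ("20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
    key_tag, algorithm, digest_type, digest = ds.split()
    print(key_tag)      # 20326 -> the 2017 root key-signing key (KSK-2017)
    print(algorithm)    # 8     -> RSA/SHA-256
    print(digest_type)  # 2     -> SHA-256
    print(len(digest))  # 64 hex chars, i.e. a 32-byte SHA-256 digest
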
May 8 00:21:41.821536 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 8 00:21:41.827511 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 8 00:21:41.828824 systemd-networkd[1383]: eth0: DHCPv4 address 10.0.0.58/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 8 00:21:41.829797 systemd-timesyncd[1385]: Network configuration changed, trying to establish connection. May 8 00:21:41.829975 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 8 00:21:41.359502 systemd-resolved[1310]: Clock change detected. Flushing caches. May 8 00:21:41.363948 systemd-journald[1103]: Time jumped backwards, rotating. May 8 00:21:41.359565 systemd-timesyncd[1385]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 8 00:21:41.359613 systemd-timesyncd[1385]: Initial clock synchronization to Thu 2025-05-08 00:21:41.359464 UTC. May 8 00:21:41.381743 lvm[1403]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 00:21:41.385679 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:21:41.412170 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 8 00:21:41.413321 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 8 00:21:41.414216 systemd[1]: Reached target sysinit.target - System Initialization. May 8 00:21:41.415091 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 8 00:21:41.416003 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 8 00:21:41.417140 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 8 00:21:41.418025 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 8 00:21:41.419040 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 8 00:21:41.419953 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 8 00:21:41.419985 systemd[1]: Reached target paths.target - Path Units. May 8 00:21:41.420619 systemd[1]: Reached target timers.target - Timer Units. May 8 00:21:41.422145 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 8 00:21:41.424223 systemd[1]: Starting docker.socket - Docker Socket for the API... May 8 00:21:41.434711 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 8 00:21:41.436834 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 8 00:21:41.438110 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 8 00:21:41.439076 systemd[1]: Reached target sockets.target - Socket Units. May 8 00:21:41.439800 systemd[1]: Reached target basic.target - Basic System. May 8 00:21:41.440495 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 8 00:21:41.440523 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 8 00:21:41.441467 systemd[1]: Starting containerd.service - containerd container runtime... May 8 00:21:41.443314 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 8 00:21:41.445861 lvm[1411]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
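
Note that the timestamps in this stretch run backwards: systemd-timesyncd stepped the system clock at its first synchronization, and both resolved ("Clock change detected") and journald ("Time jumped backwards, rotating") record the step. Its size can be read off the adjacent entries (the year is assumed, since the log omits it):

    from datetime import datetime

    def ts(s: str) -> datetime:
        return datetime.strptime("2025 " + s, "%Y %b %d %H:%M:%S.%f")

    # Last entry stamped with the pre-step clock vs. the first post-step
    # entry, both copied from the log above.
    before = ts("May 8 00:21:41.829975")  # Starting lvm2-activation-early...
    after = ts("May 8 00:21:41.359502")   # Clock change detected. Flushing caches.
    print((before - after).total_seconds())  # ~0.47 s stepped backwards
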
May 8 00:21:41.446605 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 8 00:21:41.449118 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 8 00:21:41.450892 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 8 00:21:41.452133 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 8 00:21:41.457706 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 8 00:21:41.460706 jq[1414]: false May 8 00:21:41.460890 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 8 00:21:41.464644 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 8 00:21:41.470322 systemd[1]: Starting systemd-logind.service - User Login Management... May 8 00:21:41.472751 extend-filesystems[1415]: Found loop3 May 8 00:21:41.472751 extend-filesystems[1415]: Found loop4 May 8 00:21:41.474923 extend-filesystems[1415]: Found loop5 May 8 00:21:41.474923 extend-filesystems[1415]: Found vda May 8 00:21:41.474923 extend-filesystems[1415]: Found vda1 May 8 00:21:41.474923 extend-filesystems[1415]: Found vda2 May 8 00:21:41.474923 extend-filesystems[1415]: Found vda3 May 8 00:21:41.474923 extend-filesystems[1415]: Found usr May 8 00:21:41.474923 extend-filesystems[1415]: Found vda4 May 8 00:21:41.474923 extend-filesystems[1415]: Found vda6 May 8 00:21:41.474923 extend-filesystems[1415]: Found vda7 May 8 00:21:41.474923 extend-filesystems[1415]: Found vda9 May 8 00:21:41.474923 extend-filesystems[1415]: Checking size of /dev/vda9 May 8 00:21:41.473112 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 8 00:21:41.481895 dbus-daemon[1413]: [system] SELinux support is enabled May 8 00:21:41.473568 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 8 00:21:41.475101 systemd[1]: Starting update-engine.service - Update Engine... May 8 00:21:41.479716 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 8 00:21:41.481570 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 8 00:21:41.482688 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 8 00:21:41.489158 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 8 00:21:41.489346 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 8 00:21:41.489601 systemd[1]: motdgen.service: Deactivated successfully. May 8 00:21:41.489910 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 8 00:21:41.494042 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 8 00:21:41.494192 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 8 00:21:41.496789 jq[1429]: true May 8 00:21:41.506260 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 8 00:21:41.506361 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
May 8 00:21:41.508063 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 8 00:21:41.508096 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 8 00:21:41.513245 extend-filesystems[1415]: Resized partition /dev/vda9 May 8 00:21:41.519922 extend-filesystems[1448]: resize2fs 1.47.1 (20-May-2024) May 8 00:21:41.515231 (ntainerd)[1437]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 8 00:21:41.524328 jq[1438]: true May 8 00:21:41.532672 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 8 00:21:41.532790 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1351) May 8 00:21:41.541969 tar[1434]: linux-arm64/helm May 8 00:21:41.545988 update_engine[1428]: I20250508 00:21:41.545686 1428 main.cc:92] Flatcar Update Engine starting May 8 00:21:41.552163 systemd[1]: Started update-engine.service - Update Engine. May 8 00:21:41.553544 update_engine[1428]: I20250508 00:21:41.552145 1428 update_check_scheduler.cc:74] Next update check in 10m15s May 8 00:21:41.562938 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 8 00:21:41.576764 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 8 00:21:41.594370 systemd-logind[1426]: Watching system buttons on /dev/input/event0 (Power Button) May 8 00:21:41.609706 systemd-logind[1426]: New seat seat0. May 8 00:21:41.610553 extend-filesystems[1448]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 8 00:21:41.610553 extend-filesystems[1448]: old_desc_blocks = 1, new_desc_blocks = 1 May 8 00:21:41.610553 extend-filesystems[1448]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 8 00:21:41.618962 extend-filesystems[1415]: Resized filesystem in /dev/vda9 May 8 00:21:41.611729 systemd[1]: Started systemd-logind.service - User Login Management. May 8 00:21:41.612842 systemd[1]: extend-filesystems.service: Deactivated successfully. May 8 00:21:41.613011 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 8 00:21:41.627276 bash[1467]: Updated "/home/core/.ssh/authorized_keys" May 8 00:21:41.629934 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 8 00:21:41.631492 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 8 00:21:41.654580 locksmithd[1458]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 8 00:21:41.770145 containerd[1437]: time="2025-05-08T00:21:41.770059556Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 8 00:21:41.793588 containerd[1437]: time="2025-05-08T00:21:41.793454436Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 8 00:21:41.795004 containerd[1437]: time="2025-05-08T00:21:41.794968196Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 8 00:21:41.795004 containerd[1437]: time="2025-05-08T00:21:41.795001236Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 8 00:21:41.795072 containerd[1437]: time="2025-05-08T00:21:41.795016396Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 8 00:21:41.795203 containerd[1437]: time="2025-05-08T00:21:41.795175396Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 8 00:21:41.795203 containerd[1437]: time="2025-05-08T00:21:41.795198476Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 8 00:21:41.795270 containerd[1437]: time="2025-05-08T00:21:41.795255436Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:21:41.795292 containerd[1437]: time="2025-05-08T00:21:41.795271036Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 8 00:21:41.795446 containerd[1437]: time="2025-05-08T00:21:41.795426196Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:21:41.795474 containerd[1437]: time="2025-05-08T00:21:41.795446876Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 8 00:21:41.795474 containerd[1437]: time="2025-05-08T00:21:41.795464796Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:21:41.795505 containerd[1437]: time="2025-05-08T00:21:41.795474116Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 8 00:21:41.795572 containerd[1437]: time="2025-05-08T00:21:41.795556876Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 8 00:21:41.795812 containerd[1437]: time="2025-05-08T00:21:41.795791516Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 8 00:21:41.795938 containerd[1437]: time="2025-05-08T00:21:41.795893636Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:21:41.795938 containerd[1437]: time="2025-05-08T00:21:41.795910636Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 8 00:21:41.796009 containerd[1437]: time="2025-05-08T00:21:41.795994116Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1
May 8 00:21:41.796051 containerd[1437]: time="2025-05-08T00:21:41.796039836Z" level=info msg="metadata content store policy set" policy=shared
May 8 00:21:41.799627 containerd[1437]: time="2025-05-08T00:21:41.799593596Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 8 00:21:41.799691 containerd[1437]: time="2025-05-08T00:21:41.799639476Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 8 00:21:41.799691 containerd[1437]: time="2025-05-08T00:21:41.799660996Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 8 00:21:41.799691 containerd[1437]: time="2025-05-08T00:21:41.799676676Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 8 00:21:41.799691 containerd[1437]: time="2025-05-08T00:21:41.799689916Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 8 00:21:41.799869 containerd[1437]: time="2025-05-08T00:21:41.799848956Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 8 00:21:41.800218 containerd[1437]: time="2025-05-08T00:21:41.800186676Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 8 00:21:41.801047 containerd[1437]: time="2025-05-08T00:21:41.800675196Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 8 00:21:41.801047 containerd[1437]: time="2025-05-08T00:21:41.800711156Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 8 00:21:41.801047 containerd[1437]: time="2025-05-08T00:21:41.800763516Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 8 00:21:41.801047 containerd[1437]: time="2025-05-08T00:21:41.800787236Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 8 00:21:41.801047 containerd[1437]: time="2025-05-08T00:21:41.800806516Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 8 00:21:41.801047 containerd[1437]: time="2025-05-08T00:21:41.800825916Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 8 00:21:41.801047 containerd[1437]: time="2025-05-08T00:21:41.800841796Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 8 00:21:41.801047 containerd[1437]: time="2025-05-08T00:21:41.800861356Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 8 00:21:41.801047 containerd[1437]: time="2025-05-08T00:21:41.800878916Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 8 00:21:41.801047 containerd[1437]: time="2025-05-08T00:21:41.800897236Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 8 00:21:41.801047 containerd[1437]: time="2025-05-08T00:21:41.800912716Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 8 00:21:41.801047 containerd[1437]: time="2025-05-08T00:21:41.800938276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 8 00:21:41.801047 containerd[1437]: time="2025-05-08T00:21:41.800957836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 8 00:21:41.801047 containerd[1437]: time="2025-05-08T00:21:41.800974996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 8 00:21:41.801315 containerd[1437]: time="2025-05-08T00:21:41.800990036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 8 00:21:41.801315 containerd[1437]: time="2025-05-08T00:21:41.801013356Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 8 00:21:41.801315 containerd[1437]: time="2025-05-08T00:21:41.801031756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 8 00:21:41.801682 containerd[1437]: time="2025-05-08T00:21:41.801640876Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 8 00:21:41.801742 containerd[1437]: time="2025-05-08T00:21:41.801687396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 8 00:21:41.801742 containerd[1437]: time="2025-05-08T00:21:41.801707716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 8 00:21:41.801778 containerd[1437]: time="2025-05-08T00:21:41.801739916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 8 00:21:41.801778 containerd[1437]: time="2025-05-08T00:21:41.801754676Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 8 00:21:41.801825 containerd[1437]: time="2025-05-08T00:21:41.801774316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 8 00:21:41.801825 containerd[1437]: time="2025-05-08T00:21:41.801790796Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 8 00:21:41.801825 containerd[1437]: time="2025-05-08T00:21:41.801811156Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
May 8 00:21:41.801871 containerd[1437]: time="2025-05-08T00:21:41.801840076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
May 8 00:21:41.801871 containerd[1437]: time="2025-05-08T00:21:41.801856756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 8 00:21:41.801943 containerd[1437]: time="2025-05-08T00:21:41.801871556Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 8 00:21:41.802046 containerd[1437]: time="2025-05-08T00:21:41.801985076Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 8 00:21:41.802046 containerd[1437]: time="2025-05-08T00:21:41.802010916Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
May 8 00:21:41.802046 containerd[1437]: time="2025-05-08T00:21:41.802023596Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 8 00:21:41.802046 containerd[1437]: time="2025-05-08T00:21:41.802040156Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
May 8 00:21:41.802170 containerd[1437]: time="2025-05-08T00:21:41.802053916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 8 00:21:41.802170 containerd[1437]: time="2025-05-08T00:21:41.802069836Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 8 00:21:41.802170 containerd[1437]: time="2025-05-08T00:21:41.802080236Z" level=info msg="NRI interface is disabled by configuration."
May 8 00:21:41.802170 containerd[1437]: time="2025-05-08T00:21:41.802093716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 8 00:21:41.802552 containerd[1437]: time="2025-05-08T00:21:41.802444676Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 8 00:21:41.802552 containerd[1437]: time="2025-05-08T00:21:41.802522036Z" level=info msg="Connect containerd service"
May 8 00:21:41.802698 containerd[1437]: time="2025-05-08T00:21:41.802622996Z" level=info msg="using legacy CRI server"
May 8 00:21:41.802698 containerd[1437]: time="2025-05-08T00:21:41.802630876Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 8 00:21:41.802897 containerd[1437]: time="2025-05-08T00:21:41.802768556Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 8 00:21:41.804185 containerd[1437]: time="2025-05-08T00:21:41.804154676Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 8 00:21:41.804961 containerd[1437]: time="2025-05-08T00:21:41.804511876Z" level=info msg="Start subscribing containerd event"
May 8 00:21:41.804961 containerd[1437]: time="2025-05-08T00:21:41.804561236Z" level=info msg="Start recovering state"
May 8 00:21:41.804961 containerd[1437]: time="2025-05-08T00:21:41.804629516Z" level=info msg="Start event monitor"
May 8 00:21:41.804961 containerd[1437]: time="2025-05-08T00:21:41.804639836Z" level=info msg="Start snapshots syncer"
May 8 00:21:41.804961 containerd[1437]: time="2025-05-08T00:21:41.804660356Z" level=info msg="Start cni network conf syncer for default"
May 8 00:21:41.804961 containerd[1437]: time="2025-05-08T00:21:41.804667436Z" level=info msg="Start streaming server"
May 8 00:21:41.805368 containerd[1437]: time="2025-05-08T00:21:41.805347076Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 8 00:21:41.805466 containerd[1437]: time="2025-05-08T00:21:41.805452836Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 8 00:21:41.805660 systemd[1]: Started containerd.service - containerd container runtime.
May 8 00:21:41.807190 containerd[1437]: time="2025-05-08T00:21:41.807167716Z" level=info msg="containerd successfully booted in 0.039635s"
May 8 00:21:41.928774 tar[1434]: linux-arm64/LICENSE
May 8 00:21:41.928853 tar[1434]: linux-arm64/README.md
May 8 00:21:41.938961 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 8 00:21:42.000941 sshd_keygen[1435]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 8 00:21:42.024869 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 8 00:21:42.038454 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 8 00:21:42.042973 systemd[1]: issuegen.service: Deactivated successfully.
May 8 00:21:42.043141 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 8 00:21:42.046349 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 8 00:21:42.057763 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 8 00:21:42.060486 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 8 00:21:42.062566 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
May 8 00:21:42.063898 systemd[1]: Reached target getty.target - Login Prompts.
May 8 00:21:42.684939 systemd-networkd[1383]: eth0: Gained IPv6LL
May 8 00:21:42.687423 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 8 00:21:42.689192 systemd[1]: Reached target network-online.target - Network is Online.
May 8 00:21:42.701976 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
May 8 00:21:42.704188 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:21:42.706156 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 8 00:21:42.719985 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 8 00:21:42.720168 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
May 8 00:21:42.721708 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 8 00:21:42.726758 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 8 00:21:43.187831 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:21:43.189471 systemd[1]: Reached target multi-user.target - Multi-User System.
May 8 00:21:43.191014 (kubelet)[1526]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 8 00:21:43.195651 systemd[1]: Startup finished in 545ms (kernel) + 4.539s (initrd) + 3.455s (userspace) = 8.541s.
May 8 00:21:43.654235 kubelet[1526]: E0508 00:21:43.654187 1526 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 8 00:21:43.656883 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 8 00:21:43.657037 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 8 00:21:47.998517 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 8 00:21:47.999602 systemd[1]: Started sshd@0-10.0.0.58:22-10.0.0.1:40350.service - OpenSSH per-connection server daemon (10.0.0.1:40350).
May 8 00:21:48.044124 sshd[1540]: Accepted publickey for core from 10.0.0.1 port 40350 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE
May 8 00:21:48.045706 sshd[1540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:21:48.059886 systemd-logind[1426]: New session 1 of user core.
May 8 00:21:48.060121 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 8 00:21:48.070001 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 8 00:21:48.078210 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 8 00:21:48.080154 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 8 00:21:48.085962 (systemd)[1544]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 8 00:21:48.156256 systemd[1544]: Queued start job for default target default.target.
May 8 00:21:48.167656 systemd[1544]: Created slice app.slice - User Application Slice.
May 8 00:21:48.167685 systemd[1544]: Reached target paths.target - Paths.
May 8 00:21:48.167698 systemd[1544]: Reached target timers.target - Timers.
May 8 00:21:48.168930 systemd[1544]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 8 00:21:48.178361 systemd[1544]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 8 00:21:48.178424 systemd[1544]: Reached target sockets.target - Sockets.
May 8 00:21:48.178436 systemd[1544]: Reached target basic.target - Basic System.
May 8 00:21:48.178472 systemd[1544]: Reached target default.target - Main User Target.
May 8 00:21:48.178497 systemd[1544]: Startup finished in 87ms.
May 8 00:21:48.178692 systemd[1]: Started user@500.service - User Manager for UID 500.
May 8 00:21:48.179974 systemd[1]: Started session-1.scope - Session 1 of User core.
May 8 00:21:48.238201 systemd[1]: Started sshd@1-10.0.0.58:22-10.0.0.1:40360.service - OpenSSH per-connection server daemon (10.0.0.1:40360).
May 8 00:21:48.269524 sshd[1555]: Accepted publickey for core from 10.0.0.1 port 40360 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE
May 8 00:21:48.270661 sshd[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:21:48.274752 systemd-logind[1426]: New session 2 of user core.
May 8 00:21:48.286864 systemd[1]: Started session-2.scope - Session 2 of User core.
May 8 00:21:48.339331 sshd[1555]: pam_unix(sshd:session): session closed for user core
May 8 00:21:48.354102 systemd[1]: sshd@1-10.0.0.58:22-10.0.0.1:40360.service: Deactivated successfully.
May 8 00:21:48.356965 systemd[1]: session-2.scope: Deactivated successfully.
May 8 00:21:48.358143 systemd-logind[1426]: Session 2 logged out. Waiting for processes to exit.
May 8 00:21:48.359270 systemd[1]: Started sshd@2-10.0.0.58:22-10.0.0.1:40374.service - OpenSSH per-connection server daemon (10.0.0.1:40374).
May 8 00:21:48.360095 systemd-logind[1426]: Removed session 2.
May 8 00:21:48.390221 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 40374 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE
May 8 00:21:48.391365 sshd[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:21:48.395202 systemd-logind[1426]: New session 3 of user core.
May 8 00:21:48.410878 systemd[1]: Started session-3.scope - Session 3 of User core.
May 8 00:21:48.458433 sshd[1562]: pam_unix(sshd:session): session closed for user core
May 8 00:21:48.477136 systemd[1]: sshd@2-10.0.0.58:22-10.0.0.1:40374.service: Deactivated successfully.
May 8 00:21:48.478451 systemd[1]: session-3.scope: Deactivated successfully.
May 8 00:21:48.479682 systemd-logind[1426]: Session 3 logged out. Waiting for processes to exit.
May 8 00:21:48.485998 systemd[1]: Started sshd@3-10.0.0.58:22-10.0.0.1:40380.service - OpenSSH per-connection server daemon (10.0.0.1:40380).
May 8 00:21:48.487104 systemd-logind[1426]: Removed session 3.
May 8 00:21:48.514798 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 40380 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE
May 8 00:21:48.516186 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:21:48.519739 systemd-logind[1426]: New session 4 of user core.
May 8 00:21:48.534896 systemd[1]: Started session-4.scope - Session 4 of User core.
May 8 00:21:48.588835 sshd[1569]: pam_unix(sshd:session): session closed for user core
May 8 00:21:48.599433 systemd[1]: sshd@3-10.0.0.58:22-10.0.0.1:40380.service: Deactivated successfully.
May 8 00:21:48.601216 systemd[1]: session-4.scope: Deactivated successfully.
May 8 00:21:48.603900 systemd-logind[1426]: Session 4 logged out. Waiting for processes to exit.
May 8 00:21:48.618053 systemd[1]: Started sshd@4-10.0.0.58:22-10.0.0.1:40390.service - OpenSSH per-connection server daemon (10.0.0.1:40390).
May 8 00:21:48.618818 systemd-logind[1426]: Removed session 4.
May 8 00:21:48.650173 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 40390 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE
May 8 00:21:48.651648 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:21:48.655512 systemd-logind[1426]: New session 5 of user core.
May 8 00:21:48.666877 systemd[1]: Started session-5.scope - Session 5 of User core.
May 8 00:21:48.727329 sudo[1579]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 8 00:21:48.727599 sudo[1579]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 8 00:21:48.740464 sudo[1579]: pam_unix(sudo:session): session closed for user root
May 8 00:21:48.742237 sshd[1576]: pam_unix(sshd:session): session closed for user core
May 8 00:21:48.755260 systemd[1]: sshd@4-10.0.0.58:22-10.0.0.1:40390.service: Deactivated successfully.
May 8 00:21:48.756837 systemd[1]: session-5.scope: Deactivated successfully.
May 8 00:21:48.759288 systemd-logind[1426]: Session 5 logged out. Waiting for processes to exit.
May 8 00:21:48.760536 systemd[1]: Started sshd@5-10.0.0.58:22-10.0.0.1:40394.service - OpenSSH per-connection server daemon (10.0.0.1:40394).
May 8 00:21:48.761297 systemd-logind[1426]: Removed session 5.
May 8 00:21:48.793102 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 40394 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE
May 8 00:21:48.794512 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:21:48.798427 systemd-logind[1426]: New session 6 of user core.
May 8 00:21:48.812892 systemd[1]: Started session-6.scope - Session 6 of User core.
May 8 00:21:48.865351 sudo[1588]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 8 00:21:48.865637 sudo[1588]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 8 00:21:48.868594 sudo[1588]: pam_unix(sudo:session): session closed for user root
May 8 00:21:48.872961 sudo[1587]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
May 8 00:21:48.873221 sudo[1587]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 8 00:21:48.891025 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
May 8 00:21:48.892120 auditctl[1591]: No rules
May 8 00:21:48.892949 systemd[1]: audit-rules.service: Deactivated successfully.
May 8 00:21:48.893808 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
May 8 00:21:48.895521 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 8 00:21:48.917505 augenrules[1609]: No rules
May 8 00:21:48.918774 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 8 00:21:48.919810 sudo[1587]: pam_unix(sudo:session): session closed for user root
May 8 00:21:48.921387 sshd[1584]: pam_unix(sshd:session): session closed for user core
May 8 00:21:48.928137 systemd[1]: sshd@5-10.0.0.58:22-10.0.0.1:40394.service: Deactivated successfully.
May 8 00:21:48.930473 systemd[1]: session-6.scope: Deactivated successfully.
May 8 00:21:48.931621 systemd-logind[1426]: Session 6 logged out. Waiting for processes to exit.
May 8 00:21:48.940044 systemd[1]: Started sshd@6-10.0.0.58:22-10.0.0.1:40408.service - OpenSSH per-connection server daemon (10.0.0.1:40408).
May 8 00:21:48.940876 systemd-logind[1426]: Removed session 6.
May 8 00:21:48.967485 sshd[1617]: Accepted publickey for core from 10.0.0.1 port 40408 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE
May 8 00:21:48.968618 sshd[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:21:48.972627 systemd-logind[1426]: New session 7 of user core.
May 8 00:21:48.981852 systemd[1]: Started session-7.scope - Session 7 of User core.
May 8 00:21:49.031797 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 8 00:21:49.032327 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 8 00:21:49.334926 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 8 00:21:49.335065 (dockerd)[1638]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 8 00:21:49.595193 dockerd[1638]: time="2025-05-08T00:21:49.595067916Z" level=info msg="Starting up"
May 8 00:21:49.732822 dockerd[1638]: time="2025-05-08T00:21:49.732767556Z" level=info msg="Loading containers: start."
May 8 00:21:49.822751 kernel: Initializing XFRM netlink socket
May 8 00:21:49.880162 systemd-networkd[1383]: docker0: Link UP
May 8 00:21:49.893956 dockerd[1638]: time="2025-05-08T00:21:49.893912716Z" level=info msg="Loading containers: done."
May 8 00:21:49.904947 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck676933084-merged.mount: Deactivated successfully.
May 8 00:21:49.905930 dockerd[1638]: time="2025-05-08T00:21:49.905888156Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 8 00:21:49.906378 dockerd[1638]: time="2025-05-08T00:21:49.906079636Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
May 8 00:21:49.906378 dockerd[1638]: time="2025-05-08T00:21:49.906193916Z" level=info msg="Daemon has completed initialization"
May 8 00:21:49.932530 dockerd[1638]: time="2025-05-08T00:21:49.932398716Z" level=info msg="API listen on /run/docker.sock"
May 8 00:21:49.932646 systemd[1]: Started docker.service - Docker Application Container Engine.
May 8 00:21:50.660808 containerd[1437]: time="2025-05-08T00:21:50.660765956Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\""
May 8 00:21:51.360026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3135567959.mount: Deactivated successfully.
May 8 00:21:52.376419 containerd[1437]: time="2025-05-08T00:21:52.376360516Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:21:52.376764 containerd[1437]: time="2025-05-08T00:21:52.376744516Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794152"
May 8 00:21:52.377750 containerd[1437]: time="2025-05-08T00:21:52.377641316Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:21:52.380523 containerd[1437]: time="2025-05-08T00:21:52.380494036Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:21:52.382676 containerd[1437]: time="2025-05-08T00:21:52.382637916Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 1.7218328s"
May 8 00:21:52.382676 containerd[1437]: time="2025-05-08T00:21:52.382675956Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\""
May 8 00:21:52.400673 containerd[1437]: time="2025-05-08T00:21:52.400623156Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
May 8 00:21:53.903899 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 8 00:21:53.912922 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:21:53.921464 containerd[1437]: time="2025-05-08T00:21:53.921404476Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:21:53.937156 containerd[1437]: time="2025-05-08T00:21:53.937095036Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855552"
May 8 00:21:54.005512 containerd[1437]: time="2025-05-08T00:21:54.005462516Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:21:54.010671 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:21:54.014485 (kubelet)[1869]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 8 00:21:54.023775 containerd[1437]: time="2025-05-08T00:21:54.023678796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:21:54.024990 containerd[1437]: time="2025-05-08T00:21:54.024820556Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 1.62415308s"
May 8 00:21:54.024990 containerd[1437]: time="2025-05-08T00:21:54.024861996Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\""
May 8 00:21:54.052283 containerd[1437]: time="2025-05-08T00:21:54.052201676Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
May 8 00:21:54.067121 kubelet[1869]: E0508 00:21:54.067023 1869 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 8 00:21:54.069946 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 8 00:21:54.070079 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 8 00:21:54.940222 containerd[1437]: time="2025-05-08T00:21:54.940172796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:21:54.941218 containerd[1437]: time="2025-05-08T00:21:54.941016156Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263947"
May 8 00:21:54.942149 containerd[1437]: time="2025-05-08T00:21:54.942115516Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:21:54.945146 containerd[1437]: time="2025-05-08T00:21:54.945118676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:21:54.946191 containerd[1437]: time="2025-05-08T00:21:54.946150556Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 893.91484ms"
May 8 00:21:54.946252 containerd[1437]: time="2025-05-08T00:21:54.946190676Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\""
May 8 00:21:54.965469 containerd[1437]: time="2025-05-08T00:21:54.965440436Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
May 8 00:21:55.835533 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2346597540.mount: Deactivated successfully.
May 8 00:21:56.045093 containerd[1437]: time="2025-05-08T00:21:56.044887036Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:21:56.048474 containerd[1437]: time="2025-05-08T00:21:56.048123236Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775707"
May 8 00:21:56.050585 containerd[1437]: time="2025-05-08T00:21:56.049681316Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:21:56.052020 containerd[1437]: time="2025-05-08T00:21:56.051991356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:21:56.052826 containerd[1437]: time="2025-05-08T00:21:56.052796556Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.0873222s"
May 8 00:21:56.052927 containerd[1437]: time="2025-05-08T00:21:56.052909116Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\""
May 8 00:21:56.070808 containerd[1437]: time="2025-05-08T00:21:56.070785276Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 8 00:21:56.596134 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount68642341.mount: Deactivated successfully.
May 8 00:21:57.296616 containerd[1437]: time="2025-05-08T00:21:57.295585676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:21:57.297882 containerd[1437]: time="2025-05-08T00:21:57.297853396Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
May 8 00:21:57.298918 containerd[1437]: time="2025-05-08T00:21:57.298881196Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:21:57.302621 containerd[1437]: time="2025-05-08T00:21:57.302591476Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:21:57.303655 containerd[1437]: time="2025-05-08T00:21:57.303629076Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.232726s"
May 8 00:21:57.303742 containerd[1437]: time="2025-05-08T00:21:57.303715436Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
May 8 00:21:57.322068 containerd[1437]: time="2025-05-08T00:21:57.322014916Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
May 8 00:21:57.740679 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3635309083.mount: Deactivated successfully.
May 8 00:21:57.745324 containerd[1437]: time="2025-05-08T00:21:57.745281436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:21:57.746007 containerd[1437]: time="2025-05-08T00:21:57.745826876Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
May 8 00:21:57.746774 containerd[1437]: time="2025-05-08T00:21:57.746739636Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:21:57.749022 containerd[1437]: time="2025-05-08T00:21:57.748989676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:21:57.750238 containerd[1437]: time="2025-05-08T00:21:57.750206756Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 428.01392ms"
May 8 00:21:57.750830 containerd[1437]: time="2025-05-08T00:21:57.750670196Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
May 8 00:21:57.768494 containerd[1437]: time="2025-05-08T00:21:57.768295276Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
May 8 00:21:58.273775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2529541035.mount: Deactivated successfully.
May 8 00:21:59.659512 containerd[1437]: time="2025-05-08T00:21:59.659467676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:21:59.661346 containerd[1437]: time="2025-05-08T00:21:59.661091836Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474"
May 8 00:21:59.664183 containerd[1437]: time="2025-05-08T00:21:59.662571596Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:21:59.665443 containerd[1437]: time="2025-05-08T00:21:59.665392396Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:21:59.666787 containerd[1437]: time="2025-05-08T00:21:59.666605156Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 1.89827616s"
May 8 00:21:59.666787 containerd[1437]: time="2025-05-08T00:21:59.666638276Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
May 8 00:22:04.153868 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 8 00:22:04.163958 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:22:04.256453 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:22:04.259779 (kubelet)[2096]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 8 00:22:04.301337 kubelet[2096]: E0508 00:22:04.301276 2096 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 8 00:22:04.304023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 8 00:22:04.304209 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 8 00:22:04.799671 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:22:04.812997 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:22:04.834444 systemd[1]: Reloading requested from client PID 2113 ('systemctl') (unit session-7.scope)...
May 8 00:22:04.834601 systemd[1]: Reloading...
May 8 00:22:04.905751 zram_generator::config[2155]: No configuration found.
May 8 00:22:05.047024 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 00:22:05.111425 systemd[1]: Reloading finished in 276 ms.
May 8 00:22:05.152880 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 8 00:22:05.152969 systemd[1]: kubelet.service: Failed with result 'signal'.
May 8 00:22:05.153818 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:22:05.155396 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:22:05.241633 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:22:05.245385 (kubelet)[2198]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 8 00:22:05.290154 kubelet[2198]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 8 00:22:05.290154 kubelet[2198]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 8 00:22:05.290154 kubelet[2198]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 8 00:22:05.291120 kubelet[2198]: I0508 00:22:05.291064 2198 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 8 00:22:05.907942 kubelet[2198]: I0508 00:22:05.907908 2198 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
May 8 00:22:05.908905 kubelet[2198]: I0508 00:22:05.908079 2198 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 8 00:22:05.908905 kubelet[2198]: I0508 00:22:05.908399 2198 server.go:927] "Client rotation is on, will bootstrap in background"
May 8 00:22:05.928667 kubelet[2198]: I0508 00:22:05.928616 2198 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 8 00:22:05.928774 kubelet[2198]: E0508 00:22:05.928642 2198 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.58:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.58:6443: connect: connection refused
May 8 00:22:05.936639 kubelet[2198]: I0508 00:22:05.936584 2198 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 8 00:22:05.938666 kubelet[2198]: I0508 00:22:05.938625 2198 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 8 00:22:05.938834 kubelet[2198]: I0508 00:22:05.938662 2198 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
May 8 00:22:05.938919 kubelet[2198]: I0508 00:22:05.938901 2198 topology_manager.go:138] "Creating topology manager with none policy"
May 8 00:22:05.938919 kubelet[2198]: I0508 00:22:05.938911 2198 container_manager_linux.go:301] "Creating device plugin manager"
May 8 00:22:05.939177 kubelet[2198]: I0508 00:22:05.939152 2198 state_mem.go:36] "Initialized new in-memory state store"
May 8 00:22:05.941926 kubelet[2198]: I0508 00:22:05.939995 2198 kubelet.go:400] "Attempting to sync node with API server"
May 8 00:22:05.941926 kubelet[2198]: I0508 00:22:05.940014 2198 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
May 8 00:22:05.941926 kubelet[2198]: I0508 00:22:05.940310 2198 kubelet.go:312] "Adding apiserver pod source"
May 8 00:22:05.941926 kubelet[2198]: I0508 00:22:05.940564 2198 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 8 00:22:05.944221 kubelet[2198]: I0508 00:22:05.942361 2198 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
May 8 00:22:05.944221 kubelet[2198]: I0508 00:22:05.942850 2198 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 8 00:22:05.944221 kubelet[2198]: W0508 00:22:05.942966 2198 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 8 00:22:05.944221 kubelet[2198]: I0508 00:22:05.943792 2198 server.go:1264] "Started kubelet"
May 8 00:22:05.945596 kubelet[2198]: W0508 00:22:05.945548 2198 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.58:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused
May 8 00:22:05.945671 kubelet[2198]: E0508 00:22:05.945604 2198 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.58:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused
May 8 00:22:05.945789 kubelet[2198]: W0508 00:22:05.945759 2198 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.58:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused
May 8 00:22:05.945869 kubelet[2198]: E0508 00:22:05.945860 2198 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.58:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused
May 8 00:22:05.945910 kubelet[2198]: I0508 00:22:05.945798 2198 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 8 00:22:05.947011 kubelet[2198]: I0508 00:22:05.946991 2198 server.go:455] "Adding debug handlers to kubelet server"
May 8 00:22:05.948617 kubelet[2198]: I0508 00:22:05.948343 2198 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 8 00:22:05.948867 kubelet[2198]: I0508 00:22:05.948753 2198 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 8 00:22:05.949103 kubelet[2198]: I0508 00:22:05.949078 2198 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 8 00:22:05.955314 kubelet[2198]: E0508 00:22:05.949244 2198 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.58:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.58:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183d65678ef0549c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-08 00:22:05.943772316 +0000 UTC m=+0.695222241,LastTimestamp:2025-05-08 00:22:05.943772316 +0000 UTC m=+0.695222241,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 8 00:22:05.955802 kubelet[2198]: I0508 00:22:05.955781 2198 volume_manager.go:291] "Starting Kubelet Volume Manager"
May 8 00:22:05.955958 kubelet[2198]: I0508 00:22:05.955898 2198 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 8 00:22:05.956175 kubelet[2198]: I0508 00:22:05.956149 2198 reconciler.go:26] "Reconciler: start to sync state"
May 8 00:22:05.956775 kubelet[2198]: W0508 00:22:05.956711 2198 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.58:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused
May 8 00:22:05.956843 kubelet[2198]: E0508 00:22:05.956780 2198 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.58:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused
May 8 00:22:05.957197 kubelet[2198]: I0508 00:22:05.957170 2198 factory.go:221] Registration of the systemd container factory successfully
May 8 00:22:05.957293 kubelet[2198]: I0508 00:22:05.957272 2198 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 8 00:22:05.958654 kubelet[2198]: E0508 00:22:05.958606 2198 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.58:6443: connect: connection refused" interval="200ms"
May 8 00:22:05.958827 kubelet[2198]: I0508 00:22:05.958803 2198 factory.go:221] Registration of the containerd container factory successfully
May 8 00:22:05.961308 kubelet[2198]: E0508 00:22:05.961282 2198 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 8 00:22:05.965387 kubelet[2198]: I0508 00:22:05.965352 2198 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 8 00:22:05.966345 kubelet[2198]: I0508 00:22:05.966326 2198 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 8 00:22:05.966502 kubelet[2198]: I0508 00:22:05.966485 2198 status_manager.go:217] "Starting to sync pod status with apiserver"
May 8 00:22:05.966536 kubelet[2198]: I0508 00:22:05.966513 2198 kubelet.go:2337] "Starting kubelet main sync loop"
May 8 00:22:05.966574 kubelet[2198]: E0508 00:22:05.966555 2198 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 8 00:22:05.967163 kubelet[2198]: W0508 00:22:05.967036 2198 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.58:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused
May 8 00:22:05.967163 kubelet[2198]: E0508 00:22:05.967082 2198 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.58:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused
May 8 00:22:05.971672 kubelet[2198]: I0508 00:22:05.971655 2198 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 8 00:22:05.971781 kubelet[2198]: I0508 00:22:05.971770 2198 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 8 00:22:05.971841 kubelet[2198]: I0508 00:22:05.971833 2198 state_mem.go:36] "Initialized new in-memory state store"
May 8 00:22:06.029192 kubelet[2198]: I0508 00:22:06.029158 2198 policy_none.go:49] "None policy: Start"
May 8 00:22:06.030065 kubelet[2198]: I0508 00:22:06.030038 2198 memory_manager.go:170] "Starting memorymanager" policy="None"
May 8 00:22:06.030121 kubelet[2198]: I0508 00:22:06.030071 2198 state_mem.go:35] "Initializing new in-memory state store"
May 8 00:22:06.036301 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 8 00:22:06.052077 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 8 00:22:06.055069 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 8 00:22:06.057026 kubelet[2198]: I0508 00:22:06.056754 2198 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
May 8 00:22:06.057105 kubelet[2198]: E0508 00:22:06.057086 2198 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.58:6443/api/v1/nodes\": dial tcp 10.0.0.58:6443: connect: connection refused" node="localhost"
May 8 00:22:06.067567 kubelet[2198]: E0508 00:22:06.067533 2198 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 8 00:22:06.069781 kubelet[2198]: I0508 00:22:06.069504 2198 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 8 00:22:06.069781 kubelet[2198]: I0508 00:22:06.069702 2198 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 8 00:22:06.069879 kubelet[2198]: I0508 00:22:06.069830 2198 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 8 00:22:06.071855 kubelet[2198]: E0508 00:22:06.071824 2198 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
May 8 00:22:06.160819 kubelet[2198]: E0508 00:22:06.159870 2198 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.58:6443: connect: connection refused" interval="400ms"
May 8 00:22:06.258428 kubelet[2198]: I0508 00:22:06.258377 2198 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
May 8 00:22:06.258782 kubelet[2198]: E0508 00:22:06.258743 2198 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.58:6443/api/v1/nodes\": dial tcp 10.0.0.58:6443: connect: connection refused" node="localhost"
May 8 00:22:06.268065 kubelet[2198]: I0508 00:22:06.267987 2198 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost"
May 8 00:22:06.269360 kubelet[2198]: I0508 00:22:06.269321 2198 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost"
May 8 00:22:06.270516 kubelet[2198]: I0508 00:22:06.270476 2198 topology_manager.go:215] "Topology Admit Handler" podUID="04dc6c3105e9ad34677f29b8912a104f" podNamespace="kube-system" podName="kube-apiserver-localhost"
May 8 00:22:06.280950 systemd[1]: Created slice kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice - libcontainer container kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice.
May 8 00:22:06.302651 systemd[1]: Created slice kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice - libcontainer container kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice.
May 8 00:22:06.320393 systemd[1]: Created slice kubepods-burstable-pod04dc6c3105e9ad34677f29b8912a104f.slice - libcontainer container kubepods-burstable-pod04dc6c3105e9ad34677f29b8912a104f.slice.
May 8 00:22:06.358026 kubelet[2198]: I0508 00:22:06.357984 2198 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/04dc6c3105e9ad34677f29b8912a104f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"04dc6c3105e9ad34677f29b8912a104f\") " pod="kube-system/kube-apiserver-localhost"
May 8 00:22:06.358026 kubelet[2198]: I0508 00:22:06.358025 2198 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/04dc6c3105e9ad34677f29b8912a104f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"04dc6c3105e9ad34677f29b8912a104f\") " pod="kube-system/kube-apiserver-localhost"
May 8 00:22:06.358355 kubelet[2198]: I0508 00:22:06.358049 2198 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 8 00:22:06.358355 kubelet[2198]: I0508 00:22:06.358064 2198 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 8 00:22:06.358355 kubelet[2198]: I0508 00:22:06.358103 2198 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 8 00:22:06.358355 kubelet[2198]: I0508 00:22:06.358134 2198 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 8 00:22:06.358355 kubelet[2198]: I0508 00:22:06.358168 2198 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 8 00:22:06.358461 kubelet[2198]: I0508 00:22:06.358193 2198 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost"
May 8 00:22:06.358461 kubelet[2198]: I0508 00:22:06.358207 2198 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/04dc6c3105e9ad34677f29b8912a104f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"04dc6c3105e9ad34677f29b8912a104f\") " pod="kube-system/kube-apiserver-localhost"
May 8 00:22:06.562383 kubelet[2198]: E0508 00:22:06.562216 2198 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.58:6443: connect: connection refused" interval="800ms"
May 8 00:22:06.601508 kubelet[2198]: E0508 00:22:06.601462 2198 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:22:06.602188 containerd[1437]: time="2025-05-08T00:22:06.602128396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}"
May 8 00:22:06.618937 kubelet[2198]: E0508 00:22:06.618891 2198 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:22:06.619372 containerd[1437]: time="2025-05-08T00:22:06.619337516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}"
May 8 00:22:06.623184 kubelet[2198]: E0508 00:22:06.622941 2198 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:22:06.623457 containerd[1437]: time="2025-05-08T00:22:06.623425356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:04dc6c3105e9ad34677f29b8912a104f,Namespace:kube-system,Attempt:0,}"
May 8 00:22:06.660865 kubelet[2198]: I0508 00:22:06.660806 2198 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
May 8 00:22:06.661245 kubelet[2198]: E0508 00:22:06.661133 2198 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.58:6443/api/v1/nodes\": dial tcp 10.0.0.58:6443: connect: connection refused" node="localhost"
May 8 00:22:07.002746 kubelet[2198]: W0508 00:22:07.002671 2198 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.58:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused
May 8 00:22:07.002746 kubelet[2198]: E0508 00:22:07.002756 2198 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.58:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused
May 8 00:22:07.026157 kubelet[2198]: W0508 00:22:07.026099 2198 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.58:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused
May 8 00:22:07.026157 kubelet[2198]: E0508 00:22:07.026156 2198 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.58:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused
May 8 00:22:07.033450 kubelet[2198]: W0508 00:22:07.033424 2198 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.58:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused
May 8 00:22:07.033450 kubelet[2198]: E0508 00:22:07.033452 2198 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.58:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused
May 8 00:22:07.090665 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4247467903.mount: Deactivated successfully.
May 8 00:22:07.097121 containerd[1437]: time="2025-05-08T00:22:07.097074556Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 8 00:22:07.097631 containerd[1437]: time="2025-05-08T00:22:07.097598196Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
May 8 00:22:07.098253 containerd[1437]: time="2025-05-08T00:22:07.098222716Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 8 00:22:07.099591 containerd[1437]: time="2025-05-08T00:22:07.099150476Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 8 00:22:07.099591 containerd[1437]: time="2025-05-08T00:22:07.099280356Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 8 00:22:07.099889 containerd[1437]: time="2025-05-08T00:22:07.099861636Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 8 00:22:07.100559 containerd[1437]: time="2025-05-08T00:22:07.100519516Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 8 00:22:07.104776 containerd[1437]: time="2025-05-08T00:22:07.104735076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 8 00:22:07.105704 containerd[1437]: time="2025-05-08T00:22:07.105675156Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 503.46344ms"
May 8 00:22:07.108603 containerd[1437]: time="2025-05-08T00:22:07.108562116Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 484.97592ms"
May 8 00:22:07.109835 containerd[1437]: time="2025-05-08T00:22:07.109601596Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with
image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 490.18784ms" May 8 00:22:07.204900 kubelet[2198]: W0508 00:22:07.204836 2198 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.58:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused May 8 00:22:07.204900 kubelet[2198]: E0508 00:22:07.204878 2198 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.58:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused May 8 00:22:07.280485 containerd[1437]: time="2025-05-08T00:22:07.279790916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:22:07.280485 containerd[1437]: time="2025-05-08T00:22:07.279893196Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:22:07.280639 containerd[1437]: time="2025-05-08T00:22:07.279947356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:22:07.280639 containerd[1437]: time="2025-05-08T00:22:07.280063036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:22:07.282237 containerd[1437]: time="2025-05-08T00:22:07.282136116Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:22:07.282237 containerd[1437]: time="2025-05-08T00:22:07.282199756Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:22:07.282237 containerd[1437]: time="2025-05-08T00:22:07.282214716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:22:07.282460 containerd[1437]: time="2025-05-08T00:22:07.282304116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:22:07.282714 containerd[1437]: time="2025-05-08T00:22:07.282643396Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:22:07.282714 containerd[1437]: time="2025-05-08T00:22:07.282695116Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:22:07.282714 containerd[1437]: time="2025-05-08T00:22:07.282710876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:22:07.282867 containerd[1437]: time="2025-05-08T00:22:07.282806196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:22:07.309909 systemd[1]: Started cri-containerd-3a15c73e9058e141590b16b2dd045d0fea2993045f0a0a8b00485ca1d4aac85b.scope - libcontainer container 3a15c73e9058e141590b16b2dd045d0fea2993045f0a0a8b00485ca1d4aac85b. May 8 00:22:07.311883 systemd[1]: Started cri-containerd-583466c49e4f2a2a4ad74214833339cd22a50db9985a1908acd7cb5dd2f38245.scope - libcontainer container 583466c49e4f2a2a4ad74214833339cd22a50db9985a1908acd7cb5dd2f38245. May 8 00:22:07.314803 systemd[1]: Started cri-containerd-edfc8302298de807b0842e12daf34e6616cd19dc08a69607d04644a2b379151f.scope - libcontainer container edfc8302298de807b0842e12daf34e6616cd19dc08a69607d04644a2b379151f. May 8 00:22:07.347518 containerd[1437]: time="2025-05-08T00:22:07.347470996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:04dc6c3105e9ad34677f29b8912a104f,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a15c73e9058e141590b16b2dd045d0fea2993045f0a0a8b00485ca1d4aac85b\"" May 8 00:22:07.348877 kubelet[2198]: E0508 00:22:07.348832 2198 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:07.351765 containerd[1437]: time="2025-05-08T00:22:07.351502036Z" level=info msg="CreateContainer within sandbox \"3a15c73e9058e141590b16b2dd045d0fea2993045f0a0a8b00485ca1d4aac85b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 8 00:22:07.355017 containerd[1437]: time="2025-05-08T00:22:07.354984676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"583466c49e4f2a2a4ad74214833339cd22a50db9985a1908acd7cb5dd2f38245\"" May 8 00:22:07.356556 kubelet[2198]: E0508 00:22:07.356520 2198 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:07.359173 containerd[1437]: time="2025-05-08T00:22:07.359139956Z" level=info msg="CreateContainer within sandbox \"583466c49e4f2a2a4ad74214833339cd22a50db9985a1908acd7cb5dd2f38245\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 8 00:22:07.360748 containerd[1437]: time="2025-05-08T00:22:07.360664076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"edfc8302298de807b0842e12daf34e6616cd19dc08a69607d04644a2b379151f\"" May 8 00:22:07.361325 kubelet[2198]: E0508 00:22:07.361299 2198 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:07.363592 kubelet[2198]: E0508 00:22:07.363557 2198 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.58:6443: connect: connection refused" interval="1.6s" May 8 00:22:07.363952 containerd[1437]: time="2025-05-08T00:22:07.363838996Z" level=info msg="CreateContainer within sandbox \"edfc8302298de807b0842e12daf34e6616cd19dc08a69607d04644a2b379151f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 8 00:22:07.368509 containerd[1437]: 
time="2025-05-08T00:22:07.368461716Z" level=info msg="CreateContainer within sandbox \"3a15c73e9058e141590b16b2dd045d0fea2993045f0a0a8b00485ca1d4aac85b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8189fa7f178af5ea5c434220ebe62ab045ada7c54a311570c9f3aee53c890144\"" May 8 00:22:07.373081 containerd[1437]: time="2025-05-08T00:22:07.371988436Z" level=info msg="StartContainer for \"8189fa7f178af5ea5c434220ebe62ab045ada7c54a311570c9f3aee53c890144\"" May 8 00:22:07.379680 containerd[1437]: time="2025-05-08T00:22:07.379645476Z" level=info msg="CreateContainer within sandbox \"583466c49e4f2a2a4ad74214833339cd22a50db9985a1908acd7cb5dd2f38245\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a3f3e1f31b1997edcdbe00f441aba06774735f7f20a8e9546770ab685246665a\"" May 8 00:22:07.381522 containerd[1437]: time="2025-05-08T00:22:07.381483876Z" level=info msg="StartContainer for \"a3f3e1f31b1997edcdbe00f441aba06774735f7f20a8e9546770ab685246665a\"" May 8 00:22:07.386183 containerd[1437]: time="2025-05-08T00:22:07.386117796Z" level=info msg="CreateContainer within sandbox \"edfc8302298de807b0842e12daf34e6616cd19dc08a69607d04644a2b379151f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"daf025511946828943220b3f211625e6d12c4124448923fde242064aca259377\"" May 8 00:22:07.387017 containerd[1437]: time="2025-05-08T00:22:07.386659836Z" level=info msg="StartContainer for \"daf025511946828943220b3f211625e6d12c4124448923fde242064aca259377\"" May 8 00:22:07.398556 systemd[1]: Started cri-containerd-8189fa7f178af5ea5c434220ebe62ab045ada7c54a311570c9f3aee53c890144.scope - libcontainer container 8189fa7f178af5ea5c434220ebe62ab045ada7c54a311570c9f3aee53c890144. May 8 00:22:07.411925 systemd[1]: Started cri-containerd-a3f3e1f31b1997edcdbe00f441aba06774735f7f20a8e9546770ab685246665a.scope - libcontainer container a3f3e1f31b1997edcdbe00f441aba06774735f7f20a8e9546770ab685246665a. May 8 00:22:07.415767 systemd[1]: Started cri-containerd-daf025511946828943220b3f211625e6d12c4124448923fde242064aca259377.scope - libcontainer container daf025511946828943220b3f211625e6d12c4124448923fde242064aca259377. 
May 8 00:22:07.462127 containerd[1437]: time="2025-05-08T00:22:07.461931276Z" level=info msg="StartContainer for \"a3f3e1f31b1997edcdbe00f441aba06774735f7f20a8e9546770ab685246665a\" returns successfully" May 8 00:22:07.462127 containerd[1437]: time="2025-05-08T00:22:07.462051556Z" level=info msg="StartContainer for \"8189fa7f178af5ea5c434220ebe62ab045ada7c54a311570c9f3aee53c890144\" returns successfully" May 8 00:22:07.465563 kubelet[2198]: I0508 00:22:07.463421 2198 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:22:07.465563 kubelet[2198]: E0508 00:22:07.463896 2198 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.58:6443/api/v1/nodes\": dial tcp 10.0.0.58:6443: connect: connection refused" node="localhost" May 8 00:22:07.492521 containerd[1437]: time="2025-05-08T00:22:07.483311596Z" level=info msg="StartContainer for \"daf025511946828943220b3f211625e6d12c4124448923fde242064aca259377\" returns successfully" May 8 00:22:07.976084 kubelet[2198]: E0508 00:22:07.975818 2198 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:07.981412 kubelet[2198]: E0508 00:22:07.981385 2198 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:07.983517 kubelet[2198]: E0508 00:22:07.983494 2198 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:08.985103 kubelet[2198]: E0508 00:22:08.985035 2198 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:08.985400 kubelet[2198]: E0508 00:22:08.985225 2198 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:09.065640 kubelet[2198]: I0508 00:22:09.065364 2198 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:22:09.478382 kubelet[2198]: E0508 00:22:09.478326 2198 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 8 00:22:09.569785 kubelet[2198]: I0508 00:22:09.569755 2198 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 8 00:22:09.945983 kubelet[2198]: I0508 00:22:09.945906 2198 apiserver.go:52] "Watching apiserver" May 8 00:22:09.956359 kubelet[2198]: I0508 00:22:09.956330 2198 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 00:22:11.703843 systemd[1]: Reloading requested from client PID 2476 ('systemctl') (unit session-7.scope)... May 8 00:22:11.703858 systemd[1]: Reloading... May 8 00:22:11.725304 kubelet[2198]: E0508 00:22:11.725272 2198 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:11.759592 zram_generator::config[2518]: No configuration found. 
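Every kubelet call in this window (the node-lease ensure, the register-node POST, the Service/Node/CSIDriver list-watches) fails with "dial tcp 10.0.0.58:6443: connect: connection refused" and is simply retried, because the apiserver kubelet is talking to is the very static pod it is still starting; the lease controller's retry interval doubles from 800ms to 1.6s across the two failures logged. A minimal stdlib sketch of that probe-and-retry shape against the apiserver's /healthz endpoint follows; the doubling cap and the skipped certificate check are assumptions for illustration, not kubelet's actual client configuration:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // 10.0.0.58:6443 is the apiserver address from the log. Verification
        // is skipped only because this sketch carries no CA bundle.
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        interval := 800 * time.Millisecond // first retry interval in the log
        for {
            resp, err := client.Get("https://10.0.0.58:6443/healthz")
            if err == nil {
                resp.Body.Close()
                fmt.Println("apiserver reachable:", resp.Status)
                return
            }
            fmt.Printf("not ready (%v); retrying in %s\n", err, interval)
            time.Sleep(interval)
            if interval < 10*time.Second {
                interval *= 2 // 800ms -> 1.6s, matching the logged intervals
            }
        }
    }
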
May 8 00:22:11.855497 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:22:11.935924 systemd[1]: Reloading finished in 231 ms. May 8 00:22:11.969549 kubelet[2198]: I0508 00:22:11.969217 2198 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:22:11.969406 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:22:11.983157 systemd[1]: kubelet.service: Deactivated successfully. May 8 00:22:11.983393 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:22:11.983449 systemd[1]: kubelet.service: Consumed 1.055s CPU time, 113.9M memory peak, 0B memory swap peak. May 8 00:22:11.995071 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:22:12.088448 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:22:12.093510 (kubelet)[2557]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 00:22:12.140870 kubelet[2557]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:22:12.140870 kubelet[2557]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 8 00:22:12.140870 kubelet[2557]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:22:12.142230 kubelet[2557]: I0508 00:22:12.141853 2557 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:22:12.146164 kubelet[2557]: I0508 00:22:12.146128 2557 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 8 00:22:12.146164 kubelet[2557]: I0508 00:22:12.146156 2557 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:22:12.146356 kubelet[2557]: I0508 00:22:12.146340 2557 server.go:927] "Client rotation is on, will bootstrap in background" May 8 00:22:12.148941 kubelet[2557]: I0508 00:22:12.148769 2557 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 8 00:22:12.151233 kubelet[2557]: I0508 00:22:12.151208 2557 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:22:12.156844 kubelet[2557]: I0508 00:22:12.156820 2557 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 8 00:22:12.157059 kubelet[2557]: I0508 00:22:12.157021 2557 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:22:12.157220 kubelet[2557]: I0508 00:22:12.157053 2557 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 8 00:22:12.157293 kubelet[2557]: I0508 00:22:12.157223 2557 topology_manager.go:138] "Creating topology manager with none policy" May 8 00:22:12.157293 kubelet[2557]: I0508 00:22:12.157233 2557 container_manager_linux.go:301] "Creating device plugin manager" May 8 00:22:12.157293 kubelet[2557]: I0508 00:22:12.157265 2557 state_mem.go:36] "Initialized new in-memory state store" May 8 00:22:12.157383 kubelet[2557]: I0508 00:22:12.157368 2557 kubelet.go:400] "Attempting to sync node with API server" May 8 00:22:12.157383 kubelet[2557]: I0508 00:22:12.157380 2557 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:22:12.157473 kubelet[2557]: I0508 00:22:12.157406 2557 kubelet.go:312] "Adding apiserver pod source" May 8 00:22:12.157473 kubelet[2557]: I0508 00:22:12.157422 2557 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:22:12.163480 kubelet[2557]: I0508 00:22:12.161871 2557 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 8 00:22:12.163480 kubelet[2557]: I0508 00:22:12.162039 2557 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:22:12.163480 kubelet[2557]: I0508 00:22:12.162407 2557 server.go:1264] "Started kubelet" May 8 00:22:12.164678 kubelet[2557]: I0508 00:22:12.164095 2557 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:22:12.168817 kubelet[2557]: E0508 00:22:12.167834 2557 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:22:12.168817 kubelet[2557]: I0508 00:22:12.167907 2557 volume_manager.go:291] "Starting Kubelet Volume Manager" May 8 00:22:12.168817 kubelet[2557]: I0508 00:22:12.168007 2557 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 00:22:12.168817 kubelet[2557]: I0508 00:22:12.168151 2557 reconciler.go:26] "Reconciler: start to sync state" May 8 00:22:12.171644 kubelet[2557]: I0508 00:22:12.169709 2557 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:22:12.171927 kubelet[2557]: I0508 00:22:12.171753 2557 server.go:455] "Adding debug handlers to kubelet server" May 8 00:22:12.174750 kubelet[2557]: I0508 00:22:12.173927 2557 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:22:12.174750 kubelet[2557]: I0508 00:22:12.174144 2557 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:22:12.188511 kubelet[2557]: I0508 00:22:12.188289 2557 factory.go:221] Registration of the systemd container factory successfully May 8 00:22:12.188511 kubelet[2557]: I0508 00:22:12.188382 2557 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:22:12.189723 kubelet[2557]: I0508 00:22:12.188757 2557 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:22:12.189847 kubelet[2557]: I0508 00:22:12.189816 2557 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 8 00:22:12.189847 kubelet[2557]: I0508 00:22:12.189842 2557 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 00:22:12.189901 kubelet[2557]: I0508 00:22:12.189860 2557 kubelet.go:2337] "Starting kubelet main sync loop" May 8 00:22:12.189930 kubelet[2557]: E0508 00:22:12.189904 2557 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:22:12.191237 kubelet[2557]: I0508 00:22:12.191217 2557 factory.go:221] Registration of the containerd container factory successfully May 8 00:22:12.222774 kubelet[2557]: I0508 00:22:12.222673 2557 cpu_manager.go:214] "Starting CPU manager" policy="none" May 8 00:22:12.222774 kubelet[2557]: I0508 00:22:12.222691 2557 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 00:22:12.222774 kubelet[2557]: I0508 00:22:12.222711 2557 state_mem.go:36] "Initialized new in-memory state store" May 8 00:22:12.223365 kubelet[2557]: I0508 00:22:12.222946 2557 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 8 00:22:12.223365 kubelet[2557]: I0508 00:22:12.222974 2557 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 8 00:22:12.223365 kubelet[2557]: I0508 00:22:12.223005 2557 policy_none.go:49] "None policy: Start" May 8 00:22:12.224687 kubelet[2557]: I0508 00:22:12.224654 2557 memory_manager.go:170] "Starting memorymanager" policy="None" May 8 00:22:12.224687 kubelet[2557]: I0508 00:22:12.224680 2557 state_mem.go:35] "Initializing new in-memory state store" May 8 00:22:12.224833 kubelet[2557]: I0508 00:22:12.224818 2557 state_mem.go:75] "Updated machine memory state" May 8 00:22:12.232708 kubelet[2557]: I0508 00:22:12.232675 2557 manager.go:479] "Failed to read data from checkpoint" 
checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:22:12.232948 kubelet[2557]: I0508 00:22:12.232855 2557 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:22:12.234163 kubelet[2557]: I0508 00:22:12.233799 2557 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:22:12.271984 kubelet[2557]: I0508 00:22:12.271949 2557 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:22:12.279567 kubelet[2557]: I0508 00:22:12.279519 2557 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 8 00:22:12.279683 kubelet[2557]: I0508 00:22:12.279604 2557 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 8 00:22:12.290222 kubelet[2557]: I0508 00:22:12.290185 2557 topology_manager.go:215] "Topology Admit Handler" podUID="04dc6c3105e9ad34677f29b8912a104f" podNamespace="kube-system" podName="kube-apiserver-localhost" May 8 00:22:12.290317 kubelet[2557]: I0508 00:22:12.290280 2557 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 8 00:22:12.290346 kubelet[2557]: I0508 00:22:12.290318 2557 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 8 00:22:12.296161 kubelet[2557]: E0508 00:22:12.296094 2557 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 8 00:22:12.469971 kubelet[2557]: I0508 00:22:12.469922 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/04dc6c3105e9ad34677f29b8912a104f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"04dc6c3105e9ad34677f29b8912a104f\") " pod="kube-system/kube-apiserver-localhost" May 8 00:22:12.469971 kubelet[2557]: I0508 00:22:12.469968 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:22:12.470121 kubelet[2557]: I0508 00:22:12.469992 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:22:12.470121 kubelet[2557]: I0508 00:22:12.470033 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 8 00:22:12.470121 kubelet[2557]: I0508 00:22:12.470063 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/04dc6c3105e9ad34677f29b8912a104f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"04dc6c3105e9ad34677f29b8912a104f\") " pod="kube-system/kube-apiserver-localhost" May 8 00:22:12.470121 kubelet[2557]: I0508 00:22:12.470081 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/04dc6c3105e9ad34677f29b8912a104f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"04dc6c3105e9ad34677f29b8912a104f\") " pod="kube-system/kube-apiserver-localhost" May 8 00:22:12.470121 kubelet[2557]: I0508 00:22:12.470102 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:22:12.470226 kubelet[2557]: I0508 00:22:12.470119 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:22:12.470226 kubelet[2557]: I0508 00:22:12.470135 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:22:12.597634 kubelet[2557]: E0508 00:22:12.596604 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:12.597634 kubelet[2557]: E0508 00:22:12.596905 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:12.598861 kubelet[2557]: E0508 00:22:12.598839 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:13.158657 kubelet[2557]: I0508 00:22:13.158618 2557 apiserver.go:52] "Watching apiserver" May 8 00:22:13.168696 kubelet[2557]: I0508 00:22:13.168652 2557 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 00:22:13.213607 kubelet[2557]: E0508 00:22:13.212029 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:13.213607 kubelet[2557]: E0508 00:22:13.212348 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:13.218874 kubelet[2557]: E0508 00:22:13.218298 2557 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 8 00:22:13.218874 kubelet[2557]: E0508 00:22:13.218688 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:13.242070 kubelet[2557]: I0508 00:22:13.242012 2557 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.241995476 podStartE2EDuration="1.241995476s" podCreationTimestamp="2025-05-08 00:22:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:22:13.240748076 +0000 UTC m=+1.143611961" watchObservedRunningTime="2025-05-08 00:22:13.241995476 +0000 UTC m=+1.144859361" May 8 00:22:13.272667 kubelet[2557]: I0508 00:22:13.272613 2557 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.272596436 podStartE2EDuration="2.272596436s" podCreationTimestamp="2025-05-08 00:22:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:22:13.272389916 +0000 UTC m=+1.175253801" watchObservedRunningTime="2025-05-08 00:22:13.272596436 +0000 UTC m=+1.175460321" May 8 00:22:13.272938 kubelet[2557]: I0508 00:22:13.272834 2557 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.272828316 podStartE2EDuration="1.272828316s" podCreationTimestamp="2025-05-08 00:22:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:22:13.255849956 +0000 UTC m=+1.158713841" watchObservedRunningTime="2025-05-08 00:22:13.272828316 +0000 UTC m=+1.175692201" May 8 00:22:14.214238 kubelet[2557]: E0508 00:22:14.214000 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:14.214238 kubelet[2557]: E0508 00:22:14.214140 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:14.991083 kubelet[2557]: E0508 00:22:14.991052 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:15.553100 kubelet[2557]: E0508 00:22:15.553060 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:16.831292 sudo[1620]: pam_unix(sudo:session): session closed for user root May 8 00:22:16.847341 sshd[1617]: pam_unix(sshd:session): session closed for user core May 8 00:22:16.850813 systemd[1]: sshd@6-10.0.0.58:22-10.0.0.1:40408.service: Deactivated successfully. May 8 00:22:16.852606 systemd[1]: session-7.scope: Deactivated successfully. May 8 00:22:16.852813 systemd[1]: session-7.scope: Consumed 7.287s CPU time, 191.7M memory peak, 0B memory swap peak. May 8 00:22:16.853383 systemd-logind[1426]: Session 7 logged out. Waiting for processes to exit. May 8 00:22:16.854482 systemd-logind[1426]: Removed session 7. 
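The dns.go:153 error that repeats throughout this log is kubelet noticing that the node's resolv.conf lists more nameservers than a glibc resolver can consume: glibc honors at most three, so kubelet truncates the list for pod resolv.conf files and warns with the applied line ("1.1.1.1 1.0.0.1 8.8.8.8"). A small sketch of that truncation under the three-server assumption; kubelet's real handling sits behind its --resolv-conf flag and also enforces search-domain limits:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    const maxNameservers = 3 // glibc's limit, which kubelet warns about

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if err := sc.Err(); err != nil {
            panic(err)
        }
        if len(servers) > maxNameservers {
            fmt.Printf("nameserver limits exceeded; applied line: %s\n",
                strings.Join(servers[:maxNameservers], " "))
        } else {
            fmt.Println("within limits:", strings.Join(servers, " "))
        }
    }
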
May 8 00:22:23.242263 kubelet[2557]: E0508 00:22:23.242172 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:25.026063 kubelet[2557]: E0508 00:22:25.025795 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:25.102322 kubelet[2557]: I0508 00:22:25.102102 2557 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 8 00:22:25.107680 containerd[1437]: time="2025-05-08T00:22:25.107619690Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 8 00:22:25.108439 kubelet[2557]: I0508 00:22:25.108174 2557 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 8 00:22:25.561762 kubelet[2557]: E0508 00:22:25.561034 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:25.910174 kubelet[2557]: I0508 00:22:25.910131 2557 topology_manager.go:215] "Topology Admit Handler" podUID="ccf2d13e-6083-4392-92fe-9cb16be576d9" podNamespace="kube-system" podName="kube-proxy-9nslb" May 8 00:22:25.919139 systemd[1]: Created slice kubepods-besteffort-podccf2d13e_6083_4392_92fe_9cb16be576d9.slice - libcontainer container kubepods-besteffort-podccf2d13e_6083_4392_92fe_9cb16be576d9.slice. May 8 00:22:26.055163 kubelet[2557]: I0508 00:22:26.055046 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ccf2d13e-6083-4392-92fe-9cb16be576d9-kube-proxy\") pod \"kube-proxy-9nslb\" (UID: \"ccf2d13e-6083-4392-92fe-9cb16be576d9\") " pod="kube-system/kube-proxy-9nslb" May 8 00:22:26.055163 kubelet[2557]: I0508 00:22:26.055091 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ccf2d13e-6083-4392-92fe-9cb16be576d9-lib-modules\") pod \"kube-proxy-9nslb\" (UID: \"ccf2d13e-6083-4392-92fe-9cb16be576d9\") " pod="kube-system/kube-proxy-9nslb" May 8 00:22:26.055163 kubelet[2557]: I0508 00:22:26.055106 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ccf2d13e-6083-4392-92fe-9cb16be576d9-xtables-lock\") pod \"kube-proxy-9nslb\" (UID: \"ccf2d13e-6083-4392-92fe-9cb16be576d9\") " pod="kube-system/kube-proxy-9nslb" May 8 00:22:26.055163 kubelet[2557]: I0508 00:22:26.055127 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9qfn\" (UniqueName: \"kubernetes.io/projected/ccf2d13e-6083-4392-92fe-9cb16be576d9-kube-api-access-b9qfn\") pod \"kube-proxy-9nslb\" (UID: \"ccf2d13e-6083-4392-92fe-9cb16be576d9\") " pod="kube-system/kube-proxy-9nslb" May 8 00:22:26.216292 kubelet[2557]: I0508 00:22:26.215518 2557 topology_manager.go:215] "Topology Admit Handler" podUID="a9936c16-e62e-49bd-a83e-5e4e2be022f2" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-7ltmj" May 8 00:22:26.226820 systemd[1]: Created slice kubepods-besteffort-poda9936c16_e62e_49bd_a83e_5e4e2be022f2.slice - libcontainer container 
kubepods-besteffort-poda9936c16_e62e_49bd_a83e_5e4e2be022f2.slice. May 8 00:22:26.230109 kubelet[2557]: E0508 00:22:26.230070 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:26.232259 containerd[1437]: time="2025-05-08T00:22:26.231949245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9nslb,Uid:ccf2d13e-6083-4392-92fe-9cb16be576d9,Namespace:kube-system,Attempt:0,}" May 8 00:22:26.310173 containerd[1437]: time="2025-05-08T00:22:26.309228036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:22:26.310173 containerd[1437]: time="2025-05-08T00:22:26.309288395Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:22:26.310173 containerd[1437]: time="2025-05-08T00:22:26.309305355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:22:26.310173 containerd[1437]: time="2025-05-08T00:22:26.309394555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:22:26.332951 systemd[1]: Started cri-containerd-775e60758b6cbd31a7572dac27dc0bd153bce3a3d269ea670a03640678f24660.scope - libcontainer container 775e60758b6cbd31a7572dac27dc0bd153bce3a3d269ea670a03640678f24660. May 8 00:22:26.352194 containerd[1437]: time="2025-05-08T00:22:26.352154168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9nslb,Uid:ccf2d13e-6083-4392-92fe-9cb16be576d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"775e60758b6cbd31a7572dac27dc0bd153bce3a3d269ea670a03640678f24660\"" May 8 00:22:26.357747 kubelet[2557]: E0508 00:22:26.353920 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:26.357747 kubelet[2557]: I0508 00:22:26.356521 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5hr5\" (UniqueName: \"kubernetes.io/projected/a9936c16-e62e-49bd-a83e-5e4e2be022f2-kube-api-access-v5hr5\") pod \"tigera-operator-797db67f8-7ltmj\" (UID: \"a9936c16-e62e-49bd-a83e-5e4e2be022f2\") " pod="tigera-operator/tigera-operator-797db67f8-7ltmj" May 8 00:22:26.357747 kubelet[2557]: I0508 00:22:26.356572 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a9936c16-e62e-49bd-a83e-5e4e2be022f2-var-lib-calico\") pod \"tigera-operator-797db67f8-7ltmj\" (UID: \"a9936c16-e62e-49bd-a83e-5e4e2be022f2\") " pod="tigera-operator/tigera-operator-797db67f8-7ltmj" May 8 00:22:26.361648 containerd[1437]: time="2025-05-08T00:22:26.361600242Z" level=info msg="CreateContainer within sandbox \"775e60758b6cbd31a7572dac27dc0bd153bce3a3d269ea670a03640678f24660\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 8 00:22:26.375555 containerd[1437]: time="2025-05-08T00:22:26.375514753Z" level=info msg="CreateContainer within sandbox \"775e60758b6cbd31a7572dac27dc0bd153bce3a3d269ea670a03640678f24660\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"17fb46001dd4ca9a59c0bce9de22f892c23c1ea9cdb7681fa1c19c4a991fbc06\"" May 8 00:22:26.377089 containerd[1437]: time="2025-05-08T00:22:26.376054232Z" level=info msg="StartContainer for \"17fb46001dd4ca9a59c0bce9de22f892c23c1ea9cdb7681fa1c19c4a991fbc06\"" May 8 00:22:26.401914 systemd[1]: Started cri-containerd-17fb46001dd4ca9a59c0bce9de22f892c23c1ea9cdb7681fa1c19c4a991fbc06.scope - libcontainer container 17fb46001dd4ca9a59c0bce9de22f892c23c1ea9cdb7681fa1c19c4a991fbc06. May 8 00:22:26.426345 containerd[1437]: time="2025-05-08T00:22:26.426304560Z" level=info msg="StartContainer for \"17fb46001dd4ca9a59c0bce9de22f892c23c1ea9cdb7681fa1c19c4a991fbc06\" returns successfully" May 8 00:22:26.535936 containerd[1437]: time="2025-05-08T00:22:26.535833729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-7ltmj,Uid:a9936c16-e62e-49bd-a83e-5e4e2be022f2,Namespace:tigera-operator,Attempt:0,}" May 8 00:22:26.561200 containerd[1437]: time="2025-05-08T00:22:26.561082993Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:22:26.561421 containerd[1437]: time="2025-05-08T00:22:26.561142913Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:22:26.561421 containerd[1437]: time="2025-05-08T00:22:26.561221153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:22:26.561421 containerd[1437]: time="2025-05-08T00:22:26.561334633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:22:26.581890 systemd[1]: Started cri-containerd-b9a13ba82f7207a226d4922371e737de2ff2c363ce1683fbbac68d5bd46b7472.scope - libcontainer container b9a13ba82f7207a226d4922371e737de2ff2c363ce1683fbbac68d5bd46b7472. May 8 00:22:26.608859 containerd[1437]: time="2025-05-08T00:22:26.608822122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-7ltmj,Uid:a9936c16-e62e-49bd-a83e-5e4e2be022f2,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b9a13ba82f7207a226d4922371e737de2ff2c363ce1683fbbac68d5bd46b7472\"" May 8 00:22:26.611197 containerd[1437]: time="2025-05-08T00:22:26.611168801Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 8 00:22:26.654300 update_engine[1428]: I20250508 00:22:26.654232 1428 update_attempter.cc:509] Updating boot flags... 
May 8 00:22:26.684854 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2803) May 8 00:22:26.744924 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2805) May 8 00:22:27.246494 kubelet[2557]: E0508 00:22:27.246416 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:27.258281 kubelet[2557]: I0508 00:22:27.258238 2557 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9nslb" podStartSLOduration=2.258211873 podStartE2EDuration="2.258211873s" podCreationTimestamp="2025-05-08 00:22:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:22:27.257664394 +0000 UTC m=+15.160528319" watchObservedRunningTime="2025-05-08 00:22:27.258211873 +0000 UTC m=+15.161075758" May 8 00:22:27.995494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2809414395.mount: Deactivated successfully. May 8 00:22:30.804275 containerd[1437]: time="2025-05-08T00:22:30.804232284Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:22:30.805260 containerd[1437]: time="2025-05-08T00:22:30.805219843Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084" May 8 00:22:30.806495 containerd[1437]: time="2025-05-08T00:22:30.806456282Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:22:30.809157 containerd[1437]: time="2025-05-08T00:22:30.808920841Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:22:30.809773 containerd[1437]: time="2025-05-08T00:22:30.809746641Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 4.19854304s" May 8 00:22:30.809881 containerd[1437]: time="2025-05-08T00:22:30.809811001Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\"" May 8 00:22:30.819075 containerd[1437]: time="2025-05-08T00:22:30.819038836Z" level=info msg="CreateContainer within sandbox \"b9a13ba82f7207a226d4922371e737de2ff2c363ce1683fbbac68d5bd46b7472\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 8 00:22:30.829555 containerd[1437]: time="2025-05-08T00:22:30.829454831Z" level=info msg="CreateContainer within sandbox \"b9a13ba82f7207a226d4922371e737de2ff2c363ce1683fbbac68d5bd46b7472\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"778086656edf4af767f15f117db6afd504ec0c79bf05deb179239bccc8ad640a\"" May 8 00:22:30.830014 containerd[1437]: time="2025-05-08T00:22:30.829989431Z" level=info msg="StartContainer for \"778086656edf4af767f15f117db6afd504ec0c79bf05deb179239bccc8ad640a\"" May 8 
00:22:30.850293 systemd[1]: Started cri-containerd-778086656edf4af767f15f117db6afd504ec0c79bf05deb179239bccc8ad640a.scope - libcontainer container 778086656edf4af767f15f117db6afd504ec0c79bf05deb179239bccc8ad640a. May 8 00:22:30.878303 containerd[1437]: time="2025-05-08T00:22:30.878160647Z" level=info msg="StartContainer for \"778086656edf4af767f15f117db6afd504ec0c79bf05deb179239bccc8ad640a\" returns successfully" May 8 00:22:31.266878 kubelet[2557]: I0508 00:22:31.266421 2557 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-7ltmj" podStartSLOduration=1.058569905 podStartE2EDuration="5.266406381s" podCreationTimestamp="2025-05-08 00:22:26 +0000 UTC" firstStartedPulling="2025-05-08 00:22:26.610002361 +0000 UTC m=+14.512866246" lastFinishedPulling="2025-05-08 00:22:30.817838837 +0000 UTC m=+18.720702722" observedRunningTime="2025-05-08 00:22:31.264232102 +0000 UTC m=+19.167095987" watchObservedRunningTime="2025-05-08 00:22:31.266406381 +0000 UTC m=+19.169270226" May 8 00:22:34.432699 kubelet[2557]: I0508 00:22:34.432652 2557 topology_manager.go:215] "Topology Admit Handler" podUID="87e11d19-60c0-471b-a769-22e659c188e1" podNamespace="calico-system" podName="calico-typha-595b7f69d4-hjjqj" May 8 00:22:34.444826 systemd[1]: Created slice kubepods-besteffort-pod87e11d19_60c0_471b_a769_22e659c188e1.slice - libcontainer container kubepods-besteffort-pod87e11d19_60c0_471b_a769_22e659c188e1.slice. May 8 00:22:34.497715 kubelet[2557]: I0508 00:22:34.497654 2557 topology_manager.go:215] "Topology Admit Handler" podUID="05fd5258-1ec6-47e7-b39b-08317a9205ec" podNamespace="calico-system" podName="calico-node-lzmfw" May 8 00:22:34.505632 systemd[1]: Created slice kubepods-besteffort-pod05fd5258_1ec6_47e7_b39b_08317a9205ec.slice - libcontainer container kubepods-besteffort-pod05fd5258_1ec6_47e7_b39b_08317a9205ec.slice. 
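The operator pull that just completed reports its own effective throughput: 19,323,084 bytes read in 4.19854304s is about 4.6 MB/s from quay.io, which is why the PullImage issued at 00:22:26 only resolves at 00:22:30 (by comparison, the 269 kB pause image earlier cleared in roughly half a second). A quick check of that arithmetic with the exact values containerd logged:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const bytesRead = 19323084                    // "bytes read=19323084"
        dur, err := time.ParseDuration("4.19854304s") // "in 4.19854304s"
        if err != nil {
            panic(err)
        }
        bps := float64(bytesRead) / dur.Seconds()
        fmt.Printf("%.2f MB/s (%.2f MiB/s)\n", bps/1e6, bps/(1<<20))
        // -> about 4.60 MB/s (4.39 MiB/s)
    }
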
May 8 00:22:34.611274 kubelet[2557]: I0508 00:22:34.609783 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/05fd5258-1ec6-47e7-b39b-08317a9205ec-policysync\") pod \"calico-node-lzmfw\" (UID: \"05fd5258-1ec6-47e7-b39b-08317a9205ec\") " pod="calico-system/calico-node-lzmfw" May 8 00:22:34.611274 kubelet[2557]: I0508 00:22:34.609868 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/05fd5258-1ec6-47e7-b39b-08317a9205ec-var-lib-calico\") pod \"calico-node-lzmfw\" (UID: \"05fd5258-1ec6-47e7-b39b-08317a9205ec\") " pod="calico-system/calico-node-lzmfw" May 8 00:22:34.611274 kubelet[2557]: I0508 00:22:34.609890 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvxts\" (UniqueName: \"kubernetes.io/projected/87e11d19-60c0-471b-a769-22e659c188e1-kube-api-access-fvxts\") pod \"calico-typha-595b7f69d4-hjjqj\" (UID: \"87e11d19-60c0-471b-a769-22e659c188e1\") " pod="calico-system/calico-typha-595b7f69d4-hjjqj" May 8 00:22:34.611274 kubelet[2557]: I0508 00:22:34.609910 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/05fd5258-1ec6-47e7-b39b-08317a9205ec-lib-modules\") pod \"calico-node-lzmfw\" (UID: \"05fd5258-1ec6-47e7-b39b-08317a9205ec\") " pod="calico-system/calico-node-lzmfw" May 8 00:22:34.611274 kubelet[2557]: I0508 00:22:34.609927 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/87e11d19-60c0-471b-a769-22e659c188e1-typha-certs\") pod \"calico-typha-595b7f69d4-hjjqj\" (UID: \"87e11d19-60c0-471b-a769-22e659c188e1\") " pod="calico-system/calico-typha-595b7f69d4-hjjqj" May 8 00:22:34.611509 kubelet[2557]: I0508 00:22:34.609943 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/05fd5258-1ec6-47e7-b39b-08317a9205ec-flexvol-driver-host\") pod \"calico-node-lzmfw\" (UID: \"05fd5258-1ec6-47e7-b39b-08317a9205ec\") " pod="calico-system/calico-node-lzmfw" May 8 00:22:34.611509 kubelet[2557]: I0508 00:22:34.609960 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/05fd5258-1ec6-47e7-b39b-08317a9205ec-xtables-lock\") pod \"calico-node-lzmfw\" (UID: \"05fd5258-1ec6-47e7-b39b-08317a9205ec\") " pod="calico-system/calico-node-lzmfw" May 8 00:22:34.611509 kubelet[2557]: I0508 00:22:34.609976 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05fd5258-1ec6-47e7-b39b-08317a9205ec-tigera-ca-bundle\") pod \"calico-node-lzmfw\" (UID: \"05fd5258-1ec6-47e7-b39b-08317a9205ec\") " pod="calico-system/calico-node-lzmfw" May 8 00:22:34.611509 kubelet[2557]: I0508 00:22:34.609991 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/05fd5258-1ec6-47e7-b39b-08317a9205ec-cni-log-dir\") pod \"calico-node-lzmfw\" (UID: \"05fd5258-1ec6-47e7-b39b-08317a9205ec\") " pod="calico-system/calico-node-lzmfw" May 8 00:22:34.611509 kubelet[2557]: 
I0508 00:22:34.610005 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/05fd5258-1ec6-47e7-b39b-08317a9205ec-node-certs\") pod \"calico-node-lzmfw\" (UID: \"05fd5258-1ec6-47e7-b39b-08317a9205ec\") " pod="calico-system/calico-node-lzmfw" May 8 00:22:34.611653 kubelet[2557]: I0508 00:22:34.610031 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/05fd5258-1ec6-47e7-b39b-08317a9205ec-var-run-calico\") pod \"calico-node-lzmfw\" (UID: \"05fd5258-1ec6-47e7-b39b-08317a9205ec\") " pod="calico-system/calico-node-lzmfw" May 8 00:22:34.611653 kubelet[2557]: I0508 00:22:34.610054 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/05fd5258-1ec6-47e7-b39b-08317a9205ec-cni-bin-dir\") pod \"calico-node-lzmfw\" (UID: \"05fd5258-1ec6-47e7-b39b-08317a9205ec\") " pod="calico-system/calico-node-lzmfw" May 8 00:22:34.611653 kubelet[2557]: I0508 00:22:34.610070 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4qmp\" (UniqueName: \"kubernetes.io/projected/05fd5258-1ec6-47e7-b39b-08317a9205ec-kube-api-access-z4qmp\") pod \"calico-node-lzmfw\" (UID: \"05fd5258-1ec6-47e7-b39b-08317a9205ec\") " pod="calico-system/calico-node-lzmfw" May 8 00:22:34.611653 kubelet[2557]: I0508 00:22:34.610086 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87e11d19-60c0-471b-a769-22e659c188e1-tigera-ca-bundle\") pod \"calico-typha-595b7f69d4-hjjqj\" (UID: \"87e11d19-60c0-471b-a769-22e659c188e1\") " pod="calico-system/calico-typha-595b7f69d4-hjjqj" May 8 00:22:34.611653 kubelet[2557]: I0508 00:22:34.610100 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/05fd5258-1ec6-47e7-b39b-08317a9205ec-cni-net-dir\") pod \"calico-node-lzmfw\" (UID: \"05fd5258-1ec6-47e7-b39b-08317a9205ec\") " pod="calico-system/calico-node-lzmfw" May 8 00:22:34.617496 kubelet[2557]: I0508 00:22:34.616911 2557 topology_manager.go:215] "Topology Admit Handler" podUID="26f41155-3ab0-4f8a-91d4-9d90c9524fe5" podNamespace="calico-system" podName="csi-node-driver-56jwl" May 8 00:22:34.617496 kubelet[2557]: E0508 00:22:34.617178 2557 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-56jwl" podUID="26f41155-3ab0-4f8a-91d4-9d90c9524fe5" May 8 00:22:34.713288 kubelet[2557]: I0508 00:22:34.711037 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/26f41155-3ab0-4f8a-91d4-9d90c9524fe5-socket-dir\") pod \"csi-node-driver-56jwl\" (UID: \"26f41155-3ab0-4f8a-91d4-9d90c9524fe5\") " pod="calico-system/csi-node-driver-56jwl" May 8 00:22:34.713288 kubelet[2557]: I0508 00:22:34.711079 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trnwc\" (UniqueName: 
\"kubernetes.io/projected/26f41155-3ab0-4f8a-91d4-9d90c9524fe5-kube-api-access-trnwc\") pod \"csi-node-driver-56jwl\" (UID: \"26f41155-3ab0-4f8a-91d4-9d90c9524fe5\") " pod="calico-system/csi-node-driver-56jwl" May 8 00:22:34.713288 kubelet[2557]: I0508 00:22:34.711096 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/26f41155-3ab0-4f8a-91d4-9d90c9524fe5-kubelet-dir\") pod \"csi-node-driver-56jwl\" (UID: \"26f41155-3ab0-4f8a-91d4-9d90c9524fe5\") " pod="calico-system/csi-node-driver-56jwl" May 8 00:22:34.713288 kubelet[2557]: I0508 00:22:34.711148 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/26f41155-3ab0-4f8a-91d4-9d90c9524fe5-registration-dir\") pod \"csi-node-driver-56jwl\" (UID: \"26f41155-3ab0-4f8a-91d4-9d90c9524fe5\") " pod="calico-system/csi-node-driver-56jwl" May 8 00:22:34.713288 kubelet[2557]: I0508 00:22:34.711281 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/26f41155-3ab0-4f8a-91d4-9d90c9524fe5-varrun\") pod \"csi-node-driver-56jwl\" (UID: \"26f41155-3ab0-4f8a-91d4-9d90c9524fe5\") " pod="calico-system/csi-node-driver-56jwl" May 8 00:22:34.729104 kubelet[2557]: E0508 00:22:34.729067 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:34.729104 kubelet[2557]: W0508 00:22:34.729090 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:34.729263 kubelet[2557]: E0508 00:22:34.729113 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:22:34.729896 kubelet[2557]: E0508 00:22:34.729881 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:34.729896 kubelet[2557]: W0508 00:22:34.729892 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:34.729959 kubelet[2557]: E0508 00:22:34.729906 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:22:34.730561 kubelet[2557]: E0508 00:22:34.730529 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:34.730561 kubelet[2557]: W0508 00:22:34.730543 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:34.730561 kubelet[2557]: E0508 00:22:34.730555 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:22:34.738513 kubelet[2557]: E0508 00:22:34.738484 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:34.738513 kubelet[2557]: W0508 00:22:34.738499 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:34.738513 kubelet[2557]: E0508 00:22:34.738513 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:22:34.750126 kubelet[2557]: E0508 00:22:34.750091 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:34.750694 containerd[1437]: time="2025-05-08T00:22:34.750658740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-595b7f69d4-hjjqj,Uid:87e11d19-60c0-471b-a769-22e659c188e1,Namespace:calico-system,Attempt:0,}" May 8 00:22:34.796394 containerd[1437]: time="2025-05-08T00:22:34.796268122Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:22:34.796394 containerd[1437]: time="2025-05-08T00:22:34.796341362Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:22:34.796394 containerd[1437]: time="2025-05-08T00:22:34.796363962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:22:34.796534 containerd[1437]: time="2025-05-08T00:22:34.796452322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:22:34.808145 kubelet[2557]: E0508 00:22:34.808097 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:34.808997 containerd[1437]: time="2025-05-08T00:22:34.808955517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lzmfw,Uid:05fd5258-1ec6-47e7-b39b-08317a9205ec,Namespace:calico-system,Attempt:0,}" May 8 00:22:34.812496 kubelet[2557]: E0508 00:22:34.812473 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:34.812496 kubelet[2557]: W0508 00:22:34.812492 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:34.812709 kubelet[2557]: E0508 00:22:34.812510 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" [... the identical three-line FlexVolume failure (driver-call.go:262, driver-call.go:149, plugins.go:730) repeats verbatim through May 8 00:22:34.835 and is elided here ...]
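The elided triplet is kubelet's FlexVolume dynamic prober failing on Calico's nodeagent~uds plugin directory: each probe execs the driver binary with init and parses its stdout as a JSON status, so a binary missing from /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds yields an exec error plus empty output, and unmarshalling an empty string fails with "unexpected end of JSON input". A minimal Go sketch of that failure mode (hypothetical types and helper, not kubelet's actual source):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus mirrors the shape of a FlexVolume driver reply,
// e.g. {"status":"Success","capabilities":{"attach":false}}.
type DriverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

// callDriver approximates one probe: run the driver, then decode its stdout.
func callDriver(path string, args ...string) (*DriverStatus, error) {
	out, err := exec.Command(path, args...).CombinedOutput()
	if err != nil {
		// A missing binary fails here; kubelet reports this step as
		// "FlexVolume: driver call failed: ... output: """.
		fmt.Printf("driver call failed: %v, output: %q\n", err, out)
	}
	var st DriverStatus
	// Decoding the empty output is what produces the
	// "unexpected end of JSON input" seen in the lines above.
	if jerr := json.Unmarshal(out, &st); jerr != nil {
		return nil, fmt.Errorf("failed to unmarshal output %q: %w", out, jerr)
	}
	return &st, nil
}

func main() {
	_, err := callDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds", "init")
	fmt.Println(err)
}

Run on a node without the uds binary, this prints both halves of the logged error: the failed driver call and the JSON unmarshal failure.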
May 8 00:22:34.836944 systemd[1]: Started cri-containerd-f59a132fb68149c61602e67d91897e008ccba199fb64a81a6bb930c3ce7c4b53.scope - libcontainer container f59a132fb68149c61602e67d91897e008ccba199fb64a81a6bb930c3ce7c4b53. May 8 00:22:34.844622 containerd[1437]: time="2025-05-08T00:22:34.842950224Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:22:34.844622 containerd[1437]: time="2025-05-08T00:22:34.844594943Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:22:34.844622 containerd[1437]: time="2025-05-08T00:22:34.844606463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:22:34.844819 containerd[1437]: time="2025-05-08T00:22:34.844709383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:22:34.868925 systemd[1]: Started cri-containerd-d6591e329b22f78a4c16c143dd43e28671689d5a0c6b8ee1ae5d7235917cedda.scope - libcontainer container d6591e329b22f78a4c16c143dd43e28671689d5a0c6b8ee1ae5d7235917cedda. May 8 00:22:34.880582 containerd[1437]: time="2025-05-08T00:22:34.880486010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-595b7f69d4-hjjqj,Uid:87e11d19-60c0-471b-a769-22e659c188e1,Namespace:calico-system,Attempt:0,} returns sandbox id \"f59a132fb68149c61602e67d91897e008ccba199fb64a81a6bb930c3ce7c4b53\"" May 8 00:22:34.881421 kubelet[2557]: E0508 00:22:34.881394 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:34.883172 containerd[1437]: time="2025-05-08T00:22:34.883137369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 8 00:22:34.898294 containerd[1437]: time="2025-05-08T00:22:34.897671523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lzmfw,Uid:05fd5258-1ec6-47e7-b39b-08317a9205ec,Namespace:calico-system,Attempt:0,} returns sandbox id \"d6591e329b22f78a4c16c143dd43e28671689d5a0c6b8ee1ae5d7235917cedda\"" May 8 00:22:34.900147 kubelet[2557]: E0508 00:22:34.900122 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:36.190672 kubelet[2557]: E0508 00:22:36.190591 2557 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-56jwl" podUID="26f41155-3ab0-4f8a-91d4-9d90c9524fe5" May 8 00:22:36.222555 containerd[1437]: time="2025-05-08T00:22:36.222105407Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:22:36.223066 containerd[1437]: time="2025-05-08T00:22:36.223032887Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571" May 8 00:22:36.223903 containerd[1437]: time="2025-05-08T00:22:36.223877247Z" level=info msg="ImageCreate event
name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:22:36.225798 containerd[1437]: time="2025-05-08T00:22:36.225763406Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:22:36.226528 containerd[1437]: time="2025-05-08T00:22:36.226288406Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 1.343112397s" May 8 00:22:36.226528 containerd[1437]: time="2025-05-08T00:22:36.226322566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" May 8 00:22:36.227450 containerd[1437]: time="2025-05-08T00:22:36.227355725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 8 00:22:36.247864 containerd[1437]: time="2025-05-08T00:22:36.247813758Z" level=info msg="CreateContainer within sandbox \"f59a132fb68149c61602e67d91897e008ccba199fb64a81a6bb930c3ce7c4b53\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 8 00:22:36.260876 containerd[1437]: time="2025-05-08T00:22:36.260830914Z" level=info msg="CreateContainer within sandbox \"f59a132fb68149c61602e67d91897e008ccba199fb64a81a6bb930c3ce7c4b53\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0eb8a132fc0f7e80acf52e10508a45a5f2d6d26a23044f587afae866e95956e3\"" May 8 00:22:36.261319 containerd[1437]: time="2025-05-08T00:22:36.261278914Z" level=info msg="StartContainer for \"0eb8a132fc0f7e80acf52e10508a45a5f2d6d26a23044f587afae866e95956e3\"" May 8 00:22:36.288892 systemd[1]: Started cri-containerd-0eb8a132fc0f7e80acf52e10508a45a5f2d6d26a23044f587afae866e95956e3.scope - libcontainer container 0eb8a132fc0f7e80acf52e10508a45a5f2d6d26a23044f587afae866e95956e3. May 8 00:22:36.320329 containerd[1437]: time="2025-05-08T00:22:36.320216094Z" level=info msg="StartContainer for \"0eb8a132fc0f7e80acf52e10508a45a5f2d6d26a23044f587afae866e95956e3\" returns successfully" May 8 00:22:37.270965 kubelet[2557]: E0508 00:22:37.270417 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:37.344205 kubelet[2557]: E0508 00:22:37.344173 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:22:37.344205 kubelet[2557]: W0508 00:22:37.344202 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:22:37.344336 kubelet[2557]: E0508 00:22:37.344222 2557 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" [... the identical FlexVolume failure triplet repeats verbatim through May 8 00:22:37.440 and is elided here ...]
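The dns.go:153 errors sprinkled through this boot are kubelet enforcing its three-nameserver cap (mirroring the glibc resolver's MAXNS limit): the node's resolv.conf lists more servers than that, so kubelet applies only the first three, here 1.1.1.1, 1.0.0.1 and 8.8.8.8, and warns that the rest were omitted. A rough sketch of that truncation, assuming a plain resolv.conf parser rather than kubelet's actual implementation:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // kubelet mirrors the glibc MAXNS limit

// applyNameserverLimit keeps the first three nameservers, as kubelet does
// before logging "Nameserver limits were exceeded".
func applyNameserverLimit(ns []string) (applied, omitted []string) {
	if len(ns) <= maxNameservers {
		return ns, nil
	}
	return ns[:maxNameservers], ns[maxNameservers:]
}

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()

	// Collect every "nameserver <ip>" line.
	var ns []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			ns = append(ns, fields[1])
		}
	}
	applied, omitted := applyNameserverLimit(ns)
	if len(omitted) > 0 {
		fmt.Printf("nameserver limits exceeded, applied: %s (omitted: %s)\n",
			strings.Join(applied, " "), strings.Join(omitted, " "))
	}
}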
May 8 00:22:37.704690 containerd[1437]: time="2025-05-08T00:22:37.704646560Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:22:37.705771 containerd[1437]: time="2025-05-08T00:22:37.705698640Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" May 8 00:22:37.706594 containerd[1437]: time="2025-05-08T00:22:37.706556879Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:22:37.708688 containerd[1437]: time="2025-05-08T00:22:37.708654599Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:22:37.709725 containerd[1437]: time="2025-05-08T00:22:37.709684318Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 1.482298033s" May 8 00:22:37.709791 containerd[1437]: time="2025-05-08T00:22:37.709716678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 8 00:22:37.712793 containerd[1437]: time="2025-05-08T00:22:37.712763917Z" level=info msg="CreateContainer within sandbox \"d6591e329b22f78a4c16c143dd43e28671689d5a0c6b8ee1ae5d7235917cedda\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 8 00:22:37.724418 containerd[1437]: time="2025-05-08T00:22:37.724376474Z" level=info msg="CreateContainer within sandbox
\"d6591e329b22f78a4c16c143dd43e28671689d5a0c6b8ee1ae5d7235917cedda\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ad3b66fffead99d697e8d1a84d7429006be65170443b4aecaf30d6bfe8c11aaf\"" May 8 00:22:37.724913 containerd[1437]: time="2025-05-08T00:22:37.724880434Z" level=info msg="StartContainer for \"ad3b66fffead99d697e8d1a84d7429006be65170443b4aecaf30d6bfe8c11aaf\"" May 8 00:22:37.753876 systemd[1]: Started cri-containerd-ad3b66fffead99d697e8d1a84d7429006be65170443b4aecaf30d6bfe8c11aaf.scope - libcontainer container ad3b66fffead99d697e8d1a84d7429006be65170443b4aecaf30d6bfe8c11aaf. May 8 00:22:37.779101 containerd[1437]: time="2025-05-08T00:22:37.779054096Z" level=info msg="StartContainer for \"ad3b66fffead99d697e8d1a84d7429006be65170443b4aecaf30d6bfe8c11aaf\" returns successfully" May 8 00:22:37.826444 systemd[1]: cri-containerd-ad3b66fffead99d697e8d1a84d7429006be65170443b4aecaf30d6bfe8c11aaf.scope: Deactivated successfully. May 8 00:22:37.859794 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad3b66fffead99d697e8d1a84d7429006be65170443b4aecaf30d6bfe8c11aaf-rootfs.mount: Deactivated successfully. May 8 00:22:38.014814 containerd[1437]: time="2025-05-08T00:22:38.011189823Z" level=info msg="shim disconnected" id=ad3b66fffead99d697e8d1a84d7429006be65170443b4aecaf30d6bfe8c11aaf namespace=k8s.io May 8 00:22:38.015154 containerd[1437]: time="2025-05-08T00:22:38.014978182Z" level=warning msg="cleaning up after shim disconnected" id=ad3b66fffead99d697e8d1a84d7429006be65170443b4aecaf30d6bfe8c11aaf namespace=k8s.io May 8 00:22:38.015154 containerd[1437]: time="2025-05-08T00:22:38.014998982Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:22:38.191482 kubelet[2557]: E0508 00:22:38.191051 2557 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-56jwl" podUID="26f41155-3ab0-4f8a-91d4-9d90c9524fe5" May 8 00:22:38.274408 kubelet[2557]: E0508 00:22:38.273918 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:38.275748 containerd[1437]: time="2025-05-08T00:22:38.275392304Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 8 00:22:38.275920 kubelet[2557]: I0508 00:22:38.275897 2557 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:22:38.277566 kubelet[2557]: E0508 00:22:38.277538 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:38.293118 kubelet[2557]: I0508 00:22:38.291744 2557 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-595b7f69d4-hjjqj" podStartSLOduration=2.946723783 podStartE2EDuration="4.291713819s" podCreationTimestamp="2025-05-08 00:22:34 +0000 UTC" firstStartedPulling="2025-05-08 00:22:34.882226409 +0000 UTC m=+22.785090294" lastFinishedPulling="2025-05-08 00:22:36.227216445 +0000 UTC m=+24.130080330" observedRunningTime="2025-05-08 00:22:37.279499295 +0000 UTC m=+25.182363180" watchObservedRunningTime="2025-05-08 00:22:38.291713819 +0000 UTC m=+26.194577704" May 8 00:22:39.276855 kubelet[2557]: E0508 00:22:39.276815 2557 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:40.191263 kubelet[2557]: E0508 00:22:40.191170 2557 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-56jwl" podUID="26f41155-3ab0-4f8a-91d4-9d90c9524fe5" May 8 00:22:40.278050 kubelet[2557]: E0508 00:22:40.278022 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:41.768326 containerd[1437]: time="2025-05-08T00:22:41.768275040Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:22:41.769128 containerd[1437]: time="2025-05-08T00:22:41.768782760Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" May 8 00:22:41.769638 containerd[1437]: time="2025-05-08T00:22:41.769607519Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:22:41.780692 containerd[1437]: time="2025-05-08T00:22:41.780637277Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:22:41.781473 containerd[1437]: time="2025-05-08T00:22:41.781436276Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 3.505984652s" May 8 00:22:41.781473 containerd[1437]: time="2025-05-08T00:22:41.781466316Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" May 8 00:22:41.783867 containerd[1437]: time="2025-05-08T00:22:41.783828876Z" level=info msg="CreateContainer within sandbox \"d6591e329b22f78a4c16c143dd43e28671689d5a0c6b8ee1ae5d7235917cedda\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 8 00:22:41.794520 containerd[1437]: time="2025-05-08T00:22:41.794468673Z" level=info msg="CreateContainer within sandbox \"d6591e329b22f78a4c16c143dd43e28671689d5a0c6b8ee1ae5d7235917cedda\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"45c156383dcfebadaf87e72c6597f5d8fc62570c244a0ae93ca198c911c04ab6\"" May 8 00:22:41.796765 containerd[1437]: time="2025-05-08T00:22:41.795334673Z" level=info msg="StartContainer for \"45c156383dcfebadaf87e72c6597f5d8fc62570c244a0ae93ca198c911c04ab6\"" May 8 00:22:41.826891 systemd[1]: Started cri-containerd-45c156383dcfebadaf87e72c6597f5d8fc62570c244a0ae93ca198c911c04ab6.scope - libcontainer container 45c156383dcfebadaf87e72c6597f5d8fc62570c244a0ae93ca198c911c04ab6. 
May 8 00:22:41.851103 containerd[1437]: time="2025-05-08T00:22:41.851046339Z" level=info msg="StartContainer for \"45c156383dcfebadaf87e72c6597f5d8fc62570c244a0ae93ca198c911c04ab6\" returns successfully" May 8 00:22:42.190941 kubelet[2557]: E0508 00:22:42.190893 2557 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-56jwl" podUID="26f41155-3ab0-4f8a-91d4-9d90c9524fe5" May 8 00:22:42.285483 kubelet[2557]: E0508 00:22:42.285222 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:42.343418 systemd[1]: Started sshd@7-10.0.0.58:22-10.0.0.1:41588.service - OpenSSH per-connection server daemon (10.0.0.1:41588). May 8 00:22:42.386076 sshd[3273]: Accepted publickey for core from 10.0.0.1 port 41588 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:22:42.387529 sshd[3273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:22:42.393337 systemd-logind[1426]: New session 8 of user core. May 8 00:22:42.401880 systemd[1]: Started session-8.scope - Session 8 of User core. May 8 00:22:42.408462 systemd[1]: cri-containerd-45c156383dcfebadaf87e72c6597f5d8fc62570c244a0ae93ca198c911c04ab6.scope: Deactivated successfully. May 8 00:22:42.423643 kubelet[2557]: I0508 00:22:42.423607 2557 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 8 00:22:42.424890 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-45c156383dcfebadaf87e72c6597f5d8fc62570c244a0ae93ca198c911c04ab6-rootfs.mount: Deactivated successfully. May 8 00:22:42.493508 kubelet[2557]: I0508 00:22:42.492130 2557 topology_manager.go:215] "Topology Admit Handler" podUID="8196bfa1-7d4c-4b32-bb04-7483aba589c0" podNamespace="kube-system" podName="coredns-7db6d8ff4d-j4988" May 8 00:22:42.496150 kubelet[2557]: I0508 00:22:42.495014 2557 topology_manager.go:215] "Topology Admit Handler" podUID="7441d347-bcf2-42a6-b5f7-183f25ff2768" podNamespace="calico-system" podName="calico-kube-controllers-8f4df646d-hrzsd" May 8 00:22:42.496150 kubelet[2557]: I0508 00:22:42.495201 2557 topology_manager.go:215] "Topology Admit Handler" podUID="37b135c5-5fc9-4679-a6b3-7f9b2a12dd64" podNamespace="calico-apiserver" podName="calico-apiserver-5cbb7457c4-99wch" May 8 00:22:42.496150 kubelet[2557]: I0508 00:22:42.495578 2557 topology_manager.go:215] "Topology Admit Handler" podUID="58a2dab3-faad-488a-baa2-8365e3fce66c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-zsxj2" May 8 00:22:42.496737 kubelet[2557]: I0508 00:22:42.496650 2557 topology_manager.go:215] "Topology Admit Handler" podUID="76116a76-06fa-4e4c-be4b-d6de000109ca" podNamespace="calico-apiserver" podName="calico-apiserver-5cbb7457c4-9bvwx" May 8 00:22:42.505465 systemd[1]: Created slice kubepods-burstable-pod8196bfa1_7d4c_4b32_bb04_7483aba589c0.slice - libcontainer container kubepods-burstable-pod8196bfa1_7d4c_4b32_bb04_7483aba589c0.slice. May 8 00:22:42.518351 systemd[1]: Created slice kubepods-besteffort-pod7441d347_bcf2_42a6_b5f7_183f25ff2768.slice - libcontainer container kubepods-besteffort-pod7441d347_bcf2_42a6_b5f7_183f25ff2768.slice. 
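A side note on the "Created slice" entries here: the systemd slice names are derived mechanically from each pod's QoS class and UID, with the UID's dashes mapped to underscores because dashes encode parent/child structure in systemd slice names. A small sketch of the mapping exactly as it appears in these lines:

```go
// Sketch of the pod-UID -> systemd slice mapping visible above: the QoS
// class selects the parent slice, and the UID's dashes become underscores
// (dashes delimit the slice hierarchy in systemd unit names).
package main

import (
	"fmt"
	"strings"
)

func podSlice(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	// UIDs taken from the kubelet log above.
	fmt.Println(podSlice("burstable", "8196bfa1-7d4c-4b32-bb04-7483aba589c0"))
	fmt.Println(podSlice("besteffort", "7441d347-bcf2-42a6-b5f7-183f25ff2768"))
}
```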
May 8 00:22:42.520371 containerd[1437]: time="2025-05-08T00:22:42.520311143Z" level=info msg="shim disconnected" id=45c156383dcfebadaf87e72c6597f5d8fc62570c244a0ae93ca198c911c04ab6 namespace=k8s.io May 8 00:22:42.520789 containerd[1437]: time="2025-05-08T00:22:42.520415823Z" level=warning msg="cleaning up after shim disconnected" id=45c156383dcfebadaf87e72c6597f5d8fc62570c244a0ae93ca198c911c04ab6 namespace=k8s.io May 8 00:22:42.520789 containerd[1437]: time="2025-05-08T00:22:42.520429503Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:22:42.525380 systemd[1]: Created slice kubepods-besteffort-pod37b135c5_5fc9_4679_a6b3_7f9b2a12dd64.slice - libcontainer container kubepods-besteffort-pod37b135c5_5fc9_4679_a6b3_7f9b2a12dd64.slice. May 8 00:22:42.538423 systemd[1]: Created slice kubepods-burstable-pod58a2dab3_faad_488a_baa2_8365e3fce66c.slice - libcontainer container kubepods-burstable-pod58a2dab3_faad_488a_baa2_8365e3fce66c.slice. May 8 00:22:42.553950 sshd[3273]: pam_unix(sshd:session): session closed for user core May 8 00:22:42.554930 systemd[1]: Created slice kubepods-besteffort-pod76116a76_06fa_4e4c_be4b_d6de000109ca.slice - libcontainer container kubepods-besteffort-pod76116a76_06fa_4e4c_be4b_d6de000109ca.slice. May 8 00:22:42.559799 systemd[1]: sshd@7-10.0.0.58:22-10.0.0.1:41588.service: Deactivated successfully. May 8 00:22:42.561508 systemd[1]: session-8.scope: Deactivated successfully. May 8 00:22:42.565051 systemd-logind[1426]: Session 8 logged out. Waiting for processes to exit. May 8 00:22:42.566863 systemd-logind[1426]: Removed session 8. May 8 00:22:42.673120 kubelet[2557]: I0508 00:22:42.673072 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8196bfa1-7d4c-4b32-bb04-7483aba589c0-config-volume\") pod \"coredns-7db6d8ff4d-j4988\" (UID: \"8196bfa1-7d4c-4b32-bb04-7483aba589c0\") " pod="kube-system/coredns-7db6d8ff4d-j4988" May 8 00:22:42.673120 kubelet[2557]: I0508 00:22:42.673116 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7441d347-bcf2-42a6-b5f7-183f25ff2768-tigera-ca-bundle\") pod \"calico-kube-controllers-8f4df646d-hrzsd\" (UID: \"7441d347-bcf2-42a6-b5f7-183f25ff2768\") " pod="calico-system/calico-kube-controllers-8f4df646d-hrzsd" May 8 00:22:42.673289 kubelet[2557]: I0508 00:22:42.673138 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjvwd\" (UniqueName: \"kubernetes.io/projected/7441d347-bcf2-42a6-b5f7-183f25ff2768-kube-api-access-qjvwd\") pod \"calico-kube-controllers-8f4df646d-hrzsd\" (UID: \"7441d347-bcf2-42a6-b5f7-183f25ff2768\") " pod="calico-system/calico-kube-controllers-8f4df646d-hrzsd" May 8 00:22:42.673289 kubelet[2557]: I0508 00:22:42.673158 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5hv5\" (UniqueName: \"kubernetes.io/projected/76116a76-06fa-4e4c-be4b-d6de000109ca-kube-api-access-z5hv5\") pod \"calico-apiserver-5cbb7457c4-9bvwx\" (UID: \"76116a76-06fa-4e4c-be4b-d6de000109ca\") " pod="calico-apiserver/calico-apiserver-5cbb7457c4-9bvwx" May 8 00:22:42.673289 kubelet[2557]: I0508 00:22:42.673175 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khxhf\" (UniqueName: 
\"kubernetes.io/projected/37b135c5-5fc9-4679-a6b3-7f9b2a12dd64-kube-api-access-khxhf\") pod \"calico-apiserver-5cbb7457c4-99wch\" (UID: \"37b135c5-5fc9-4679-a6b3-7f9b2a12dd64\") " pod="calico-apiserver/calico-apiserver-5cbb7457c4-99wch" May 8 00:22:42.673289 kubelet[2557]: I0508 00:22:42.673196 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58a2dab3-faad-488a-baa2-8365e3fce66c-config-volume\") pod \"coredns-7db6d8ff4d-zsxj2\" (UID: \"58a2dab3-faad-488a-baa2-8365e3fce66c\") " pod="kube-system/coredns-7db6d8ff4d-zsxj2" May 8 00:22:42.673289 kubelet[2557]: I0508 00:22:42.673230 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsw6k\" (UniqueName: \"kubernetes.io/projected/58a2dab3-faad-488a-baa2-8365e3fce66c-kube-api-access-dsw6k\") pod \"coredns-7db6d8ff4d-zsxj2\" (UID: \"58a2dab3-faad-488a-baa2-8365e3fce66c\") " pod="kube-system/coredns-7db6d8ff4d-zsxj2" May 8 00:22:42.673395 kubelet[2557]: I0508 00:22:42.673247 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mzj5\" (UniqueName: \"kubernetes.io/projected/8196bfa1-7d4c-4b32-bb04-7483aba589c0-kube-api-access-9mzj5\") pod \"coredns-7db6d8ff4d-j4988\" (UID: \"8196bfa1-7d4c-4b32-bb04-7483aba589c0\") " pod="kube-system/coredns-7db6d8ff4d-j4988" May 8 00:22:42.673395 kubelet[2557]: I0508 00:22:42.673275 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/76116a76-06fa-4e4c-be4b-d6de000109ca-calico-apiserver-certs\") pod \"calico-apiserver-5cbb7457c4-9bvwx\" (UID: \"76116a76-06fa-4e4c-be4b-d6de000109ca\") " pod="calico-apiserver/calico-apiserver-5cbb7457c4-9bvwx" May 8 00:22:42.673395 kubelet[2557]: I0508 00:22:42.673360 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/37b135c5-5fc9-4679-a6b3-7f9b2a12dd64-calico-apiserver-certs\") pod \"calico-apiserver-5cbb7457c4-99wch\" (UID: \"37b135c5-5fc9-4679-a6b3-7f9b2a12dd64\") " pod="calico-apiserver/calico-apiserver-5cbb7457c4-99wch" May 8 00:22:42.816903 kubelet[2557]: E0508 00:22:42.816813 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:42.818413 containerd[1437]: time="2025-05-08T00:22:42.818347235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j4988,Uid:8196bfa1-7d4c-4b32-bb04-7483aba589c0,Namespace:kube-system,Attempt:0,}" May 8 00:22:42.823316 containerd[1437]: time="2025-05-08T00:22:42.823047594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8f4df646d-hrzsd,Uid:7441d347-bcf2-42a6-b5f7-183f25ff2768,Namespace:calico-system,Attempt:0,}" May 8 00:22:42.832022 containerd[1437]: time="2025-05-08T00:22:42.831987072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cbb7457c4-99wch,Uid:37b135c5-5fc9-4679-a6b3-7f9b2a12dd64,Namespace:calico-apiserver,Attempt:0,}" May 8 00:22:42.844248 kubelet[2557]: E0508 00:22:42.843876 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 
00:22:42.844526 containerd[1437]: time="2025-05-08T00:22:42.844495189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zsxj2,Uid:58a2dab3-faad-488a-baa2-8365e3fce66c,Namespace:kube-system,Attempt:0,}" May 8 00:22:42.875632 containerd[1437]: time="2025-05-08T00:22:42.875599981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cbb7457c4-9bvwx,Uid:76116a76-06fa-4e4c-be4b-d6de000109ca,Namespace:calico-apiserver,Attempt:0,}" May 8 00:22:43.191762 containerd[1437]: time="2025-05-08T00:22:43.191693592Z" level=error msg="Failed to destroy network for sandbox \"f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:22:43.192423 containerd[1437]: time="2025-05-08T00:22:43.192325911Z" level=error msg="encountered an error cleaning up failed sandbox \"f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:22:43.192473 containerd[1437]: time="2025-05-08T00:22:43.192434591Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j4988,Uid:8196bfa1-7d4c-4b32-bb04-7483aba589c0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:22:43.193768 kubelet[2557]: E0508 00:22:43.192924 2557 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:22:43.193768 kubelet[2557]: E0508 00:22:43.193012 2557 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-j4988" May 8 00:22:43.193768 kubelet[2557]: E0508 00:22:43.193033 2557 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-j4988" May 8 00:22:43.198645 containerd[1437]: time="2025-05-08T00:22:43.198596950Z" level=error msg="Failed to destroy network for sandbox \"aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:22:43.199591 containerd[1437]: time="2025-05-08T00:22:43.199558110Z" level=error msg="Failed to destroy network for sandbox \"0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:22:43.199808 kubelet[2557]: E0508 00:22:43.193083 2557 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-j4988_kube-system(8196bfa1-7d4c-4b32-bb04-7483aba589c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-j4988_kube-system(8196bfa1-7d4c-4b32-bb04-7483aba589c0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-j4988" podUID="8196bfa1-7d4c-4b32-bb04-7483aba589c0" May 8 00:22:43.200147 containerd[1437]: time="2025-05-08T00:22:43.200108550Z" level=error msg="encountered an error cleaning up failed sandbox \"aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:22:43.200476 containerd[1437]: time="2025-05-08T00:22:43.200449070Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8f4df646d-hrzsd,Uid:7441d347-bcf2-42a6-b5f7-183f25ff2768,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:22:43.200632 containerd[1437]: time="2025-05-08T00:22:43.200368430Z" level=error msg="encountered an error cleaning up failed sandbox \"0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:22:43.200764 containerd[1437]: time="2025-05-08T00:22:43.200708710Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cbb7457c4-99wch,Uid:37b135c5-5fc9-4679-a6b3-7f9b2a12dd64,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:22:43.201046 kubelet[2557]: E0508 00:22:43.200997 2557 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:22:43.201102 kubelet[2557]: E0508 00:22:43.201056 2557 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8f4df646d-hrzsd" May 8 00:22:43.201102 kubelet[2557]: E0508 00:22:43.201076 2557 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8f4df646d-hrzsd" May 8 00:22:43.201157 kubelet[2557]: E0508 00:22:43.201108 2557 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8f4df646d-hrzsd_calico-system(7441d347-bcf2-42a6-b5f7-183f25ff2768)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8f4df646d-hrzsd_calico-system(7441d347-bcf2-42a6-b5f7-183f25ff2768)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8f4df646d-hrzsd" podUID="7441d347-bcf2-42a6-b5f7-183f25ff2768" May 8 00:22:43.201157 kubelet[2557]: E0508 00:22:43.200997 2557 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:22:43.201157 kubelet[2557]: E0508 00:22:43.201148 2557 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cbb7457c4-99wch" May 8 00:22:43.201245 kubelet[2557]: E0508 00:22:43.201160 2557 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-5cbb7457c4-99wch" May 8 00:22:43.201245 kubelet[2557]: E0508 00:22:43.201185 2557 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5cbb7457c4-99wch_calico-apiserver(37b135c5-5fc9-4679-a6b3-7f9b2a12dd64)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5cbb7457c4-99wch_calico-apiserver(37b135c5-5fc9-4679-a6b3-7f9b2a12dd64)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cbb7457c4-99wch" podUID="37b135c5-5fc9-4679-a6b3-7f9b2a12dd64" May 8 00:22:43.203985 containerd[1437]: time="2025-05-08T00:22:43.203901309Z" level=error msg="Failed to destroy network for sandbox \"1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:22:43.205035 containerd[1437]: time="2025-05-08T00:22:43.205000909Z" level=error msg="encountered an error cleaning up failed sandbox \"1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:22:43.205174 containerd[1437]: time="2025-05-08T00:22:43.205150829Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cbb7457c4-9bvwx,Uid:76116a76-06fa-4e4c-be4b-d6de000109ca,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:22:43.205517 kubelet[2557]: E0508 00:22:43.205481 2557 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:22:43.205576 kubelet[2557]: E0508 00:22:43.205528 2557 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cbb7457c4-9bvwx" May 8 00:22:43.205576 kubelet[2557]: E0508 00:22:43.205545 2557 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cbb7457c4-9bvwx" May 8 00:22:43.205622 kubelet[2557]: E0508 00:22:43.205577 2557 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5cbb7457c4-9bvwx_calico-apiserver(76116a76-06fa-4e4c-be4b-d6de000109ca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5cbb7457c4-9bvwx_calico-apiserver(76116a76-06fa-4e4c-be4b-d6de000109ca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cbb7457c4-9bvwx" podUID="76116a76-06fa-4e4c-be4b-d6de000109ca" May 8 00:22:43.207497 containerd[1437]: time="2025-05-08T00:22:43.207143588Z" level=error msg="Failed to destroy network for sandbox \"5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:22:43.210481 containerd[1437]: time="2025-05-08T00:22:43.210424547Z" level=error msg="encountered an error cleaning up failed sandbox \"5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:22:43.210563 containerd[1437]: time="2025-05-08T00:22:43.210491467Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zsxj2,Uid:58a2dab3-faad-488a-baa2-8365e3fce66c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:22:43.210705 kubelet[2557]: E0508 00:22:43.210668 2557 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:22:43.211344 kubelet[2557]: E0508 00:22:43.210716 2557 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zsxj2" May 8 00:22:43.211344 kubelet[2557]: E0508 00:22:43.210785 2557 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zsxj2" May 8 00:22:43.211344 kubelet[2557]: E0508 00:22:43.210839 2557 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-zsxj2_kube-system(58a2dab3-faad-488a-baa2-8365e3fce66c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-zsxj2_kube-system(58a2dab3-faad-488a-baa2-8365e3fce66c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-zsxj2" podUID="58a2dab3-faad-488a-baa2-8365e3fce66c" May 8 00:22:43.288864 kubelet[2557]: I0508 00:22:43.288569 2557 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7" May 8 00:22:43.289540 containerd[1437]: time="2025-05-08T00:22:43.289452690Z" level=info msg="StopPodSandbox for \"aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7\"" May 8 00:22:43.289743 containerd[1437]: time="2025-05-08T00:22:43.289615250Z" level=info msg="Ensure that sandbox aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7 in task-service has been cleanup successfully" May 8 00:22:43.290389 kubelet[2557]: I0508 00:22:43.290300 2557 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247" May 8 00:22:43.290987 containerd[1437]: time="2025-05-08T00:22:43.290796050Z" level=info msg="StopPodSandbox for \"f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247\"" May 8 00:22:43.291541 containerd[1437]: time="2025-05-08T00:22:43.291497850Z" level=info msg="Ensure that sandbox f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247 in task-service has been cleanup successfully" May 8 00:22:43.292494 kubelet[2557]: I0508 00:22:43.292471 2557 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c" May 8 00:22:43.293982 containerd[1437]: time="2025-05-08T00:22:43.293804929Z" level=info msg="StopPodSandbox for \"0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c\"" May 8 00:22:43.293982 containerd[1437]: time="2025-05-08T00:22:43.293933409Z" level=info msg="Ensure that sandbox 0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c in task-service has been cleanup successfully" May 8 00:22:43.299245 kubelet[2557]: E0508 00:22:43.299114 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:43.301422 containerd[1437]: time="2025-05-08T00:22:43.301340608Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 8 00:22:43.304188 kubelet[2557]: I0508 00:22:43.303155 2557 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef" May 
8 00:22:43.305868 containerd[1437]: time="2025-05-08T00:22:43.305639967Z" level=info msg="StopPodSandbox for \"5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef\"" May 8 00:22:43.305868 containerd[1437]: time="2025-05-08T00:22:43.305814407Z" level=info msg="Ensure that sandbox 5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef in task-service has been cleanup successfully" May 8 00:22:43.306129 kubelet[2557]: I0508 00:22:43.306055 2557 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616" May 8 00:22:43.307050 containerd[1437]: time="2025-05-08T00:22:43.306809887Z" level=info msg="StopPodSandbox for \"1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616\"" May 8 00:22:43.307387 containerd[1437]: time="2025-05-08T00:22:43.307209647Z" level=info msg="Ensure that sandbox 1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616 in task-service has been cleanup successfully" May 8 00:22:43.350958 containerd[1437]: time="2025-05-08T00:22:43.350621237Z" level=error msg="StopPodSandbox for \"aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7\" failed" error="failed to destroy network for sandbox \"aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:22:43.351107 kubelet[2557]: E0508 00:22:43.350980 2557 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7" May 8 00:22:43.351107 kubelet[2557]: E0508 00:22:43.351034 2557 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7"} May 8 00:22:43.351432 kubelet[2557]: E0508 00:22:43.351365 2557 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7441d347-bcf2-42a6-b5f7-183f25ff2768\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:22:43.351432 kubelet[2557]: E0508 00:22:43.351407 2557 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7441d347-bcf2-42a6-b5f7-183f25ff2768\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8f4df646d-hrzsd" podUID="7441d347-bcf2-42a6-b5f7-183f25ff2768" May 8 00:22:43.354448 containerd[1437]: 
time="2025-05-08T00:22:43.354395236Z" level=error msg="StopPodSandbox for \"f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247\" failed" error="failed to destroy network for sandbox \"f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:22:43.354736 kubelet[2557]: E0508 00:22:43.354651 2557 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247" May 8 00:22:43.354736 kubelet[2557]: E0508 00:22:43.354695 2557 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247"} May 8 00:22:43.354825 kubelet[2557]: E0508 00:22:43.354741 2557 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8196bfa1-7d4c-4b32-bb04-7483aba589c0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:22:43.354825 kubelet[2557]: E0508 00:22:43.354762 2557 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8196bfa1-7d4c-4b32-bb04-7483aba589c0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-j4988" podUID="8196bfa1-7d4c-4b32-bb04-7483aba589c0" May 8 00:22:43.360910 containerd[1437]: time="2025-05-08T00:22:43.360782475Z" level=error msg="StopPodSandbox for \"1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616\" failed" error="failed to destroy network for sandbox \"1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:22:43.361666 kubelet[2557]: E0508 00:22:43.361624 2557 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616" May 8 00:22:43.361666 kubelet[2557]: E0508 00:22:43.361672 2557 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616"} May 8 00:22:43.361902 kubelet[2557]: E0508 00:22:43.361713 2557 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"76116a76-06fa-4e4c-be4b-d6de000109ca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:22:43.361902 kubelet[2557]: E0508 00:22:43.361771 2557 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"76116a76-06fa-4e4c-be4b-d6de000109ca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cbb7457c4-9bvwx" podUID="76116a76-06fa-4e4c-be4b-d6de000109ca" May 8 00:22:43.362633 containerd[1437]: time="2025-05-08T00:22:43.362597435Z" level=error msg="StopPodSandbox for \"5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef\" failed" error="failed to destroy network for sandbox \"5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:22:43.362813 kubelet[2557]: E0508 00:22:43.362786 2557 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef" May 8 00:22:43.362873 kubelet[2557]: E0508 00:22:43.362820 2557 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef"} May 8 00:22:43.362873 kubelet[2557]: E0508 00:22:43.362848 2557 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"58a2dab3-faad-488a-baa2-8365e3fce66c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:22:43.362873 kubelet[2557]: E0508 00:22:43.362868 2557 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"58a2dab3-faad-488a-baa2-8365e3fce66c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-zsxj2" podUID="58a2dab3-faad-488a-baa2-8365e3fce66c" May 8 00:22:43.366144 containerd[1437]: time="2025-05-08T00:22:43.366038394Z" level=error msg="StopPodSandbox for \"0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c\" failed" error="failed to destroy network for sandbox \"0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:22:43.366390 kubelet[2557]: E0508 00:22:43.366347 2557 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c" May 8 00:22:43.366390 kubelet[2557]: E0508 00:22:43.366386 2557 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c"} May 8 00:22:43.366451 kubelet[2557]: E0508 00:22:43.366410 2557 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"37b135c5-5fc9-4679-a6b3-7f9b2a12dd64\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:22:43.366451 kubelet[2557]: E0508 00:22:43.366438 2557 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"37b135c5-5fc9-4679-a6b3-7f9b2a12dd64\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cbb7457c4-99wch" podUID="37b135c5-5fc9-4679-a6b3-7f9b2a12dd64" May 8 00:22:43.793776 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7-shm.mount: Deactivated successfully. May 8 00:22:43.793868 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247-shm.mount: Deactivated successfully. May 8 00:22:44.199154 systemd[1]: Created slice kubepods-besteffort-pod26f41155_3ab0_4f8a_91d4_9d90c9524fe5.slice - libcontainer container kubepods-besteffort-pod26f41155_3ab0_4f8a_91d4_9d90c9524fe5.slice. 
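Every sandbox create and teardown in the block above, and the csi-node-driver attempt just below, fails for the same root cause: the Calico CNI plugin cannot resolve the node name until the calico/node container (whose image is still being pulled at this point) starts and writes /var/lib/calico/nodename into the host mount. A hedged Go sketch of the precondition that the repeated error message describes, illustrative rather than Calico's actual source:

```go
// Sketch of the failing precondition behind every CNI add/delete above:
// the plugin needs the node name from a file that only exists once
// calico/node is running and has mounted /var/lib/calico/.
package main

import (
	"fmt"
	"os"
	"strings"
)

func nodename() (string, error) {
	if _, err := os.Stat("/var/lib/calico/nodename"); err != nil {
		// os.Stat's error text is the "stat /var/lib/calico/nodename:
		// no such file or directory" seen throughout the log.
		return "", fmt.Errorf("%v: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	b, err := os.ReadFile("/var/lib/calico/nodename")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	name, err := nodename()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("node:", name)
}
```

The kubelet keeps retrying the failed sandboxes, which is why the identical error repeats until calico-node comes up.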
May 8 00:22:44.202122 containerd[1437]: time="2025-05-08T00:22:44.201752817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-56jwl,Uid:26f41155-3ab0-4f8a-91d4-9d90c9524fe5,Namespace:calico-system,Attempt:0,}" May 8 00:22:44.269843 containerd[1437]: time="2025-05-08T00:22:44.269785083Z" level=error msg="Failed to destroy network for sandbox \"c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:22:44.270152 containerd[1437]: time="2025-05-08T00:22:44.270109043Z" level=error msg="encountered an error cleaning up failed sandbox \"c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:22:44.270195 containerd[1437]: time="2025-05-08T00:22:44.270166003Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-56jwl,Uid:26f41155-3ab0-4f8a-91d4-9d90c9524fe5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:22:44.270898 kubelet[2557]: E0508 00:22:44.270857 2557 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:22:44.271163 kubelet[2557]: E0508 00:22:44.270918 2557 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-56jwl" May 8 00:22:44.271163 kubelet[2557]: E0508 00:22:44.270936 2557 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-56jwl" May 8 00:22:44.271163 kubelet[2557]: E0508 00:22:44.270981 2557 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-56jwl_calico-system(26f41155-3ab0-4f8a-91d4-9d90c9524fe5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-56jwl_calico-system(26f41155-3ab0-4f8a-91d4-9d90c9524fe5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-56jwl" podUID="26f41155-3ab0-4f8a-91d4-9d90c9524fe5" May 8 00:22:44.273665 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836-shm.mount: Deactivated successfully. May 8 00:22:44.309743 kubelet[2557]: I0508 00:22:44.309457 2557 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836" May 8 00:22:44.310148 containerd[1437]: time="2025-05-08T00:22:44.310113035Z" level=info msg="StopPodSandbox for \"c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836\"" May 8 00:22:44.310338 containerd[1437]: time="2025-05-08T00:22:44.310280155Z" level=info msg="Ensure that sandbox c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836 in task-service has been cleanup successfully" May 8 00:22:44.341286 containerd[1437]: time="2025-05-08T00:22:44.341228108Z" level=error msg="StopPodSandbox for \"c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836\" failed" error="failed to destroy network for sandbox \"c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:22:44.341505 kubelet[2557]: E0508 00:22:44.341454 2557 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836" May 8 00:22:44.341555 kubelet[2557]: E0508 00:22:44.341508 2557 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836"} May 8 00:22:44.341555 kubelet[2557]: E0508 00:22:44.341541 2557 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"26f41155-3ab0-4f8a-91d4-9d90c9524fe5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:22:44.341669 kubelet[2557]: E0508 00:22:44.341565 2557 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"26f41155-3ab0-4f8a-91d4-9d90c9524fe5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-56jwl" podUID="26f41155-3ab0-4f8a-91d4-9d90c9524fe5" May 8 00:22:47.192815 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount447798828.mount: Deactivated successfully. May 8 00:22:47.285542 containerd[1437]: time="2025-05-08T00:22:47.285493921Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:22:47.286453 containerd[1437]: time="2025-05-08T00:22:47.286282441Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" May 8 00:22:47.287441 containerd[1437]: time="2025-05-08T00:22:47.287229760Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:22:47.294788 containerd[1437]: time="2025-05-08T00:22:47.294692879Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:22:47.295239 containerd[1437]: time="2025-05-08T00:22:47.295197999Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 3.993736591s" May 8 00:22:47.295239 containerd[1437]: time="2025-05-08T00:22:47.295237399Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" May 8 00:22:47.305927 containerd[1437]: time="2025-05-08T00:22:47.305813197Z" level=info msg="CreateContainer within sandbox \"d6591e329b22f78a4c16c143dd43e28671689d5a0c6b8ee1ae5d7235917cedda\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 8 00:22:47.327095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2010709464.mount: Deactivated successfully. May 8 00:22:47.327260 containerd[1437]: time="2025-05-08T00:22:47.327132154Z" level=info msg="CreateContainer within sandbox \"d6591e329b22f78a4c16c143dd43e28671689d5a0c6b8ee1ae5d7235917cedda\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"cac4fa17172c25be737265571b5bce5f0611b5e3c0920af5381d34ebc7cdb4a5\"" May 8 00:22:47.327681 containerd[1437]: time="2025-05-08T00:22:47.327609554Z" level=info msg="StartContainer for \"cac4fa17172c25be737265571b5bce5f0611b5e3c0920af5381d34ebc7cdb4a5\"" May 8 00:22:47.388887 systemd[1]: Started cri-containerd-cac4fa17172c25be737265571b5bce5f0611b5e3c0920af5381d34ebc7cdb4a5.scope - libcontainer container cac4fa17172c25be737265571b5bce5f0611b5e3c0920af5381d34ebc7cdb4a5. May 8 00:22:47.410929 containerd[1437]: time="2025-05-08T00:22:47.410887740Z" level=info msg="StartContainer for \"cac4fa17172c25be737265571b5bce5f0611b5e3c0920af5381d34ebc7cdb4a5\" returns successfully" May 8 00:22:47.567065 systemd[1]: Started sshd@8-10.0.0.58:22-10.0.0.1:58080.service - OpenSSH per-connection server daemon (10.0.0.1:58080). May 8 00:22:47.583343 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 8 00:22:47.583587 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
May 8 00:22:47.607058 sshd[3723]: Accepted publickey for core from 10.0.0.1 port 58080 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:22:47.609367 sshd[3723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:22:47.615930 systemd-logind[1426]: New session 9 of user core. May 8 00:22:47.622884 systemd[1]: Started session-9.scope - Session 9 of User core. May 8 00:22:47.753950 sshd[3723]: pam_unix(sshd:session): session closed for user core May 8 00:22:47.758920 systemd[1]: sshd@8-10.0.0.58:22-10.0.0.1:58080.service: Deactivated successfully. May 8 00:22:47.760765 systemd[1]: session-9.scope: Deactivated successfully. May 8 00:22:47.761920 systemd-logind[1426]: Session 9 logged out. Waiting for processes to exit. May 8 00:22:47.763287 systemd-logind[1426]: Removed session 9. May 8 00:22:48.320915 kubelet[2557]: E0508 00:22:48.320386 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:48.333559 kubelet[2557]: I0508 00:22:48.333499 2557 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-lzmfw" podStartSLOduration=1.938232993 podStartE2EDuration="14.33348671s" podCreationTimestamp="2025-05-08 00:22:34 +0000 UTC" firstStartedPulling="2025-05-08 00:22:34.900800802 +0000 UTC m=+22.803664687" lastFinishedPulling="2025-05-08 00:22:47.296054519 +0000 UTC m=+35.198918404" observedRunningTime="2025-05-08 00:22:48.33326803 +0000 UTC m=+36.236131915" watchObservedRunningTime="2025-05-08 00:22:48.33348671 +0000 UTC m=+36.236350595" May 8 00:22:49.057806 kernel: bpftool[3896]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 8 00:22:49.210673 systemd-networkd[1383]: vxlan.calico: Link UP May 8 00:22:49.210679 systemd-networkd[1383]: vxlan.calico: Gained carrier May 8 00:22:49.321945 kubelet[2557]: I0508 00:22:49.321254 2557 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:22:49.323390 kubelet[2557]: E0508 00:22:49.322008 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:51.100868 systemd-networkd[1383]: vxlan.calico: Gained IPv6LL May 8 00:22:52.769692 systemd[1]: Started sshd@9-10.0.0.58:22-10.0.0.1:36270.service - OpenSSH per-connection server daemon (10.0.0.1:36270). May 8 00:22:52.899571 sshd[3971]: Accepted publickey for core from 10.0.0.1 port 36270 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:22:52.901434 sshd[3971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:22:52.906184 systemd-logind[1426]: New session 10 of user core. May 8 00:22:52.915892 systemd[1]: Started session-10.scope - Session 10 of User core. May 8 00:22:53.057543 sshd[3971]: pam_unix(sshd:session): session closed for user core May 8 00:22:53.066432 systemd[1]: sshd@9-10.0.0.58:22-10.0.0.1:36270.service: Deactivated successfully. May 8 00:22:53.068072 systemd[1]: session-10.scope: Deactivated successfully. May 8 00:22:53.069796 systemd-logind[1426]: Session 10 logged out. Waiting for processes to exit. May 8 00:22:53.076068 systemd[1]: Started sshd@10-10.0.0.58:22-10.0.0.1:36272.service - OpenSSH per-connection server daemon (10.0.0.1:36272). May 8 00:22:53.076926 systemd-logind[1426]: Removed session 10. 
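[Editor's note] The pod_startup_latency_tracker line for calico-node-lzmfw above is internally consistent: podStartE2EDuration is the watch-observed running time minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window (lastFinishedPulling minus firstStartedPulling), which is why 14.33s of wall time becomes a 1.94s SLO figure. A quick check of the reported numbers, taking the logged timestamps at face value:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the kubelet entry for calico-system/calico-node-lzmfw.
	created := time.Date(2025, 5, 8, 0, 22, 34, 0, time.UTC)            // podCreationTimestamp
	firstPull := time.Date(2025, 5, 8, 0, 22, 34, 900800802, time.UTC)  // firstStartedPulling
	lastPull := time.Date(2025, 5, 8, 0, 22, 47, 296054519, time.UTC)   // lastFinishedPulling
	running := time.Date(2025, 5, 8, 0, 22, 48, 333486710, time.UTC)    // watchObservedRunningTime

	e2e := running.Sub(created)          // 14.33348671s, the reported podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 1.938232993s, the reported podStartSLOduration
	fmt.Println(e2e, slo)
}
```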
May 8 00:22:53.106311 sshd[3987]: Accepted publickey for core from 10.0.0.1 port 36272 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:22:53.107667 sshd[3987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:22:53.111404 systemd-logind[1426]: New session 11 of user core. May 8 00:22:53.119916 systemd[1]: Started session-11.scope - Session 11 of User core. May 8 00:22:53.281138 sshd[3987]: pam_unix(sshd:session): session closed for user core May 8 00:22:53.288398 systemd[1]: sshd@10-10.0.0.58:22-10.0.0.1:36272.service: Deactivated successfully. May 8 00:22:53.291656 systemd[1]: session-11.scope: Deactivated successfully. May 8 00:22:53.296045 systemd-logind[1426]: Session 11 logged out. Waiting for processes to exit. May 8 00:22:53.315393 systemd[1]: Started sshd@11-10.0.0.58:22-10.0.0.1:36288.service - OpenSSH per-connection server daemon (10.0.0.1:36288). May 8 00:22:53.321978 systemd-logind[1426]: Removed session 11. May 8 00:22:53.351196 sshd[4000]: Accepted publickey for core from 10.0.0.1 port 36288 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:22:53.352867 sshd[4000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:22:53.357375 systemd-logind[1426]: New session 12 of user core. May 8 00:22:53.373923 systemd[1]: Started session-12.scope - Session 12 of User core. May 8 00:22:53.532428 sshd[4000]: pam_unix(sshd:session): session closed for user core May 8 00:22:53.537790 systemd[1]: sshd@11-10.0.0.58:22-10.0.0.1:36288.service: Deactivated successfully. May 8 00:22:53.538127 systemd-logind[1426]: Session 12 logged out. Waiting for processes to exit. May 8 00:22:53.540158 systemd[1]: session-12.scope: Deactivated successfully. May 8 00:22:53.541289 systemd-logind[1426]: Removed session 12. May 8 00:22:54.193029 containerd[1437]: time="2025-05-08T00:22:54.192839619Z" level=info msg="StopPodSandbox for \"f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247\"" May 8 00:22:54.193376 containerd[1437]: time="2025-05-08T00:22:54.193044859Z" level=info msg="StopPodSandbox for \"aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7\"" May 8 00:22:54.452914 containerd[1437]: 2025-05-08 00:22:54.282 [INFO][4046] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247" May 8 00:22:54.452914 containerd[1437]: 2025-05-08 00:22:54.282 [INFO][4046] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247" iface="eth0" netns="/var/run/netns/cni-b8aee1d6-3adb-99c7-bd3a-aadc892eb37c" May 8 00:22:54.452914 containerd[1437]: 2025-05-08 00:22:54.284 [INFO][4046] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247" iface="eth0" netns="/var/run/netns/cni-b8aee1d6-3adb-99c7-bd3a-aadc892eb37c" May 8 00:22:54.452914 containerd[1437]: 2025-05-08 00:22:54.284 [INFO][4046] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247" iface="eth0" netns="/var/run/netns/cni-b8aee1d6-3adb-99c7-bd3a-aadc892eb37c" May 8 00:22:54.452914 containerd[1437]: 2025-05-08 00:22:54.284 [INFO][4046] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247" May 8 00:22:54.452914 containerd[1437]: 2025-05-08 00:22:54.284 [INFO][4046] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247" May 8 00:22:54.452914 containerd[1437]: 2025-05-08 00:22:54.435 [INFO][4061] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247" HandleID="k8s-pod-network.f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247" Workload="localhost-k8s-coredns--7db6d8ff4d--j4988-eth0" May 8 00:22:54.452914 containerd[1437]: 2025-05-08 00:22:54.435 [INFO][4061] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:22:54.452914 containerd[1437]: 2025-05-08 00:22:54.435 [INFO][4061] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:22:54.452914 containerd[1437]: 2025-05-08 00:22:54.444 [WARNING][4061] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247" HandleID="k8s-pod-network.f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247" Workload="localhost-k8s-coredns--7db6d8ff4d--j4988-eth0" May 8 00:22:54.452914 containerd[1437]: 2025-05-08 00:22:54.444 [INFO][4061] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247" HandleID="k8s-pod-network.f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247" Workload="localhost-k8s-coredns--7db6d8ff4d--j4988-eth0" May 8 00:22:54.452914 containerd[1437]: 2025-05-08 00:22:54.446 [INFO][4061] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:22:54.452914 containerd[1437]: 2025-05-08 00:22:54.450 [INFO][4046] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247" May 8 00:22:54.457240 containerd[1437]: time="2025-05-08T00:22:54.453741992Z" level=info msg="TearDown network for sandbox \"f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247\" successfully" May 8 00:22:54.457240 containerd[1437]: time="2025-05-08T00:22:54.453773992Z" level=info msg="StopPodSandbox for \"f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247\" returns successfully" May 8 00:22:54.457240 containerd[1437]: time="2025-05-08T00:22:54.454774272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j4988,Uid:8196bfa1-7d4c-4b32-bb04-7483aba589c0,Namespace:kube-system,Attempt:1,}" May 8 00:22:54.455589 systemd[1]: run-netns-cni\x2db8aee1d6\x2d3adb\x2d99c7\x2dbd3a\x2daadc892eb37c.mount: Deactivated successfully. 
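[Editor's note] In the teardown dump above, the IPAM plugin logs a WARNING ("Asked to release address but it doesn't exist. Ignoring") and still lets the DEL complete. Releases are keyed by allocation handle and treated as idempotent, which is what lets kubelet's repeated StopPodSandbox attempts (failing since 00:22:44) converge here instead of erroring forever on already-cleaned-up state. A schematic of that behavior, with illustrative names rather than Calico's actual code:

```go
package main

import "fmt"

// releaseByHandle frees whatever addresses were allocated under a handle.
// Releasing a handle that no longer exists is deliberately a no-op, so a
// retried CNI DEL cannot get stuck on state that was never created or was
// already removed.
func releaseByHandle(allocations map[string][]string, handleID string) []string {
	ips, ok := allocations[handleID]
	if !ok {
		fmt.Println("WARNING: asked to release address but it doesn't exist. Ignoring")
		return nil
	}
	delete(allocations, handleID)
	return ips
}

func main() {
	allocs := map[string][]string{} // empty: this sandbox never received an address
	releaseByHandle(allocs, "k8s-pod-network.f42aee...") // hypothetical truncated handle
	fmt.Println("teardown proceeds; DEL returns success")
}
```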
May 8 00:22:54.457660 kubelet[2557]: E0508 00:22:54.454122 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:54.464885 containerd[1437]: 2025-05-08 00:22:54.285 [INFO][4045] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7" May 8 00:22:54.464885 containerd[1437]: 2025-05-08 00:22:54.285 [INFO][4045] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7" iface="eth0" netns="/var/run/netns/cni-9d4dc684-4e87-ead4-5e95-56e6ca5619fb" May 8 00:22:54.464885 containerd[1437]: 2025-05-08 00:22:54.285 [INFO][4045] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7" iface="eth0" netns="/var/run/netns/cni-9d4dc684-4e87-ead4-5e95-56e6ca5619fb" May 8 00:22:54.464885 containerd[1437]: 2025-05-08 00:22:54.286 [INFO][4045] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7" iface="eth0" netns="/var/run/netns/cni-9d4dc684-4e87-ead4-5e95-56e6ca5619fb" May 8 00:22:54.464885 containerd[1437]: 2025-05-08 00:22:54.286 [INFO][4045] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7" May 8 00:22:54.464885 containerd[1437]: 2025-05-08 00:22:54.286 [INFO][4045] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7" May 8 00:22:54.464885 containerd[1437]: 2025-05-08 00:22:54.435 [INFO][4063] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7" HandleID="k8s-pod-network.aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7" Workload="localhost-k8s-calico--kube--controllers--8f4df646d--hrzsd-eth0" May 8 00:22:54.464885 containerd[1437]: 2025-05-08 00:22:54.435 [INFO][4063] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:22:54.464885 containerd[1437]: 2025-05-08 00:22:54.446 [INFO][4063] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:22:54.464885 containerd[1437]: 2025-05-08 00:22:54.458 [WARNING][4063] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7" HandleID="k8s-pod-network.aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7" Workload="localhost-k8s-calico--kube--controllers--8f4df646d--hrzsd-eth0" May 8 00:22:54.464885 containerd[1437]: 2025-05-08 00:22:54.458 [INFO][4063] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7" HandleID="k8s-pod-network.aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7" Workload="localhost-k8s-calico--kube--controllers--8f4df646d--hrzsd-eth0" May 8 00:22:54.464885 containerd[1437]: 2025-05-08 00:22:54.460 [INFO][4063] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:22:54.464885 containerd[1437]: 2025-05-08 00:22:54.462 [INFO][4045] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7" May 8 00:22:54.466849 containerd[1437]: time="2025-05-08T00:22:54.466811310Z" level=info msg="TearDown network for sandbox \"aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7\" successfully" May 8 00:22:54.466849 containerd[1437]: time="2025-05-08T00:22:54.466844630Z" level=info msg="StopPodSandbox for \"aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7\" returns successfully" May 8 00:22:54.466945 systemd[1]: run-netns-cni\x2d9d4dc684\x2d4e87\x2dead4\x2d5e95\x2d56e6ca5619fb.mount: Deactivated successfully. May 8 00:22:54.467767 containerd[1437]: time="2025-05-08T00:22:54.467426870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8f4df646d-hrzsd,Uid:7441d347-bcf2-42a6-b5f7-183f25ff2768,Namespace:calico-system,Attempt:1,}" May 8 00:22:54.610018 systemd-networkd[1383]: cali8727e66adfa: Link UP May 8 00:22:54.610542 systemd-networkd[1383]: cali8727e66adfa: Gained carrier May 8 00:22:54.625679 containerd[1437]: 2025-05-08 00:22:54.524 [INFO][4099] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--8f4df646d--hrzsd-eth0 calico-kube-controllers-8f4df646d- calico-system 7441d347-bcf2-42a6-b5f7-183f25ff2768 857 0 2025-05-08 00:22:34 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:8f4df646d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-8f4df646d-hrzsd eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8727e66adfa [] []}} ContainerID="2a54d3a66b4731e8dfbe3087797dac73181ca8e3280a83dc74cfe920fcddb8bc" Namespace="calico-system" Pod="calico-kube-controllers-8f4df646d-hrzsd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8f4df646d--hrzsd-" May 8 00:22:54.625679 containerd[1437]: 2025-05-08 00:22:54.524 [INFO][4099] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2a54d3a66b4731e8dfbe3087797dac73181ca8e3280a83dc74cfe920fcddb8bc" Namespace="calico-system" Pod="calico-kube-controllers-8f4df646d-hrzsd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8f4df646d--hrzsd-eth0" May 8 00:22:54.625679 containerd[1437]: 2025-05-08 00:22:54.563 [INFO][4117] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2a54d3a66b4731e8dfbe3087797dac73181ca8e3280a83dc74cfe920fcddb8bc" HandleID="k8s-pod-network.2a54d3a66b4731e8dfbe3087797dac73181ca8e3280a83dc74cfe920fcddb8bc" Workload="localhost-k8s-calico--kube--controllers--8f4df646d--hrzsd-eth0" May 8 00:22:54.625679 containerd[1437]: 2025-05-08 00:22:54.578 [INFO][4117] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2a54d3a66b4731e8dfbe3087797dac73181ca8e3280a83dc74cfe920fcddb8bc" HandleID="k8s-pod-network.2a54d3a66b4731e8dfbe3087797dac73181ca8e3280a83dc74cfe920fcddb8bc" Workload="localhost-k8s-calico--kube--controllers--8f4df646d--hrzsd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003aa280), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-8f4df646d-hrzsd", "timestamp":"2025-05-08 00:22:54.56358954 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:22:54.625679 containerd[1437]: 2025-05-08 00:22:54.578 [INFO][4117] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:22:54.625679 containerd[1437]: 2025-05-08 00:22:54.578 [INFO][4117] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:22:54.625679 containerd[1437]: 2025-05-08 00:22:54.578 [INFO][4117] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:22:54.625679 containerd[1437]: 2025-05-08 00:22:54.579 [INFO][4117] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2a54d3a66b4731e8dfbe3087797dac73181ca8e3280a83dc74cfe920fcddb8bc" host="localhost" May 8 00:22:54.625679 containerd[1437]: 2025-05-08 00:22:54.584 [INFO][4117] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:22:54.625679 containerd[1437]: 2025-05-08 00:22:54.588 [INFO][4117] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:22:54.625679 containerd[1437]: 2025-05-08 00:22:54.590 [INFO][4117] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:22:54.625679 containerd[1437]: 2025-05-08 00:22:54.591 [INFO][4117] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:22:54.625679 containerd[1437]: 2025-05-08 00:22:54.591 [INFO][4117] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2a54d3a66b4731e8dfbe3087797dac73181ca8e3280a83dc74cfe920fcddb8bc" host="localhost" May 8 00:22:54.625679 containerd[1437]: 2025-05-08 00:22:54.593 [INFO][4117] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2a54d3a66b4731e8dfbe3087797dac73181ca8e3280a83dc74cfe920fcddb8bc May 8 00:22:54.625679 containerd[1437]: 2025-05-08 00:22:54.596 [INFO][4117] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2a54d3a66b4731e8dfbe3087797dac73181ca8e3280a83dc74cfe920fcddb8bc" host="localhost" May 8 00:22:54.625679 containerd[1437]: 2025-05-08 00:22:54.601 [INFO][4117] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.2a54d3a66b4731e8dfbe3087797dac73181ca8e3280a83dc74cfe920fcddb8bc" host="localhost" May 8 00:22:54.625679 containerd[1437]: 2025-05-08 00:22:54.601 [INFO][4117] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.2a54d3a66b4731e8dfbe3087797dac73181ca8e3280a83dc74cfe920fcddb8bc" host="localhost" May 8 00:22:54.625679 containerd[1437]: 2025-05-08 00:22:54.601 [INFO][4117] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 8 00:22:54.625679 containerd[1437]: 2025-05-08 00:22:54.601 [INFO][4117] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="2a54d3a66b4731e8dfbe3087797dac73181ca8e3280a83dc74cfe920fcddb8bc" HandleID="k8s-pod-network.2a54d3a66b4731e8dfbe3087797dac73181ca8e3280a83dc74cfe920fcddb8bc" Workload="localhost-k8s-calico--kube--controllers--8f4df646d--hrzsd-eth0" May 8 00:22:54.626370 containerd[1437]: 2025-05-08 00:22:54.606 [INFO][4099] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2a54d3a66b4731e8dfbe3087797dac73181ca8e3280a83dc74cfe920fcddb8bc" Namespace="calico-system" Pod="calico-kube-controllers-8f4df646d-hrzsd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8f4df646d--hrzsd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--8f4df646d--hrzsd-eth0", GenerateName:"calico-kube-controllers-8f4df646d-", Namespace:"calico-system", SelfLink:"", UID:"7441d347-bcf2-42a6-b5f7-183f25ff2768", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 22, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8f4df646d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-8f4df646d-hrzsd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8727e66adfa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:22:54.626370 containerd[1437]: 2025-05-08 00:22:54.606 [INFO][4099] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="2a54d3a66b4731e8dfbe3087797dac73181ca8e3280a83dc74cfe920fcddb8bc" Namespace="calico-system" Pod="calico-kube-controllers-8f4df646d-hrzsd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8f4df646d--hrzsd-eth0" May 8 00:22:54.626370 containerd[1437]: 2025-05-08 00:22:54.606 [INFO][4099] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8727e66adfa ContainerID="2a54d3a66b4731e8dfbe3087797dac73181ca8e3280a83dc74cfe920fcddb8bc" Namespace="calico-system" Pod="calico-kube-controllers-8f4df646d-hrzsd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8f4df646d--hrzsd-eth0" May 8 00:22:54.626370 containerd[1437]: 2025-05-08 00:22:54.611 [INFO][4099] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2a54d3a66b4731e8dfbe3087797dac73181ca8e3280a83dc74cfe920fcddb8bc" Namespace="calico-system" Pod="calico-kube-controllers-8f4df646d-hrzsd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8f4df646d--hrzsd-eth0" May 8 00:22:54.626370 containerd[1437]: 2025-05-08 00:22:54.611 [INFO][4099] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="2a54d3a66b4731e8dfbe3087797dac73181ca8e3280a83dc74cfe920fcddb8bc" Namespace="calico-system" Pod="calico-kube-controllers-8f4df646d-hrzsd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8f4df646d--hrzsd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--8f4df646d--hrzsd-eth0", GenerateName:"calico-kube-controllers-8f4df646d-", Namespace:"calico-system", SelfLink:"", UID:"7441d347-bcf2-42a6-b5f7-183f25ff2768", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 22, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8f4df646d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2a54d3a66b4731e8dfbe3087797dac73181ca8e3280a83dc74cfe920fcddb8bc", Pod:"calico-kube-controllers-8f4df646d-hrzsd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8727e66adfa", MAC:"46:b6:14:26:fb:d4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:22:54.626370 containerd[1437]: 2025-05-08 00:22:54.620 [INFO][4099] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2a54d3a66b4731e8dfbe3087797dac73181ca8e3280a83dc74cfe920fcddb8bc" Namespace="calico-system" Pod="calico-kube-controllers-8f4df646d-hrzsd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8f4df646d--hrzsd-eth0" May 8 00:22:54.649066 systemd-networkd[1383]: cali0c59c53ce77: Link UP May 8 00:22:54.649779 systemd-networkd[1383]: cali0c59c53ce77: Gained carrier May 8 00:22:54.664559 containerd[1437]: time="2025-05-08T00:22:54.664381689Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:22:54.664559 containerd[1437]: time="2025-05-08T00:22:54.664531689Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:22:54.664799 containerd[1437]: time="2025-05-08T00:22:54.664557529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:22:54.664834 containerd[1437]: time="2025-05-08T00:22:54.664665289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:22:54.670811 containerd[1437]: 2025-05-08 00:22:54.522 [INFO][4087] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--j4988-eth0 coredns-7db6d8ff4d- kube-system 8196bfa1-7d4c-4b32-bb04-7483aba589c0 856 0 2025-05-08 00:22:26 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-j4988 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0c59c53ce77 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="3d7b1f8f93d32867fa5391112ca941c14b18ee476e20691724708bc64fab4f62" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j4988" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j4988-" May 8 00:22:54.670811 containerd[1437]: 2025-05-08 00:22:54.522 [INFO][4087] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3d7b1f8f93d32867fa5391112ca941c14b18ee476e20691724708bc64fab4f62" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j4988" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j4988-eth0" May 8 00:22:54.670811 containerd[1437]: 2025-05-08 00:22:54.573 [INFO][4115] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3d7b1f8f93d32867fa5391112ca941c14b18ee476e20691724708bc64fab4f62" HandleID="k8s-pod-network.3d7b1f8f93d32867fa5391112ca941c14b18ee476e20691724708bc64fab4f62" Workload="localhost-k8s-coredns--7db6d8ff4d--j4988-eth0" May 8 00:22:54.670811 containerd[1437]: 2025-05-08 00:22:54.583 [INFO][4115] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3d7b1f8f93d32867fa5391112ca941c14b18ee476e20691724708bc64fab4f62" HandleID="k8s-pod-network.3d7b1f8f93d32867fa5391112ca941c14b18ee476e20691724708bc64fab4f62" Workload="localhost-k8s-coredns--7db6d8ff4d--j4988-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d1a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-j4988", "timestamp":"2025-05-08 00:22:54.573551899 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:22:54.670811 containerd[1437]: 2025-05-08 00:22:54.583 [INFO][4115] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:22:54.670811 containerd[1437]: 2025-05-08 00:22:54.601 [INFO][4115] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:22:54.670811 containerd[1437]: 2025-05-08 00:22:54.601 [INFO][4115] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:22:54.670811 containerd[1437]: 2025-05-08 00:22:54.604 [INFO][4115] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3d7b1f8f93d32867fa5391112ca941c14b18ee476e20691724708bc64fab4f62" host="localhost" May 8 00:22:54.670811 containerd[1437]: 2025-05-08 00:22:54.612 [INFO][4115] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:22:54.670811 containerd[1437]: 2025-05-08 00:22:54.616 [INFO][4115] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:22:54.670811 containerd[1437]: 2025-05-08 00:22:54.618 [INFO][4115] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:22:54.670811 containerd[1437]: 2025-05-08 00:22:54.625 [INFO][4115] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:22:54.670811 containerd[1437]: 2025-05-08 00:22:54.625 [INFO][4115] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3d7b1f8f93d32867fa5391112ca941c14b18ee476e20691724708bc64fab4f62" host="localhost" May 8 00:22:54.670811 containerd[1437]: 2025-05-08 00:22:54.626 [INFO][4115] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3d7b1f8f93d32867fa5391112ca941c14b18ee476e20691724708bc64fab4f62 May 8 00:22:54.670811 containerd[1437]: 2025-05-08 00:22:54.635 [INFO][4115] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3d7b1f8f93d32867fa5391112ca941c14b18ee476e20691724708bc64fab4f62" host="localhost" May 8 00:22:54.670811 containerd[1437]: 2025-05-08 00:22:54.641 [INFO][4115] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.3d7b1f8f93d32867fa5391112ca941c14b18ee476e20691724708bc64fab4f62" host="localhost" May 8 00:22:54.670811 containerd[1437]: 2025-05-08 00:22:54.641 [INFO][4115] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.3d7b1f8f93d32867fa5391112ca941c14b18ee476e20691724708bc64fab4f62" host="localhost" May 8 00:22:54.670811 containerd[1437]: 2025-05-08 00:22:54.641 [INFO][4115] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 8 00:22:54.670811 containerd[1437]: 2025-05-08 00:22:54.641 [INFO][4115] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="3d7b1f8f93d32867fa5391112ca941c14b18ee476e20691724708bc64fab4f62" HandleID="k8s-pod-network.3d7b1f8f93d32867fa5391112ca941c14b18ee476e20691724708bc64fab4f62" Workload="localhost-k8s-coredns--7db6d8ff4d--j4988-eth0" May 8 00:22:54.671655 containerd[1437]: 2025-05-08 00:22:54.646 [INFO][4087] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3d7b1f8f93d32867fa5391112ca941c14b18ee476e20691724708bc64fab4f62" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j4988" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j4988-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--j4988-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8196bfa1-7d4c-4b32-bb04-7483aba589c0", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 22, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-j4988", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0c59c53ce77", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:22:54.671655 containerd[1437]: 2025-05-08 00:22:54.646 [INFO][4087] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="3d7b1f8f93d32867fa5391112ca941c14b18ee476e20691724708bc64fab4f62" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j4988" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j4988-eth0" May 8 00:22:54.671655 containerd[1437]: 2025-05-08 00:22:54.646 [INFO][4087] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0c59c53ce77 ContainerID="3d7b1f8f93d32867fa5391112ca941c14b18ee476e20691724708bc64fab4f62" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j4988" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j4988-eth0" May 8 00:22:54.671655 containerd[1437]: 2025-05-08 00:22:54.649 [INFO][4087] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3d7b1f8f93d32867fa5391112ca941c14b18ee476e20691724708bc64fab4f62" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j4988" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j4988-eth0" May 8 00:22:54.671655 containerd[1437]: 2025-05-08 00:22:54.650 [INFO][4087] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3d7b1f8f93d32867fa5391112ca941c14b18ee476e20691724708bc64fab4f62" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j4988" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j4988-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--j4988-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8196bfa1-7d4c-4b32-bb04-7483aba589c0", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 22, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3d7b1f8f93d32867fa5391112ca941c14b18ee476e20691724708bc64fab4f62", Pod:"coredns-7db6d8ff4d-j4988", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0c59c53ce77", MAC:"66:89:6a:27:53:18", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:22:54.671655 containerd[1437]: 2025-05-08 00:22:54.666 [INFO][4087] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3d7b1f8f93d32867fa5391112ca941c14b18ee476e20691724708bc64fab4f62" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j4988" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j4988-eth0" May 8 00:22:54.688884 systemd[1]: Started cri-containerd-2a54d3a66b4731e8dfbe3087797dac73181ca8e3280a83dc74cfe920fcddb8bc.scope - libcontainer container 2a54d3a66b4731e8dfbe3087797dac73181ca8e3280a83dc74cfe920fcddb8bc. May 8 00:22:54.707583 containerd[1437]: time="2025-05-08T00:22:54.707349165Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:22:54.707583 containerd[1437]: time="2025-05-08T00:22:54.707404125Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:22:54.707583 containerd[1437]: time="2025-05-08T00:22:54.707419645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:22:54.707583 containerd[1437]: time="2025-05-08T00:22:54.707492085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:22:54.709821 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:22:54.735033 systemd[1]: Started cri-containerd-3d7b1f8f93d32867fa5391112ca941c14b18ee476e20691724708bc64fab4f62.scope - libcontainer container 3d7b1f8f93d32867fa5391112ca941c14b18ee476e20691724708bc64fab4f62. May 8 00:22:54.747208 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:22:54.752249 containerd[1437]: time="2025-05-08T00:22:54.752151560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8f4df646d-hrzsd,Uid:7441d347-bcf2-42a6-b5f7-183f25ff2768,Namespace:calico-system,Attempt:1,} returns sandbox id \"2a54d3a66b4731e8dfbe3087797dac73181ca8e3280a83dc74cfe920fcddb8bc\"" May 8 00:22:54.754784 containerd[1437]: time="2025-05-08T00:22:54.753629760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 8 00:22:54.764293 containerd[1437]: time="2025-05-08T00:22:54.764258439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j4988,Uid:8196bfa1-7d4c-4b32-bb04-7483aba589c0,Namespace:kube-system,Attempt:1,} returns sandbox id \"3d7b1f8f93d32867fa5391112ca941c14b18ee476e20691724708bc64fab4f62\"" May 8 00:22:54.764864 kubelet[2557]: E0508 00:22:54.764839 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:54.769300 containerd[1437]: time="2025-05-08T00:22:54.769264518Z" level=info msg="CreateContainer within sandbox \"3d7b1f8f93d32867fa5391112ca941c14b18ee476e20691724708bc64fab4f62\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:22:54.788413 containerd[1437]: time="2025-05-08T00:22:54.788371636Z" level=info msg="CreateContainer within sandbox \"3d7b1f8f93d32867fa5391112ca941c14b18ee476e20691724708bc64fab4f62\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9dccc204b6718aa25d6409d4046547e80bf4caee77adf3afc1cd821588d2a7cd\"" May 8 00:22:54.789207 containerd[1437]: time="2025-05-08T00:22:54.789000236Z" level=info msg="StartContainer for \"9dccc204b6718aa25d6409d4046547e80bf4caee77adf3afc1cd821588d2a7cd\"" May 8 00:22:54.814900 systemd[1]: Started cri-containerd-9dccc204b6718aa25d6409d4046547e80bf4caee77adf3afc1cd821588d2a7cd.scope - libcontainer container 9dccc204b6718aa25d6409d4046547e80bf4caee77adf3afc1cd821588d2a7cd. 
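[Editor's note] The 00:22:54 entries walk the standard CRI flow: tear down the stale pause sandbox, run the new one (Attempt:1), then CreateContainer within the sandbox and StartContainer, with systemd tracking each container as a transient cri-containerd-<id>.scope unit. A rough equivalent of the pull/create/start sequence using the containerd 1.x Go client; this is a sketch, not kubelet's actual code path (the CRI plugin drives the same steps over gRPC):

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed containers live in the "k8s.io" containerd namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/kube-controllers:v3.29.3", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	container, err := client.NewContainer(ctx, "example-kube-controllers", // hypothetical container ID
		containerd.WithNewSnapshot("example-kube-controllers-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// NewTask + Start is the step that "StartContainer ... returns successfully" reports.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
}
```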
May 8 00:22:54.840357 containerd[1437]: time="2025-05-08T00:22:54.840308751Z" level=info msg="StartContainer for \"9dccc204b6718aa25d6409d4046547e80bf4caee77adf3afc1cd821588d2a7cd\" returns successfully" May 8 00:22:55.368585 kubelet[2557]: E0508 00:22:55.368446 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:55.377084 kubelet[2557]: I0508 00:22:55.376613 2557 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-j4988" podStartSLOduration=29.376601456 podStartE2EDuration="29.376601456s" podCreationTimestamp="2025-05-08 00:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:22:55.376235496 +0000 UTC m=+43.279099381" watchObservedRunningTime="2025-05-08 00:22:55.376601456 +0000 UTC m=+43.279465341" May 8 00:22:56.192070 containerd[1437]: time="2025-05-08T00:22:56.192028777Z" level=info msg="StopPodSandbox for \"1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616\"" May 8 00:22:56.193161 containerd[1437]: time="2025-05-08T00:22:56.192254737Z" level=info msg="StopPodSandbox for \"0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c\"" May 8 00:22:56.298880 containerd[1437]: 2025-05-08 00:22:56.256 [INFO][4335] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616" May 8 00:22:56.298880 containerd[1437]: 2025-05-08 00:22:56.256 [INFO][4335] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616" iface="eth0" netns="/var/run/netns/cni-9817c8fc-7854-0b8a-05bd-37d54e0d865e" May 8 00:22:56.298880 containerd[1437]: 2025-05-08 00:22:56.256 [INFO][4335] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616" iface="eth0" netns="/var/run/netns/cni-9817c8fc-7854-0b8a-05bd-37d54e0d865e" May 8 00:22:56.298880 containerd[1437]: 2025-05-08 00:22:56.257 [INFO][4335] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616" iface="eth0" netns="/var/run/netns/cni-9817c8fc-7854-0b8a-05bd-37d54e0d865e" May 8 00:22:56.298880 containerd[1437]: 2025-05-08 00:22:56.257 [INFO][4335] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616" May 8 00:22:56.298880 containerd[1437]: 2025-05-08 00:22:56.257 [INFO][4335] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616" May 8 00:22:56.298880 containerd[1437]: 2025-05-08 00:22:56.282 [INFO][4346] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616" HandleID="k8s-pod-network.1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616" Workload="localhost-k8s-calico--apiserver--5cbb7457c4--9bvwx-eth0" May 8 00:22:56.298880 containerd[1437]: 2025-05-08 00:22:56.282 [INFO][4346] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:22:56.298880 containerd[1437]: 2025-05-08 00:22:56.282 [INFO][4346] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:22:56.298880 containerd[1437]: 2025-05-08 00:22:56.292 [WARNING][4346] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616" HandleID="k8s-pod-network.1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616" Workload="localhost-k8s-calico--apiserver--5cbb7457c4--9bvwx-eth0" May 8 00:22:56.298880 containerd[1437]: 2025-05-08 00:22:56.292 [INFO][4346] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616" HandleID="k8s-pod-network.1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616" Workload="localhost-k8s-calico--apiserver--5cbb7457c4--9bvwx-eth0" May 8 00:22:56.298880 containerd[1437]: 2025-05-08 00:22:56.293 [INFO][4346] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:22:56.298880 containerd[1437]: 2025-05-08 00:22:56.296 [INFO][4335] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616" May 8 00:22:56.300436 containerd[1437]: time="2025-05-08T00:22:56.299073287Z" level=info msg="TearDown network for sandbox \"1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616\" successfully" May 8 00:22:56.300436 containerd[1437]: time="2025-05-08T00:22:56.299103607Z" level=info msg="StopPodSandbox for \"1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616\" returns successfully" May 8 00:22:56.301434 systemd[1]: run-netns-cni\x2d9817c8fc\x2d7854\x2d0b8a\x2d05bd\x2d37d54e0d865e.mount: Deactivated successfully. May 8 00:22:56.302106 containerd[1437]: time="2025-05-08T00:22:56.301498286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cbb7457c4-9bvwx,Uid:76116a76-06fa-4e4c-be4b-d6de000109ca,Namespace:calico-apiserver,Attempt:1,}" May 8 00:22:56.316847 containerd[1437]: 2025-05-08 00:22:56.257 [INFO][4329] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c" May 8 00:22:56.316847 containerd[1437]: 2025-05-08 00:22:56.257 [INFO][4329] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c" iface="eth0" netns="/var/run/netns/cni-08c88757-1b0d-12aa-332d-f92f19db4eba" May 8 00:22:56.316847 containerd[1437]: 2025-05-08 00:22:56.258 [INFO][4329] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c" iface="eth0" netns="/var/run/netns/cni-08c88757-1b0d-12aa-332d-f92f19db4eba" May 8 00:22:56.316847 containerd[1437]: 2025-05-08 00:22:56.258 [INFO][4329] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c" iface="eth0" netns="/var/run/netns/cni-08c88757-1b0d-12aa-332d-f92f19db4eba" May 8 00:22:56.316847 containerd[1437]: 2025-05-08 00:22:56.258 [INFO][4329] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c" May 8 00:22:56.316847 containerd[1437]: 2025-05-08 00:22:56.258 [INFO][4329] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c" May 8 00:22:56.316847 containerd[1437]: 2025-05-08 00:22:56.301 [INFO][4348] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c" HandleID="k8s-pod-network.0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c" Workload="localhost-k8s-calico--apiserver--5cbb7457c4--99wch-eth0" May 8 00:22:56.316847 containerd[1437]: 2025-05-08 00:22:56.301 [INFO][4348] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:22:56.316847 containerd[1437]: 2025-05-08 00:22:56.301 [INFO][4348] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:22:56.316847 containerd[1437]: 2025-05-08 00:22:56.310 [WARNING][4348] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c" HandleID="k8s-pod-network.0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c" Workload="localhost-k8s-calico--apiserver--5cbb7457c4--99wch-eth0" May 8 00:22:56.316847 containerd[1437]: 2025-05-08 00:22:56.310 [INFO][4348] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c" HandleID="k8s-pod-network.0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c" Workload="localhost-k8s-calico--apiserver--5cbb7457c4--99wch-eth0" May 8 00:22:56.316847 containerd[1437]: 2025-05-08 00:22:56.312 [INFO][4348] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:22:56.316847 containerd[1437]: 2025-05-08 00:22:56.314 [INFO][4329] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c" May 8 00:22:56.317654 containerd[1437]: time="2025-05-08T00:22:56.317016005Z" level=info msg="TearDown network for sandbox \"0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c\" successfully" May 8 00:22:56.317654 containerd[1437]: time="2025-05-08T00:22:56.317043445Z" level=info msg="StopPodSandbox for \"0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c\" returns successfully" May 8 00:22:56.317873 containerd[1437]: time="2025-05-08T00:22:56.317837885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cbb7457c4-99wch,Uid:37b135c5-5fc9-4679-a6b3-7f9b2a12dd64,Namespace:calico-apiserver,Attempt:1,}" May 8 00:22:56.319034 systemd[1]: run-netns-cni\x2d08c88757\x2d1b0d\x2d12aa\x2d332d\x2df92f19db4eba.mount: Deactivated successfully. 
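[Editor's note] Each CNI ADD above repeats the same IPAM walk: acquire the host-wide lock, find the block affine to this host (192.168.88.128/26), claim the next free address in it (.129 and .130 above, .131 below), and write the block back before releasing the lock. A schematic of that allocation order under stated assumptions (single pre-affine block, network address reserved so the first claim is .129; Calico's real implementation persists blocks in a datastore with compare-and-swap, not an in-memory map):

```go
package main

import (
	"fmt"
	"net/netip"
	"sync"
)

// block models one host-affine IPAM block, e.g. 192.168.88.128/26.
type block struct {
	cidr netip.Prefix
	used map[netip.Addr]string // address -> allocation handle
}

var hostWideLock sync.Mutex // stands in for "Acquired host-wide IPAM lock"

func (b *block) autoAssign(handle string) (netip.Addr, error) {
	hostWideLock.Lock()
	defer hostWideLock.Unlock()

	// Skip the network address, so the first claim is .129 as in the log.
	for a := b.cidr.Addr().Next(); b.cidr.Contains(a); a = a.Next() {
		if _, taken := b.used[a]; !taken {
			b.used[a] = handle // "Writing block in order to claim IPs"
			return a, nil
		}
	}
	return netip.Addr{}, fmt.Errorf("block %s exhausted", b.cidr)
}

func main() {
	b := &block{cidr: netip.MustParsePrefix("192.168.88.128/26"), used: map[netip.Addr]string{}}
	for _, handle := range []string{"pod-a", "pod-b", "pod-c"} { // hypothetical handles
		ip, _ := b.autoAssign(handle)
		fmt.Println(handle, "->", ip) // .129, .130, .131 in turn
	}
}
```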
May 8 00:22:56.376566 kubelet[2557]: E0508 00:22:56.376527 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:56.408754 containerd[1437]: time="2025-05-08T00:22:56.408547436Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116" May 8 00:22:56.413778 containerd[1437]: time="2025-05-08T00:22:56.413700636Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 1.660035436s" May 8 00:22:56.414019 containerd[1437]: time="2025-05-08T00:22:56.413896636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" May 8 00:22:56.415692 containerd[1437]: time="2025-05-08T00:22:56.415633676Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:22:56.416762 containerd[1437]: time="2025-05-08T00:22:56.416308876Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:22:56.417061 containerd[1437]: time="2025-05-08T00:22:56.416856036Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:22:56.427828 containerd[1437]: time="2025-05-08T00:22:56.427787075Z" level=info msg="CreateContainer within sandbox \"2a54d3a66b4731e8dfbe3087797dac73181ca8e3280a83dc74cfe920fcddb8bc\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 8 00:22:56.438957 containerd[1437]: time="2025-05-08T00:22:56.438896074Z" level=info msg="CreateContainer within sandbox \"2a54d3a66b4731e8dfbe3087797dac73181ca8e3280a83dc74cfe920fcddb8bc\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"931666551f626e9c6f809ac494bd371a469e0a457c105fbb9bdbd14dfa9f66c6\"" May 8 00:22:56.440148 containerd[1437]: time="2025-05-08T00:22:56.440112473Z" level=info msg="StartContainer for \"931666551f626e9c6f809ac494bd371a469e0a457c105fbb9bdbd14dfa9f66c6\"" May 8 00:22:56.473795 systemd-networkd[1383]: calicdefd006f55: Link UP May 8 00:22:56.474360 systemd-networkd[1383]: calicdefd006f55: Gained carrier May 8 00:22:56.477824 systemd-networkd[1383]: cali8727e66adfa: Gained IPv6LL May 8 00:22:56.480812 systemd-networkd[1383]: cali0c59c53ce77: Gained IPv6LL May 8 00:22:56.496199 containerd[1437]: 2025-05-08 00:22:56.378 [INFO][4367] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5cbb7457c4--99wch-eth0 calico-apiserver-5cbb7457c4- calico-apiserver 37b135c5-5fc9-4679-a6b3-7f9b2a12dd64 896 0 2025-05-08 00:22:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5cbb7457c4 
projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5cbb7457c4-99wch eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calicdefd006f55 [] []}} ContainerID="1d7e66d758a7cfcae8a5598446d973aa0693f03e56536c0026dd59053144b63e" Namespace="calico-apiserver" Pod="calico-apiserver-5cbb7457c4-99wch" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cbb7457c4--99wch-" May 8 00:22:56.496199 containerd[1437]: 2025-05-08 00:22:56.378 [INFO][4367] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1d7e66d758a7cfcae8a5598446d973aa0693f03e56536c0026dd59053144b63e" Namespace="calico-apiserver" Pod="calico-apiserver-5cbb7457c4-99wch" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cbb7457c4--99wch-eth0" May 8 00:22:56.496199 containerd[1437]: 2025-05-08 00:22:56.414 [INFO][4394] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1d7e66d758a7cfcae8a5598446d973aa0693f03e56536c0026dd59053144b63e" HandleID="k8s-pod-network.1d7e66d758a7cfcae8a5598446d973aa0693f03e56536c0026dd59053144b63e" Workload="localhost-k8s-calico--apiserver--5cbb7457c4--99wch-eth0" May 8 00:22:56.496199 containerd[1437]: 2025-05-08 00:22:56.428 [INFO][4394] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1d7e66d758a7cfcae8a5598446d973aa0693f03e56536c0026dd59053144b63e" HandleID="k8s-pod-network.1d7e66d758a7cfcae8a5598446d973aa0693f03e56536c0026dd59053144b63e" Workload="localhost-k8s-calico--apiserver--5cbb7457c4--99wch-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002920d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5cbb7457c4-99wch", "timestamp":"2025-05-08 00:22:56.414607716 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:22:56.496199 containerd[1437]: 2025-05-08 00:22:56.429 [INFO][4394] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:22:56.496199 containerd[1437]: 2025-05-08 00:22:56.429 [INFO][4394] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:22:56.496199 containerd[1437]: 2025-05-08 00:22:56.429 [INFO][4394] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:22:56.496199 containerd[1437]: 2025-05-08 00:22:56.435 [INFO][4394] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1d7e66d758a7cfcae8a5598446d973aa0693f03e56536c0026dd59053144b63e" host="localhost" May 8 00:22:56.496199 containerd[1437]: 2025-05-08 00:22:56.440 [INFO][4394] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:22:56.496199 containerd[1437]: 2025-05-08 00:22:56.447 [INFO][4394] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:22:56.496199 containerd[1437]: 2025-05-08 00:22:56.448 [INFO][4394] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:22:56.496199 containerd[1437]: 2025-05-08 00:22:56.451 [INFO][4394] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:22:56.496199 containerd[1437]: 2025-05-08 00:22:56.451 [INFO][4394] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1d7e66d758a7cfcae8a5598446d973aa0693f03e56536c0026dd59053144b63e" host="localhost" May 8 00:22:56.496199 containerd[1437]: 2025-05-08 00:22:56.453 [INFO][4394] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1d7e66d758a7cfcae8a5598446d973aa0693f03e56536c0026dd59053144b63e May 8 00:22:56.496199 containerd[1437]: 2025-05-08 00:22:56.459 [INFO][4394] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1d7e66d758a7cfcae8a5598446d973aa0693f03e56536c0026dd59053144b63e" host="localhost" May 8 00:22:56.496199 containerd[1437]: 2025-05-08 00:22:56.464 [INFO][4394] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.1d7e66d758a7cfcae8a5598446d973aa0693f03e56536c0026dd59053144b63e" host="localhost" May 8 00:22:56.496199 containerd[1437]: 2025-05-08 00:22:56.464 [INFO][4394] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.1d7e66d758a7cfcae8a5598446d973aa0693f03e56536c0026dd59053144b63e" host="localhost" May 8 00:22:56.496199 containerd[1437]: 2025-05-08 00:22:56.464 [INFO][4394] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 8 00:22:56.496199 containerd[1437]: 2025-05-08 00:22:56.464 [INFO][4394] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="1d7e66d758a7cfcae8a5598446d973aa0693f03e56536c0026dd59053144b63e" HandleID="k8s-pod-network.1d7e66d758a7cfcae8a5598446d973aa0693f03e56536c0026dd59053144b63e" Workload="localhost-k8s-calico--apiserver--5cbb7457c4--99wch-eth0" May 8 00:22:56.496837 containerd[1437]: 2025-05-08 00:22:56.468 [INFO][4367] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1d7e66d758a7cfcae8a5598446d973aa0693f03e56536c0026dd59053144b63e" Namespace="calico-apiserver" Pod="calico-apiserver-5cbb7457c4-99wch" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cbb7457c4--99wch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5cbb7457c4--99wch-eth0", GenerateName:"calico-apiserver-5cbb7457c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"37b135c5-5fc9-4679-a6b3-7f9b2a12dd64", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 22, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cbb7457c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5cbb7457c4-99wch", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicdefd006f55", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:22:56.496837 containerd[1437]: 2025-05-08 00:22:56.468 [INFO][4367] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="1d7e66d758a7cfcae8a5598446d973aa0693f03e56536c0026dd59053144b63e" Namespace="calico-apiserver" Pod="calico-apiserver-5cbb7457c4-99wch" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cbb7457c4--99wch-eth0" May 8 00:22:56.496837 containerd[1437]: 2025-05-08 00:22:56.468 [INFO][4367] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicdefd006f55 ContainerID="1d7e66d758a7cfcae8a5598446d973aa0693f03e56536c0026dd59053144b63e" Namespace="calico-apiserver" Pod="calico-apiserver-5cbb7457c4-99wch" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cbb7457c4--99wch-eth0" May 8 00:22:56.496837 containerd[1437]: 2025-05-08 00:22:56.474 [INFO][4367] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1d7e66d758a7cfcae8a5598446d973aa0693f03e56536c0026dd59053144b63e" Namespace="calico-apiserver" Pod="calico-apiserver-5cbb7457c4-99wch" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cbb7457c4--99wch-eth0" May 8 00:22:56.496837 containerd[1437]: 2025-05-08 00:22:56.476 [INFO][4367] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1d7e66d758a7cfcae8a5598446d973aa0693f03e56536c0026dd59053144b63e" 
Namespace="calico-apiserver" Pod="calico-apiserver-5cbb7457c4-99wch" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cbb7457c4--99wch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5cbb7457c4--99wch-eth0", GenerateName:"calico-apiserver-5cbb7457c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"37b135c5-5fc9-4679-a6b3-7f9b2a12dd64", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 22, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cbb7457c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1d7e66d758a7cfcae8a5598446d973aa0693f03e56536c0026dd59053144b63e", Pod:"calico-apiserver-5cbb7457c4-99wch", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicdefd006f55", MAC:"c2:ad:5f:bf:62:7f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:22:56.496837 containerd[1437]: 2025-05-08 00:22:56.489 [INFO][4367] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1d7e66d758a7cfcae8a5598446d973aa0693f03e56536c0026dd59053144b63e" Namespace="calico-apiserver" Pod="calico-apiserver-5cbb7457c4-99wch" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cbb7457c4--99wch-eth0" May 8 00:22:56.510979 systemd[1]: Started cri-containerd-931666551f626e9c6f809ac494bd371a469e0a457c105fbb9bdbd14dfa9f66c6.scope - libcontainer container 931666551f626e9c6f809ac494bd371a469e0a457c105fbb9bdbd14dfa9f66c6. 
May 8 00:22:56.511114 systemd-networkd[1383]: cali40a4a5aad8d: Link UP May 8 00:22:56.511330 systemd-networkd[1383]: cali40a4a5aad8d: Gained carrier May 8 00:22:56.529887 containerd[1437]: 2025-05-08 00:22:56.383 [INFO][4361] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5cbb7457c4--9bvwx-eth0 calico-apiserver-5cbb7457c4- calico-apiserver 76116a76-06fa-4e4c-be4b-d6de000109ca 897 0 2025-05-08 00:22:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5cbb7457c4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5cbb7457c4-9bvwx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali40a4a5aad8d [] []}} ContainerID="4cf26aafdc6d6a449924bd9e9305aa9e895491d3724a8c5083f467679a3f7134" Namespace="calico-apiserver" Pod="calico-apiserver-5cbb7457c4-9bvwx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cbb7457c4--9bvwx-" May 8 00:22:56.529887 containerd[1437]: 2025-05-08 00:22:56.383 [INFO][4361] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4cf26aafdc6d6a449924bd9e9305aa9e895491d3724a8c5083f467679a3f7134" Namespace="calico-apiserver" Pod="calico-apiserver-5cbb7457c4-9bvwx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cbb7457c4--9bvwx-eth0" May 8 00:22:56.529887 containerd[1437]: 2025-05-08 00:22:56.427 [INFO][4400] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4cf26aafdc6d6a449924bd9e9305aa9e895491d3724a8c5083f467679a3f7134" HandleID="k8s-pod-network.4cf26aafdc6d6a449924bd9e9305aa9e895491d3724a8c5083f467679a3f7134" Workload="localhost-k8s-calico--apiserver--5cbb7457c4--9bvwx-eth0" May 8 00:22:56.529887 containerd[1437]: 2025-05-08 00:22:56.439 [INFO][4400] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4cf26aafdc6d6a449924bd9e9305aa9e895491d3724a8c5083f467679a3f7134" HandleID="k8s-pod-network.4cf26aafdc6d6a449924bd9e9305aa9e895491d3724a8c5083f467679a3f7134" Workload="localhost-k8s-calico--apiserver--5cbb7457c4--9bvwx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000442130), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5cbb7457c4-9bvwx", "timestamp":"2025-05-08 00:22:56.427262435 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:22:56.529887 containerd[1437]: 2025-05-08 00:22:56.439 [INFO][4400] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:22:56.529887 containerd[1437]: 2025-05-08 00:22:56.464 [INFO][4400] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:22:56.529887 containerd[1437]: 2025-05-08 00:22:56.465 [INFO][4400] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:22:56.529887 containerd[1437]: 2025-05-08 00:22:56.469 [INFO][4400] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4cf26aafdc6d6a449924bd9e9305aa9e895491d3724a8c5083f467679a3f7134" host="localhost" May 8 00:22:56.529887 containerd[1437]: 2025-05-08 00:22:56.475 [INFO][4400] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:22:56.529887 containerd[1437]: 2025-05-08 00:22:56.480 [INFO][4400] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:22:56.529887 containerd[1437]: 2025-05-08 00:22:56.481 [INFO][4400] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:22:56.529887 containerd[1437]: 2025-05-08 00:22:56.488 [INFO][4400] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:22:56.529887 containerd[1437]: 2025-05-08 00:22:56.488 [INFO][4400] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4cf26aafdc6d6a449924bd9e9305aa9e895491d3724a8c5083f467679a3f7134" host="localhost" May 8 00:22:56.529887 containerd[1437]: 2025-05-08 00:22:56.490 [INFO][4400] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4cf26aafdc6d6a449924bd9e9305aa9e895491d3724a8c5083f467679a3f7134 May 8 00:22:56.529887 containerd[1437]: 2025-05-08 00:22:56.498 [INFO][4400] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4cf26aafdc6d6a449924bd9e9305aa9e895491d3724a8c5083f467679a3f7134" host="localhost" May 8 00:22:56.529887 containerd[1437]: 2025-05-08 00:22:56.503 [INFO][4400] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.4cf26aafdc6d6a449924bd9e9305aa9e895491d3724a8c5083f467679a3f7134" host="localhost" May 8 00:22:56.529887 containerd[1437]: 2025-05-08 00:22:56.503 [INFO][4400] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.4cf26aafdc6d6a449924bd9e9305aa9e895491d3724a8c5083f467679a3f7134" host="localhost" May 8 00:22:56.529887 containerd[1437]: 2025-05-08 00:22:56.503 [INFO][4400] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
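Both apiserver pods land in the same affine block: 192.168.88.128/26 spans 192.168.88.128-191 (64 addresses), and the host hands out sequential addresses from it (.131 above, .132 here, .133 and .134 later in this log). The containment check is plain CIDR arithmetic, e.g. with Go's standard net/netip:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The block this host has confirmed affinity for.
	block := netip.MustParsePrefix("192.168.88.128/26")

	// Addresses claimed in this log all fall inside it.
	for _, s := range []string{
		"192.168.88.131", "192.168.88.132",
		"192.168.88.133", "192.168.88.134",
	} {
		fmt.Println(s, block.Contains(netip.MustParseAddr(s))) // all true
	}
}
```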
May 8 00:22:56.529887 containerd[1437]: 2025-05-08 00:22:56.503 [INFO][4400] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="4cf26aafdc6d6a449924bd9e9305aa9e895491d3724a8c5083f467679a3f7134" HandleID="k8s-pod-network.4cf26aafdc6d6a449924bd9e9305aa9e895491d3724a8c5083f467679a3f7134" Workload="localhost-k8s-calico--apiserver--5cbb7457c4--9bvwx-eth0" May 8 00:22:56.530802 containerd[1437]: 2025-05-08 00:22:56.507 [INFO][4361] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4cf26aafdc6d6a449924bd9e9305aa9e895491d3724a8c5083f467679a3f7134" Namespace="calico-apiserver" Pod="calico-apiserver-5cbb7457c4-9bvwx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cbb7457c4--9bvwx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5cbb7457c4--9bvwx-eth0", GenerateName:"calico-apiserver-5cbb7457c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"76116a76-06fa-4e4c-be4b-d6de000109ca", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 22, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cbb7457c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5cbb7457c4-9bvwx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali40a4a5aad8d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:22:56.530802 containerd[1437]: 2025-05-08 00:22:56.507 [INFO][4361] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="4cf26aafdc6d6a449924bd9e9305aa9e895491d3724a8c5083f467679a3f7134" Namespace="calico-apiserver" Pod="calico-apiserver-5cbb7457c4-9bvwx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cbb7457c4--9bvwx-eth0" May 8 00:22:56.530802 containerd[1437]: 2025-05-08 00:22:56.507 [INFO][4361] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali40a4a5aad8d ContainerID="4cf26aafdc6d6a449924bd9e9305aa9e895491d3724a8c5083f467679a3f7134" Namespace="calico-apiserver" Pod="calico-apiserver-5cbb7457c4-9bvwx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cbb7457c4--9bvwx-eth0" May 8 00:22:56.530802 containerd[1437]: 2025-05-08 00:22:56.511 [INFO][4361] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4cf26aafdc6d6a449924bd9e9305aa9e895491d3724a8c5083f467679a3f7134" Namespace="calico-apiserver" Pod="calico-apiserver-5cbb7457c4-9bvwx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cbb7457c4--9bvwx-eth0" May 8 00:22:56.530802 containerd[1437]: 2025-05-08 00:22:56.512 [INFO][4361] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4cf26aafdc6d6a449924bd9e9305aa9e895491d3724a8c5083f467679a3f7134" 
Namespace="calico-apiserver" Pod="calico-apiserver-5cbb7457c4-9bvwx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cbb7457c4--9bvwx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5cbb7457c4--9bvwx-eth0", GenerateName:"calico-apiserver-5cbb7457c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"76116a76-06fa-4e4c-be4b-d6de000109ca", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 22, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cbb7457c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4cf26aafdc6d6a449924bd9e9305aa9e895491d3724a8c5083f467679a3f7134", Pod:"calico-apiserver-5cbb7457c4-9bvwx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali40a4a5aad8d", MAC:"5e:d3:3b:e5:0b:63", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:22:56.530802 containerd[1437]: 2025-05-08 00:22:56.522 [INFO][4361] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4cf26aafdc6d6a449924bd9e9305aa9e895491d3724a8c5083f467679a3f7134" Namespace="calico-apiserver" Pod="calico-apiserver-5cbb7457c4-9bvwx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cbb7457c4--9bvwx-eth0" May 8 00:22:56.540793 containerd[1437]: time="2025-05-08T00:22:56.540356144Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:22:56.540793 containerd[1437]: time="2025-05-08T00:22:56.540473624Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:22:56.540793 containerd[1437]: time="2025-05-08T00:22:56.540492304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:22:56.543944 containerd[1437]: time="2025-05-08T00:22:56.540614824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:22:56.559378 containerd[1437]: time="2025-05-08T00:22:56.558446742Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:22:56.559378 containerd[1437]: time="2025-05-08T00:22:56.558505862Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:22:56.559767 containerd[1437]: time="2025-05-08T00:22:56.558635262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:22:56.559767 containerd[1437]: time="2025-05-08T00:22:56.558746062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:22:56.566278 containerd[1437]: time="2025-05-08T00:22:56.564344662Z" level=info msg="StartContainer for \"931666551f626e9c6f809ac494bd371a469e0a457c105fbb9bdbd14dfa9f66c6\" returns successfully" May 8 00:22:56.573874 systemd[1]: Started cri-containerd-1d7e66d758a7cfcae8a5598446d973aa0693f03e56536c0026dd59053144b63e.scope - libcontainer container 1d7e66d758a7cfcae8a5598446d973aa0693f03e56536c0026dd59053144b63e. May 8 00:22:56.577501 systemd[1]: Started cri-containerd-4cf26aafdc6d6a449924bd9e9305aa9e895491d3724a8c5083f467679a3f7134.scope - libcontainer container 4cf26aafdc6d6a449924bd9e9305aa9e895491d3724a8c5083f467679a3f7134. May 8 00:22:56.596800 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:22:56.597228 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:22:56.618085 containerd[1437]: time="2025-05-08T00:22:56.617691737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cbb7457c4-9bvwx,Uid:76116a76-06fa-4e4c-be4b-d6de000109ca,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"4cf26aafdc6d6a449924bd9e9305aa9e895491d3724a8c5083f467679a3f7134\"" May 8 00:22:56.620752 containerd[1437]: time="2025-05-08T00:22:56.620704377Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 8 00:22:56.626665 containerd[1437]: time="2025-05-08T00:22:56.626615656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cbb7457c4-99wch,Uid:37b135c5-5fc9-4679-a6b3-7f9b2a12dd64,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"1d7e66d758a7cfcae8a5598446d973aa0693f03e56536c0026dd59053144b63e\"" May 8 00:22:57.191284 containerd[1437]: time="2025-05-08T00:22:57.191217605Z" level=info msg="StopPodSandbox for \"c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836\"" May 8 00:22:57.269697 containerd[1437]: 2025-05-08 00:22:57.236 [INFO][4580] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836" May 8 00:22:57.269697 containerd[1437]: 2025-05-08 00:22:57.237 [INFO][4580] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836" iface="eth0" netns="/var/run/netns/cni-aa507f40-89ec-a0af-c733-abb761c535a3" May 8 00:22:57.269697 containerd[1437]: 2025-05-08 00:22:57.237 [INFO][4580] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836" iface="eth0" netns="/var/run/netns/cni-aa507f40-89ec-a0af-c733-abb761c535a3" May 8 00:22:57.269697 containerd[1437]: 2025-05-08 00:22:57.237 [INFO][4580] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836" iface="eth0" netns="/var/run/netns/cni-aa507f40-89ec-a0af-c733-abb761c535a3" May 8 00:22:57.269697 containerd[1437]: 2025-05-08 00:22:57.237 [INFO][4580] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836" May 8 00:22:57.269697 containerd[1437]: 2025-05-08 00:22:57.237 [INFO][4580] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836" May 8 00:22:57.269697 containerd[1437]: 2025-05-08 00:22:57.256 [INFO][4588] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836" HandleID="k8s-pod-network.c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836" Workload="localhost-k8s-csi--node--driver--56jwl-eth0" May 8 00:22:57.269697 containerd[1437]: 2025-05-08 00:22:57.256 [INFO][4588] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:22:57.269697 containerd[1437]: 2025-05-08 00:22:57.256 [INFO][4588] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:22:57.269697 containerd[1437]: 2025-05-08 00:22:57.265 [WARNING][4588] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836" HandleID="k8s-pod-network.c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836" Workload="localhost-k8s-csi--node--driver--56jwl-eth0" May 8 00:22:57.269697 containerd[1437]: 2025-05-08 00:22:57.265 [INFO][4588] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836" HandleID="k8s-pod-network.c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836" Workload="localhost-k8s-csi--node--driver--56jwl-eth0" May 8 00:22:57.269697 containerd[1437]: 2025-05-08 00:22:57.266 [INFO][4588] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:22:57.269697 containerd[1437]: 2025-05-08 00:22:57.268 [INFO][4580] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836" May 8 00:22:57.270282 containerd[1437]: time="2025-05-08T00:22:57.269831398Z" level=info msg="TearDown network for sandbox \"c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836\" successfully" May 8 00:22:57.270282 containerd[1437]: time="2025-05-08T00:22:57.269856638Z" level=info msg="StopPodSandbox for \"c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836\" returns successfully" May 8 00:22:57.270882 containerd[1437]: time="2025-05-08T00:22:57.270844598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-56jwl,Uid:26f41155-3ab0-4f8a-91d4-9d90c9524fe5,Namespace:calico-system,Attempt:1,}" May 8 00:22:57.394477 systemd-networkd[1383]: cali89cb5566f7c: Link UP May 8 00:22:57.394710 systemd-networkd[1383]: cali89cb5566f7c: Gained carrier May 8 00:22:57.411017 containerd[1437]: 2025-05-08 00:22:57.313 [INFO][4596] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--56jwl-eth0 csi-node-driver- calico-system 26f41155-3ab0-4f8a-91d4-9d90c9524fe5 913 0 2025-05-08 00:22:34 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-56jwl eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali89cb5566f7c [] []}} ContainerID="843224a309f11599e959b0219b2d9a6a0a7c39fef4c104497afab280e214f6c3" Namespace="calico-system" Pod="csi-node-driver-56jwl" WorkloadEndpoint="localhost-k8s-csi--node--driver--56jwl-" May 8 00:22:57.411017 containerd[1437]: 2025-05-08 00:22:57.313 [INFO][4596] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="843224a309f11599e959b0219b2d9a6a0a7c39fef4c104497afab280e214f6c3" Namespace="calico-system" Pod="csi-node-driver-56jwl" WorkloadEndpoint="localhost-k8s-csi--node--driver--56jwl-eth0" May 8 00:22:57.411017 containerd[1437]: 2025-05-08 00:22:57.341 [INFO][4610] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="843224a309f11599e959b0219b2d9a6a0a7c39fef4c104497afab280e214f6c3" HandleID="k8s-pod-network.843224a309f11599e959b0219b2d9a6a0a7c39fef4c104497afab280e214f6c3" Workload="localhost-k8s-csi--node--driver--56jwl-eth0" May 8 00:22:57.411017 containerd[1437]: 2025-05-08 00:22:57.355 [INFO][4610] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="843224a309f11599e959b0219b2d9a6a0a7c39fef4c104497afab280e214f6c3" HandleID="k8s-pod-network.843224a309f11599e959b0219b2d9a6a0a7c39fef4c104497afab280e214f6c3" Workload="localhost-k8s-csi--node--driver--56jwl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004dc60), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-56jwl", "timestamp":"2025-05-08 00:22:57.341173632 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:22:57.411017 containerd[1437]: 2025-05-08 00:22:57.355 [INFO][4610] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 8 00:22:57.411017 containerd[1437]: 2025-05-08 00:22:57.355 [INFO][4610] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:22:57.411017 containerd[1437]: 2025-05-08 00:22:57.355 [INFO][4610] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:22:57.411017 containerd[1437]: 2025-05-08 00:22:57.357 [INFO][4610] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.843224a309f11599e959b0219b2d9a6a0a7c39fef4c104497afab280e214f6c3" host="localhost" May 8 00:22:57.411017 containerd[1437]: 2025-05-08 00:22:57.362 [INFO][4610] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:22:57.411017 containerd[1437]: 2025-05-08 00:22:57.366 [INFO][4610] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:22:57.411017 containerd[1437]: 2025-05-08 00:22:57.368 [INFO][4610] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:22:57.411017 containerd[1437]: 2025-05-08 00:22:57.370 [INFO][4610] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:22:57.411017 containerd[1437]: 2025-05-08 00:22:57.370 [INFO][4610] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.843224a309f11599e959b0219b2d9a6a0a7c39fef4c104497afab280e214f6c3" host="localhost" May 8 00:22:57.411017 containerd[1437]: 2025-05-08 00:22:57.372 [INFO][4610] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.843224a309f11599e959b0219b2d9a6a0a7c39fef4c104497afab280e214f6c3 May 8 00:22:57.411017 containerd[1437]: 2025-05-08 00:22:57.375 [INFO][4610] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.843224a309f11599e959b0219b2d9a6a0a7c39fef4c104497afab280e214f6c3" host="localhost" May 8 00:22:57.411017 containerd[1437]: 2025-05-08 00:22:57.382 [INFO][4610] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.843224a309f11599e959b0219b2d9a6a0a7c39fef4c104497afab280e214f6c3" host="localhost" May 8 00:22:57.411017 containerd[1437]: 2025-05-08 00:22:57.384 [INFO][4610] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.843224a309f11599e959b0219b2d9a6a0a7c39fef4c104497afab280e214f6c3" host="localhost" May 8 00:22:57.411017 containerd[1437]: 2025-05-08 00:22:57.384 [INFO][4610] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 8 00:22:57.411017 containerd[1437]: 2025-05-08 00:22:57.384 [INFO][4610] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="843224a309f11599e959b0219b2d9a6a0a7c39fef4c104497afab280e214f6c3" HandleID="k8s-pod-network.843224a309f11599e959b0219b2d9a6a0a7c39fef4c104497afab280e214f6c3" Workload="localhost-k8s-csi--node--driver--56jwl-eth0" May 8 00:22:57.411556 containerd[1437]: 2025-05-08 00:22:57.388 [INFO][4596] cni-plugin/k8s.go 386: Populated endpoint ContainerID="843224a309f11599e959b0219b2d9a6a0a7c39fef4c104497afab280e214f6c3" Namespace="calico-system" Pod="csi-node-driver-56jwl" WorkloadEndpoint="localhost-k8s-csi--node--driver--56jwl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--56jwl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"26f41155-3ab0-4f8a-91d4-9d90c9524fe5", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 22, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-56jwl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali89cb5566f7c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:22:57.411556 containerd[1437]: 2025-05-08 00:22:57.388 [INFO][4596] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="843224a309f11599e959b0219b2d9a6a0a7c39fef4c104497afab280e214f6c3" Namespace="calico-system" Pod="csi-node-driver-56jwl" WorkloadEndpoint="localhost-k8s-csi--node--driver--56jwl-eth0" May 8 00:22:57.411556 containerd[1437]: 2025-05-08 00:22:57.388 [INFO][4596] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali89cb5566f7c ContainerID="843224a309f11599e959b0219b2d9a6a0a7c39fef4c104497afab280e214f6c3" Namespace="calico-system" Pod="csi-node-driver-56jwl" WorkloadEndpoint="localhost-k8s-csi--node--driver--56jwl-eth0" May 8 00:22:57.411556 containerd[1437]: 2025-05-08 00:22:57.392 [INFO][4596] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="843224a309f11599e959b0219b2d9a6a0a7c39fef4c104497afab280e214f6c3" Namespace="calico-system" Pod="csi-node-driver-56jwl" WorkloadEndpoint="localhost-k8s-csi--node--driver--56jwl-eth0" May 8 00:22:57.411556 containerd[1437]: 2025-05-08 00:22:57.392 [INFO][4596] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="843224a309f11599e959b0219b2d9a6a0a7c39fef4c104497afab280e214f6c3" Namespace="calico-system" Pod="csi-node-driver-56jwl" WorkloadEndpoint="localhost-k8s-csi--node--driver--56jwl-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--56jwl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"26f41155-3ab0-4f8a-91d4-9d90c9524fe5", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 22, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"843224a309f11599e959b0219b2d9a6a0a7c39fef4c104497afab280e214f6c3", Pod:"csi-node-driver-56jwl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali89cb5566f7c", MAC:"3e:7a:c8:98:49:ea", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:22:57.411556 containerd[1437]: 2025-05-08 00:22:57.404 [INFO][4596] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="843224a309f11599e959b0219b2d9a6a0a7c39fef4c104497afab280e214f6c3" Namespace="calico-system" Pod="csi-node-driver-56jwl" WorkloadEndpoint="localhost-k8s-csi--node--driver--56jwl-eth0" May 8 00:22:57.417031 kubelet[2557]: E0508 00:22:57.416998 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:57.452588 containerd[1437]: time="2025-05-08T00:22:57.451923142Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:22:57.452588 containerd[1437]: time="2025-05-08T00:22:57.452492702Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:22:57.453432 containerd[1437]: time="2025-05-08T00:22:57.452506182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:22:57.457212 containerd[1437]: time="2025-05-08T00:22:57.456892421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:22:57.459255 systemd[1]: run-netns-cni\x2daa507f40\x2d89ec\x2da0af\x2dc733\x2dabb761c535a3.mount: Deactivated successfully. May 8 00:22:57.477528 systemd[1]: Started cri-containerd-843224a309f11599e959b0219b2d9a6a0a7c39fef4c104497afab280e214f6c3.scope - libcontainer container 843224a309f11599e959b0219b2d9a6a0a7c39fef4c104497afab280e214f6c3. 
May 8 00:22:57.488967 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:22:57.520100 containerd[1437]: time="2025-05-08T00:22:57.519951256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-56jwl,Uid:26f41155-3ab0-4f8a-91d4-9d90c9524fe5,Namespace:calico-system,Attempt:1,} returns sandbox id \"843224a309f11599e959b0219b2d9a6a0a7c39fef4c104497afab280e214f6c3\"" May 8 00:22:57.527055 kubelet[2557]: I0508 00:22:57.526981 2557 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-8f4df646d-hrzsd" podStartSLOduration=21.865569779 podStartE2EDuration="23.526963655s" podCreationTimestamp="2025-05-08 00:22:34 +0000 UTC" firstStartedPulling="2025-05-08 00:22:54.75325264 +0000 UTC m=+42.656116525" lastFinishedPulling="2025-05-08 00:22:56.414646516 +0000 UTC m=+44.317510401" observedRunningTime="2025-05-08 00:22:57.408204546 +0000 UTC m=+45.311068431" watchObservedRunningTime="2025-05-08 00:22:57.526963655 +0000 UTC m=+45.429827540" May 8 00:22:57.692856 systemd-networkd[1383]: cali40a4a5aad8d: Gained IPv6LL May 8 00:22:58.276535 containerd[1437]: time="2025-05-08T00:22:58.276487151Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:22:58.277073 containerd[1437]: time="2025-05-08T00:22:58.277041991Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" May 8 00:22:58.277677 containerd[1437]: time="2025-05-08T00:22:58.277646031Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:22:58.286789 containerd[1437]: time="2025-05-08T00:22:58.286756631Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:22:58.287460 containerd[1437]: time="2025-05-08T00:22:58.287428710Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 1.666675893s" May 8 00:22:58.287502 containerd[1437]: time="2025-05-08T00:22:58.287460550Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 8 00:22:58.288773 containerd[1437]: time="2025-05-08T00:22:58.288444190Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 8 00:22:58.291952 containerd[1437]: time="2025-05-08T00:22:58.291584150Z" level=info msg="CreateContainer within sandbox \"4cf26aafdc6d6a449924bd9e9305aa9e895491d3724a8c5083f467679a3f7134\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 8 00:22:58.302794 containerd[1437]: time="2025-05-08T00:22:58.302749829Z" level=info msg="CreateContainer within sandbox \"4cf26aafdc6d6a449924bd9e9305aa9e895491d3724a8c5083f467679a3f7134\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id 
\"c05584c366eb1383698fa41934ff815ca5c7ad4cf99d2bcc37bec6ab8107c8aa\"" May 8 00:22:58.303757 containerd[1437]: time="2025-05-08T00:22:58.303407749Z" level=info msg="StartContainer for \"c05584c366eb1383698fa41934ff815ca5c7ad4cf99d2bcc37bec6ab8107c8aa\"" May 8 00:22:58.333453 systemd-networkd[1383]: calicdefd006f55: Gained IPv6LL May 8 00:22:58.338981 systemd[1]: Started cri-containerd-c05584c366eb1383698fa41934ff815ca5c7ad4cf99d2bcc37bec6ab8107c8aa.scope - libcontainer container c05584c366eb1383698fa41934ff815ca5c7ad4cf99d2bcc37bec6ab8107c8aa. May 8 00:22:58.434509 containerd[1437]: time="2025-05-08T00:22:58.434393018Z" level=info msg="StartContainer for \"c05584c366eb1383698fa41934ff815ca5c7ad4cf99d2bcc37bec6ab8107c8aa\" returns successfully" May 8 00:22:58.521249 containerd[1437]: time="2025-05-08T00:22:58.521201651Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:22:58.521690 containerd[1437]: time="2025-05-08T00:22:58.521657971Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 8 00:22:58.524127 containerd[1437]: time="2025-05-08T00:22:58.524034811Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 235.555541ms" May 8 00:22:58.524127 containerd[1437]: time="2025-05-08T00:22:58.524078531Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 8 00:22:58.524874 systemd-networkd[1383]: cali89cb5566f7c: Gained IPv6LL May 8 00:22:58.526372 containerd[1437]: time="2025-05-08T00:22:58.524983251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 8 00:22:58.528495 containerd[1437]: time="2025-05-08T00:22:58.527747251Z" level=info msg="CreateContainer within sandbox \"1d7e66d758a7cfcae8a5598446d973aa0693f03e56536c0026dd59053144b63e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 8 00:22:58.539762 containerd[1437]: time="2025-05-08T00:22:58.539689410Z" level=info msg="CreateContainer within sandbox \"1d7e66d758a7cfcae8a5598446d973aa0693f03e56536c0026dd59053144b63e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d5da61e012db885a17f305ff9516fbee2d1538c0d4e3dba1ac8945def4115441\"" May 8 00:22:58.540472 containerd[1437]: time="2025-05-08T00:22:58.540432490Z" level=info msg="StartContainer for \"d5da61e012db885a17f305ff9516fbee2d1538c0d4e3dba1ac8945def4115441\"" May 8 00:22:58.543913 kubelet[2557]: I0508 00:22:58.542936 2557 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:22:58.543913 kubelet[2557]: E0508 00:22:58.543863 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:58.558312 systemd[1]: Started sshd@12-10.0.0.58:22-10.0.0.1:36298.service - OpenSSH per-connection server daemon (10.0.0.1:36298). 
May 8 00:22:58.591909 systemd[1]: Started cri-containerd-d5da61e012db885a17f305ff9516fbee2d1538c0d4e3dba1ac8945def4115441.scope - libcontainer container d5da61e012db885a17f305ff9516fbee2d1538c0d4e3dba1ac8945def4115441. May 8 00:22:58.627149 sshd[4748]: Accepted publickey for core from 10.0.0.1 port 36298 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:22:58.631821 sshd[4748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:22:58.647937 systemd-logind[1426]: New session 13 of user core. May 8 00:22:58.653130 systemd[1]: Started session-13.scope - Session 13 of User core. May 8 00:22:58.660759 containerd[1437]: time="2025-05-08T00:22:58.659976800Z" level=info msg="StartContainer for \"d5da61e012db885a17f305ff9516fbee2d1538c0d4e3dba1ac8945def4115441\" returns successfully" May 8 00:22:58.883706 sshd[4748]: pam_unix(sshd:session): session closed for user core May 8 00:22:58.888421 systemd-logind[1426]: Session 13 logged out. Waiting for processes to exit. May 8 00:22:58.889087 systemd[1]: sshd@12-10.0.0.58:22-10.0.0.1:36298.service: Deactivated successfully. May 8 00:22:58.890973 systemd[1]: session-13.scope: Deactivated successfully. May 8 00:22:58.891919 systemd-logind[1426]: Removed session 13. May 8 00:22:59.192415 containerd[1437]: time="2025-05-08T00:22:59.192376437Z" level=info msg="StopPodSandbox for \"5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef\"" May 8 00:22:59.279948 containerd[1437]: 2025-05-08 00:22:59.236 [INFO][4854] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef" May 8 00:22:59.279948 containerd[1437]: 2025-05-08 00:22:59.236 [INFO][4854] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef" iface="eth0" netns="/var/run/netns/cni-f334c00e-631f-7890-dc27-1393d4a4f856" May 8 00:22:59.279948 containerd[1437]: 2025-05-08 00:22:59.236 [INFO][4854] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef" iface="eth0" netns="/var/run/netns/cni-f334c00e-631f-7890-dc27-1393d4a4f856" May 8 00:22:59.279948 containerd[1437]: 2025-05-08 00:22:59.237 [INFO][4854] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef" iface="eth0" netns="/var/run/netns/cni-f334c00e-631f-7890-dc27-1393d4a4f856" May 8 00:22:59.279948 containerd[1437]: 2025-05-08 00:22:59.237 [INFO][4854] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef" May 8 00:22:59.279948 containerd[1437]: 2025-05-08 00:22:59.237 [INFO][4854] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef" May 8 00:22:59.279948 containerd[1437]: 2025-05-08 00:22:59.264 [INFO][4863] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef" HandleID="k8s-pod-network.5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef" Workload="localhost-k8s-coredns--7db6d8ff4d--zsxj2-eth0" May 8 00:22:59.279948 containerd[1437]: 2025-05-08 00:22:59.264 [INFO][4863] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 8 00:22:59.279948 containerd[1437]: 2025-05-08 00:22:59.264 [INFO][4863] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:22:59.279948 containerd[1437]: 2025-05-08 00:22:59.273 [WARNING][4863] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef" HandleID="k8s-pod-network.5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef" Workload="localhost-k8s-coredns--7db6d8ff4d--zsxj2-eth0" May 8 00:22:59.279948 containerd[1437]: 2025-05-08 00:22:59.273 [INFO][4863] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef" HandleID="k8s-pod-network.5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef" Workload="localhost-k8s-coredns--7db6d8ff4d--zsxj2-eth0" May 8 00:22:59.279948 containerd[1437]: 2025-05-08 00:22:59.276 [INFO][4863] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:22:59.279948 containerd[1437]: 2025-05-08 00:22:59.278 [INFO][4854] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef" May 8 00:22:59.280547 containerd[1437]: time="2025-05-08T00:22:59.280077431Z" level=info msg="TearDown network for sandbox \"5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef\" successfully" May 8 00:22:59.280547 containerd[1437]: time="2025-05-08T00:22:59.280105711Z" level=info msg="StopPodSandbox for \"5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef\" returns successfully" May 8 00:22:59.280594 kubelet[2557]: E0508 00:22:59.280398 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:59.280929 containerd[1437]: time="2025-05-08T00:22:59.280886351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zsxj2,Uid:58a2dab3-faad-488a-baa2-8365e3fce66c,Namespace:kube-system,Attempt:1,}" May 8 00:22:59.442586 systemd-networkd[1383]: cali4fc01ebd967: Link UP May 8 00:22:59.442793 systemd-networkd[1383]: cali4fc01ebd967: Gained carrier May 8 00:22:59.458947 systemd[1]: run-netns-cni\x2df334c00e\x2d631f\x2d7890\x2ddc27\x2d1393d4a4f856.mount: Deactivated successfully. 
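The teardown path mirrors the add: on CNI DEL the plugin releases the pod's address by handle ID first (ipam/ipam_plugin.go 412) and falls back to the workload ID (440); the WARNING above is the benign case where the handle is already gone. A sketch of the release call, under the same libcalico-go version caveats as the earlier AutoAssign example:

```go
package main

import (
	"context"
	"log"

	"github.com/projectcalico/calico/libcalico-go/lib/clientv3"
)

func main() {
	c, err := clientv3.NewFromEnv()
	if err != nil {
		log.Fatal(err)
	}
	// Same handle scheme as allocation: "k8s-pod-network." + container ID.
	handle := "k8s-pod-network.5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef"
	if err := c.IPAM().ReleaseByHandle(context.Background(), handle); err != nil {
		// The plugin logs "Asked to release address but it doesn't exist.
		// Ignoring" and continues: a not-found handle means already released.
		log.Printf("release by handle: %v", err)
	}
}
```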
May 8 00:22:59.465039 containerd[1437]: 2025-05-08 00:22:59.336 [INFO][4871] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--zsxj2-eth0 coredns-7db6d8ff4d- kube-system 58a2dab3-faad-488a-baa2-8365e3fce66c 936 0 2025-05-08 00:22:26 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-zsxj2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4fc01ebd967 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="f2246241ea1e1bf93c069811f8db8e53fb2493a9afa363ecb24f88a548a072e7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zsxj2" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--zsxj2-" May 8 00:22:59.465039 containerd[1437]: 2025-05-08 00:22:59.336 [INFO][4871] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f2246241ea1e1bf93c069811f8db8e53fb2493a9afa363ecb24f88a548a072e7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zsxj2" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--zsxj2-eth0" May 8 00:22:59.465039 containerd[1437]: 2025-05-08 00:22:59.365 [INFO][4886] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f2246241ea1e1bf93c069811f8db8e53fb2493a9afa363ecb24f88a548a072e7" HandleID="k8s-pod-network.f2246241ea1e1bf93c069811f8db8e53fb2493a9afa363ecb24f88a548a072e7" Workload="localhost-k8s-coredns--7db6d8ff4d--zsxj2-eth0" May 8 00:22:59.465039 containerd[1437]: 2025-05-08 00:22:59.379 [INFO][4886] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f2246241ea1e1bf93c069811f8db8e53fb2493a9afa363ecb24f88a548a072e7" HandleID="k8s-pod-network.f2246241ea1e1bf93c069811f8db8e53fb2493a9afa363ecb24f88a548a072e7" Workload="localhost-k8s-coredns--7db6d8ff4d--zsxj2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003e8ad0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-zsxj2", "timestamp":"2025-05-08 00:22:59.365278424 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:22:59.465039 containerd[1437]: 2025-05-08 00:22:59.379 [INFO][4886] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:22:59.465039 containerd[1437]: 2025-05-08 00:22:59.379 [INFO][4886] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:22:59.465039 containerd[1437]: 2025-05-08 00:22:59.379 [INFO][4886] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:22:59.465039 containerd[1437]: 2025-05-08 00:22:59.381 [INFO][4886] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f2246241ea1e1bf93c069811f8db8e53fb2493a9afa363ecb24f88a548a072e7" host="localhost" May 8 00:22:59.465039 containerd[1437]: 2025-05-08 00:22:59.389 [INFO][4886] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:22:59.465039 containerd[1437]: 2025-05-08 00:22:59.398 [INFO][4886] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:22:59.465039 containerd[1437]: 2025-05-08 00:22:59.400 [INFO][4886] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:22:59.465039 containerd[1437]: 2025-05-08 00:22:59.414 [INFO][4886] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:22:59.465039 containerd[1437]: 2025-05-08 00:22:59.414 [INFO][4886] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f2246241ea1e1bf93c069811f8db8e53fb2493a9afa363ecb24f88a548a072e7" host="localhost" May 8 00:22:59.465039 containerd[1437]: 2025-05-08 00:22:59.417 [INFO][4886] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f2246241ea1e1bf93c069811f8db8e53fb2493a9afa363ecb24f88a548a072e7 May 8 00:22:59.465039 containerd[1437]: 2025-05-08 00:22:59.420 [INFO][4886] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f2246241ea1e1bf93c069811f8db8e53fb2493a9afa363ecb24f88a548a072e7" host="localhost" May 8 00:22:59.465039 containerd[1437]: 2025-05-08 00:22:59.426 [INFO][4886] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.f2246241ea1e1bf93c069811f8db8e53fb2493a9afa363ecb24f88a548a072e7" host="localhost" May 8 00:22:59.465039 containerd[1437]: 2025-05-08 00:22:59.427 [INFO][4886] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.f2246241ea1e1bf93c069811f8db8e53fb2493a9afa363ecb24f88a548a072e7" host="localhost" May 8 00:22:59.465039 containerd[1437]: 2025-05-08 00:22:59.427 [INFO][4886] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 8 00:22:59.465039 containerd[1437]: 2025-05-08 00:22:59.427 [INFO][4886] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="f2246241ea1e1bf93c069811f8db8e53fb2493a9afa363ecb24f88a548a072e7" HandleID="k8s-pod-network.f2246241ea1e1bf93c069811f8db8e53fb2493a9afa363ecb24f88a548a072e7" Workload="localhost-k8s-coredns--7db6d8ff4d--zsxj2-eth0" May 8 00:22:59.465702 containerd[1437]: 2025-05-08 00:22:59.434 [INFO][4871] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f2246241ea1e1bf93c069811f8db8e53fb2493a9afa363ecb24f88a548a072e7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zsxj2" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--zsxj2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--zsxj2-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"58a2dab3-faad-488a-baa2-8365e3fce66c", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 22, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-zsxj2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4fc01ebd967", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:22:59.465702 containerd[1437]: 2025-05-08 00:22:59.434 [INFO][4871] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="f2246241ea1e1bf93c069811f8db8e53fb2493a9afa363ecb24f88a548a072e7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zsxj2" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--zsxj2-eth0" May 8 00:22:59.465702 containerd[1437]: 2025-05-08 00:22:59.434 [INFO][4871] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4fc01ebd967 ContainerID="f2246241ea1e1bf93c069811f8db8e53fb2493a9afa363ecb24f88a548a072e7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zsxj2" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--zsxj2-eth0" May 8 00:22:59.465702 containerd[1437]: 2025-05-08 00:22:59.442 [INFO][4871] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f2246241ea1e1bf93c069811f8db8e53fb2493a9afa363ecb24f88a548a072e7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zsxj2" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--zsxj2-eth0" May 8 00:22:59.465702 containerd[1437]: 2025-05-08 00:22:59.443 [INFO][4871] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f2246241ea1e1bf93c069811f8db8e53fb2493a9afa363ecb24f88a548a072e7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zsxj2" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--zsxj2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--zsxj2-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"58a2dab3-faad-488a-baa2-8365e3fce66c", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 22, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f2246241ea1e1bf93c069811f8db8e53fb2493a9afa363ecb24f88a548a072e7", Pod:"coredns-7db6d8ff4d-zsxj2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4fc01ebd967", MAC:"02:d8:85:ae:2f:d3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:22:59.465702 containerd[1437]: 2025-05-08 00:22:59.453 [INFO][4871] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f2246241ea1e1bf93c069811f8db8e53fb2493a9afa363ecb24f88a548a072e7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zsxj2" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--zsxj2-eth0" May 8 00:22:59.478287 kubelet[2557]: I0508 00:22:59.478226 2557 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5cbb7457c4-99wch" podStartSLOduration=23.58127134 podStartE2EDuration="25.478207055s" podCreationTimestamp="2025-05-08 00:22:34 +0000 UTC" firstStartedPulling="2025-05-08 00:22:56.627759496 +0000 UTC m=+44.530623341" lastFinishedPulling="2025-05-08 00:22:58.524695171 +0000 UTC m=+46.427559056" observedRunningTime="2025-05-08 00:22:59.476115856 +0000 UTC m=+47.378979741" watchObservedRunningTime="2025-05-08 00:22:59.478207055 +0000 UTC m=+47.381070980" May 8 00:22:59.488543 kubelet[2557]: I0508 00:22:59.488343 2557 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5cbb7457c4-9bvwx" podStartSLOduration=23.819632802 podStartE2EDuration="25.488326815s" podCreationTimestamp="2025-05-08 00:22:34 +0000 UTC" firstStartedPulling="2025-05-08 00:22:56.619601377 +0000 UTC m=+44.522465262" lastFinishedPulling="2025-05-08 00:22:58.28829539 +0000 UTC m=+46.191159275" observedRunningTime="2025-05-08 
00:22:59.488319055 +0000 UTC m=+47.391182940" watchObservedRunningTime="2025-05-08 00:22:59.488326815 +0000 UTC m=+47.391190780" May 8 00:22:59.497573 containerd[1437]: time="2025-05-08T00:22:59.497354174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:22:59.497573 containerd[1437]: time="2025-05-08T00:22:59.497416574Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:22:59.497573 containerd[1437]: time="2025-05-08T00:22:59.497431214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:22:59.497573 containerd[1437]: time="2025-05-08T00:22:59.497521654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:22:59.522927 systemd[1]: Started cri-containerd-f2246241ea1e1bf93c069811f8db8e53fb2493a9afa363ecb24f88a548a072e7.scope - libcontainer container f2246241ea1e1bf93c069811f8db8e53fb2493a9afa363ecb24f88a548a072e7. May 8 00:22:59.536354 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:22:59.553167 containerd[1437]: time="2025-05-08T00:22:59.553123170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zsxj2,Uid:58a2dab3-faad-488a-baa2-8365e3fce66c,Namespace:kube-system,Attempt:1,} returns sandbox id \"f2246241ea1e1bf93c069811f8db8e53fb2493a9afa363ecb24f88a548a072e7\"" May 8 00:22:59.553704 kubelet[2557]: E0508 00:22:59.553685 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:22:59.558196 containerd[1437]: time="2025-05-08T00:22:59.558156209Z" level=info msg="CreateContainer within sandbox \"f2246241ea1e1bf93c069811f8db8e53fb2493a9afa363ecb24f88a548a072e7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:22:59.572316 containerd[1437]: time="2025-05-08T00:22:59.572279448Z" level=info msg="CreateContainer within sandbox \"f2246241ea1e1bf93c069811f8db8e53fb2493a9afa363ecb24f88a548a072e7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7726a7e196dd5fa57173455aa5f266e44e1c2cfd8784686e854ba84953058d41\"" May 8 00:22:59.574698 containerd[1437]: time="2025-05-08T00:22:59.574655888Z" level=info msg="StartContainer for \"7726a7e196dd5fa57173455aa5f266e44e1c2cfd8784686e854ba84953058d41\"" May 8 00:22:59.614874 systemd[1]: Started cri-containerd-7726a7e196dd5fa57173455aa5f266e44e1c2cfd8784686e854ba84953058d41.scope - libcontainer container 7726a7e196dd5fa57173455aa5f266e44e1c2cfd8784686e854ba84953058d41. 
May 8 00:22:59.643175 containerd[1437]: time="2025-05-08T00:22:59.643117563Z" level=info msg="StartContainer for \"7726a7e196dd5fa57173455aa5f266e44e1c2cfd8784686e854ba84953058d41\" returns successfully" May 8 00:22:59.805798 containerd[1437]: time="2025-05-08T00:22:59.805677830Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:22:59.807805 containerd[1437]: time="2025-05-08T00:22:59.807173230Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" May 8 00:22:59.808074 containerd[1437]: time="2025-05-08T00:22:59.808010630Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:22:59.810274 containerd[1437]: time="2025-05-08T00:22:59.809837110Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:22:59.810885 containerd[1437]: time="2025-05-08T00:22:59.810770110Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 1.285759259s" May 8 00:22:59.810885 containerd[1437]: time="2025-05-08T00:22:59.810804310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" May 8 00:22:59.813559 containerd[1437]: time="2025-05-08T00:22:59.813526230Z" level=info msg="CreateContainer within sandbox \"843224a309f11599e959b0219b2d9a6a0a7c39fef4c104497afab280e214f6c3\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 8 00:22:59.836225 containerd[1437]: time="2025-05-08T00:22:59.836173108Z" level=info msg="CreateContainer within sandbox \"843224a309f11599e959b0219b2d9a6a0a7c39fef4c104497afab280e214f6c3\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"74079f44b76a7a45b4912a3f34e477e4b83fd8d36fe354610c0e4aff5c044268\"" May 8 00:22:59.837002 containerd[1437]: time="2025-05-08T00:22:59.836971388Z" level=info msg="StartContainer for \"74079f44b76a7a45b4912a3f34e477e4b83fd8d36fe354610c0e4aff5c044268\"" May 8 00:22:59.871901 systemd[1]: Started cri-containerd-74079f44b76a7a45b4912a3f34e477e4b83fd8d36fe354610c0e4aff5c044268.scope - libcontainer container 74079f44b76a7a45b4912a3f34e477e4b83fd8d36fe354610c0e4aff5c044268. 
May 8 00:22:59.896155 containerd[1437]: time="2025-05-08T00:22:59.896102543Z" level=info msg="StartContainer for \"74079f44b76a7a45b4912a3f34e477e4b83fd8d36fe354610c0e4aff5c044268\" returns successfully" May 8 00:22:59.897292 containerd[1437]: time="2025-05-08T00:22:59.897266623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 8 00:23:00.465546 kubelet[2557]: E0508 00:23:00.465508 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:23:00.493631 kubelet[2557]: I0508 00:23:00.492026 2557 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-zsxj2" podStartSLOduration=34.49200754 podStartE2EDuration="34.49200754s" podCreationTimestamp="2025-05-08 00:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:23:00.478342861 +0000 UTC m=+48.381206746" watchObservedRunningTime="2025-05-08 00:23:00.49200754 +0000 UTC m=+48.394871465" May 8 00:23:01.084850 systemd-networkd[1383]: cali4fc01ebd967: Gained IPv6LL May 8 00:23:01.205762 containerd[1437]: time="2025-05-08T00:23:01.205374330Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:23:01.206371 containerd[1437]: time="2025-05-08T00:23:01.206160650Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" May 8 00:23:01.207036 containerd[1437]: time="2025-05-08T00:23:01.207002409Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:23:01.209191 containerd[1437]: time="2025-05-08T00:23:01.209148889Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:23:01.210138 containerd[1437]: time="2025-05-08T00:23:01.210106169Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 1.312806306s" May 8 00:23:01.210202 containerd[1437]: time="2025-05-08T00:23:01.210148769Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" May 8 00:23:01.212150 containerd[1437]: time="2025-05-08T00:23:01.212114809Z" level=info msg="CreateContainer within sandbox \"843224a309f11599e959b0219b2d9a6a0a7c39fef4c104497afab280e214f6c3\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 8 00:23:01.226496 containerd[1437]: time="2025-05-08T00:23:01.226458528Z" level=info msg="CreateContainer within sandbox \"843224a309f11599e959b0219b2d9a6a0a7c39fef4c104497afab280e214f6c3\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id 
\"23d759cbfbc56df61bb4bf773662a3aba881ce8baae5915af19b313c890a3ac2\"" May 8 00:23:01.226989 containerd[1437]: time="2025-05-08T00:23:01.226963288Z" level=info msg="StartContainer for \"23d759cbfbc56df61bb4bf773662a3aba881ce8baae5915af19b313c890a3ac2\"" May 8 00:23:01.264941 systemd[1]: Started cri-containerd-23d759cbfbc56df61bb4bf773662a3aba881ce8baae5915af19b313c890a3ac2.scope - libcontainer container 23d759cbfbc56df61bb4bf773662a3aba881ce8baae5915af19b313c890a3ac2. May 8 00:23:01.285921 containerd[1437]: time="2025-05-08T00:23:01.285880724Z" level=info msg="StartContainer for \"23d759cbfbc56df61bb4bf773662a3aba881ce8baae5915af19b313c890a3ac2\" returns successfully" May 8 00:23:01.473247 kubelet[2557]: E0508 00:23:01.472884 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:23:01.484987 kubelet[2557]: I0508 00:23:01.484937 2557 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-56jwl" podStartSLOduration=23.796736318 podStartE2EDuration="27.484377671s" podCreationTimestamp="2025-05-08 00:22:34 +0000 UTC" firstStartedPulling="2025-05-08 00:22:57.523246896 +0000 UTC m=+45.426110781" lastFinishedPulling="2025-05-08 00:23:01.210888249 +0000 UTC m=+49.113752134" observedRunningTime="2025-05-08 00:23:01.482638151 +0000 UTC m=+49.385502036" watchObservedRunningTime="2025-05-08 00:23:01.484377671 +0000 UTC m=+49.387241596" May 8 00:23:02.283780 kubelet[2557]: I0508 00:23:02.283734 2557 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 8 00:23:02.292993 kubelet[2557]: I0508 00:23:02.292963 2557 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 8 00:23:02.474628 kubelet[2557]: E0508 00:23:02.474557 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:23:03.900628 systemd[1]: Started sshd@13-10.0.0.58:22-10.0.0.1:37236.service - OpenSSH per-connection server daemon (10.0.0.1:37236). May 8 00:23:03.948640 sshd[5089]: Accepted publickey for core from 10.0.0.1 port 37236 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:23:03.950312 sshd[5089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:23:03.954786 systemd-logind[1426]: New session 14 of user core. May 8 00:23:03.963913 systemd[1]: Started session-14.scope - Session 14 of User core. May 8 00:23:04.147582 sshd[5089]: pam_unix(sshd:session): session closed for user core May 8 00:23:04.151110 systemd[1]: sshd@13-10.0.0.58:22-10.0.0.1:37236.service: Deactivated successfully. May 8 00:23:04.154235 systemd[1]: session-14.scope: Deactivated successfully. May 8 00:23:04.154962 systemd-logind[1426]: Session 14 logged out. Waiting for processes to exit. May 8 00:23:04.155992 systemd-logind[1426]: Removed session 14. May 8 00:23:09.163120 systemd[1]: Started sshd@14-10.0.0.58:22-10.0.0.1:37248.service - OpenSSH per-connection server daemon (10.0.0.1:37248). 
May 8 00:23:09.201601 sshd[5107]: Accepted publickey for core from 10.0.0.1 port 37248 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:23:09.203296 sshd[5107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:23:09.207750 systemd-logind[1426]: New session 15 of user core. May 8 00:23:09.217970 systemd[1]: Started session-15.scope - Session 15 of User core. May 8 00:23:09.401501 sshd[5107]: pam_unix(sshd:session): session closed for user core May 8 00:23:09.407137 systemd[1]: sshd@14-10.0.0.58:22-10.0.0.1:37248.service: Deactivated successfully. May 8 00:23:09.409857 systemd[1]: session-15.scope: Deactivated successfully. May 8 00:23:09.411652 systemd-logind[1426]: Session 15 logged out. Waiting for processes to exit. May 8 00:23:09.412811 systemd-logind[1426]: Removed session 15. May 8 00:23:12.180079 containerd[1437]: time="2025-05-08T00:23:12.179975589Z" level=info msg="StopPodSandbox for \"c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836\"" May 8 00:23:12.257677 containerd[1437]: 2025-05-08 00:23:12.220 [WARNING][5144] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--56jwl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"26f41155-3ab0-4f8a-91d4-9d90c9524fe5", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 22, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"843224a309f11599e959b0219b2d9a6a0a7c39fef4c104497afab280e214f6c3", Pod:"csi-node-driver-56jwl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali89cb5566f7c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:23:12.257677 containerd[1437]: 2025-05-08 00:23:12.220 [INFO][5144] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836" May 8 00:23:12.257677 containerd[1437]: 2025-05-08 00:23:12.220 [INFO][5144] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836" iface="eth0" netns="" May 8 00:23:12.257677 containerd[1437]: 2025-05-08 00:23:12.220 [INFO][5144] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836" May 8 00:23:12.257677 containerd[1437]: 2025-05-08 00:23:12.220 [INFO][5144] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836" May 8 00:23:12.257677 containerd[1437]: 2025-05-08 00:23:12.243 [INFO][5155] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836" HandleID="k8s-pod-network.c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836" Workload="localhost-k8s-csi--node--driver--56jwl-eth0" May 8 00:23:12.257677 containerd[1437]: 2025-05-08 00:23:12.243 [INFO][5155] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:23:12.257677 containerd[1437]: 2025-05-08 00:23:12.243 [INFO][5155] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:23:12.257677 containerd[1437]: 2025-05-08 00:23:12.252 [WARNING][5155] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836" HandleID="k8s-pod-network.c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836" Workload="localhost-k8s-csi--node--driver--56jwl-eth0" May 8 00:23:12.257677 containerd[1437]: 2025-05-08 00:23:12.252 [INFO][5155] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836" HandleID="k8s-pod-network.c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836" Workload="localhost-k8s-csi--node--driver--56jwl-eth0" May 8 00:23:12.257677 containerd[1437]: 2025-05-08 00:23:12.254 [INFO][5155] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:23:12.257677 containerd[1437]: 2025-05-08 00:23:12.255 [INFO][5144] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836" May 8 00:23:12.258086 containerd[1437]: time="2025-05-08T00:23:12.257713586Z" level=info msg="TearDown network for sandbox \"c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836\" successfully" May 8 00:23:12.258086 containerd[1437]: time="2025-05-08T00:23:12.257754466Z" level=info msg="StopPodSandbox for \"c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836\" returns successfully" May 8 00:23:12.258283 containerd[1437]: time="2025-05-08T00:23:12.258243986Z" level=info msg="RemovePodSandbox for \"c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836\"" May 8 00:23:12.267314 containerd[1437]: time="2025-05-08T00:23:12.267263306Z" level=info msg="Forcibly stopping sandbox \"c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836\"" May 8 00:23:12.334581 containerd[1437]: 2025-05-08 00:23:12.300 [WARNING][5176] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--56jwl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"26f41155-3ab0-4f8a-91d4-9d90c9524fe5", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 22, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"843224a309f11599e959b0219b2d9a6a0a7c39fef4c104497afab280e214f6c3", Pod:"csi-node-driver-56jwl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali89cb5566f7c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:23:12.334581 containerd[1437]: 2025-05-08 00:23:12.301 [INFO][5176] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836" May 8 00:23:12.334581 containerd[1437]: 2025-05-08 00:23:12.301 [INFO][5176] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836" iface="eth0" netns="" May 8 00:23:12.334581 containerd[1437]: 2025-05-08 00:23:12.301 [INFO][5176] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836" May 8 00:23:12.334581 containerd[1437]: 2025-05-08 00:23:12.301 [INFO][5176] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836" May 8 00:23:12.334581 containerd[1437]: 2025-05-08 00:23:12.320 [INFO][5184] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836" HandleID="k8s-pod-network.c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836" Workload="localhost-k8s-csi--node--driver--56jwl-eth0" May 8 00:23:12.334581 containerd[1437]: 2025-05-08 00:23:12.320 [INFO][5184] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:23:12.334581 containerd[1437]: 2025-05-08 00:23:12.320 [INFO][5184] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:23:12.334581 containerd[1437]: 2025-05-08 00:23:12.329 [WARNING][5184] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836" HandleID="k8s-pod-network.c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836" Workload="localhost-k8s-csi--node--driver--56jwl-eth0" May 8 00:23:12.334581 containerd[1437]: 2025-05-08 00:23:12.329 [INFO][5184] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836" HandleID="k8s-pod-network.c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836" Workload="localhost-k8s-csi--node--driver--56jwl-eth0" May 8 00:23:12.334581 containerd[1437]: 2025-05-08 00:23:12.331 [INFO][5184] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:23:12.334581 containerd[1437]: 2025-05-08 00:23:12.332 [INFO][5176] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836" May 8 00:23:12.335034 containerd[1437]: time="2025-05-08T00:23:12.334624144Z" level=info msg="TearDown network for sandbox \"c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836\" successfully" May 8 00:23:12.404302 containerd[1437]: time="2025-05-08T00:23:12.404254221Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:23:12.404302 containerd[1437]: time="2025-05-08T00:23:12.404334061Z" level=info msg="RemovePodSandbox \"c2498b6a6cd72d0ccaed7ccf98c3c47296b9a8f92021c0286b994e909b691836\" returns successfully" May 8 00:23:12.404897 containerd[1437]: time="2025-05-08T00:23:12.404870941Z" level=info msg="StopPodSandbox for \"5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef\"" May 8 00:23:12.470420 containerd[1437]: 2025-05-08 00:23:12.438 [WARNING][5207] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--zsxj2-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"58a2dab3-faad-488a-baa2-8365e3fce66c", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 22, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f2246241ea1e1bf93c069811f8db8e53fb2493a9afa363ecb24f88a548a072e7", Pod:"coredns-7db6d8ff4d-zsxj2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4fc01ebd967", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:23:12.470420 containerd[1437]: 2025-05-08 00:23:12.438 [INFO][5207] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef" May 8 00:23:12.470420 containerd[1437]: 2025-05-08 00:23:12.438 [INFO][5207] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef" iface="eth0" netns="" May 8 00:23:12.470420 containerd[1437]: 2025-05-08 00:23:12.438 [INFO][5207] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef" May 8 00:23:12.470420 containerd[1437]: 2025-05-08 00:23:12.438 [INFO][5207] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef" May 8 00:23:12.470420 containerd[1437]: 2025-05-08 00:23:12.457 [INFO][5215] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef" HandleID="k8s-pod-network.5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef" Workload="localhost-k8s-coredns--7db6d8ff4d--zsxj2-eth0" May 8 00:23:12.470420 containerd[1437]: 2025-05-08 00:23:12.457 [INFO][5215] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:23:12.470420 containerd[1437]: 2025-05-08 00:23:12.457 [INFO][5215] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:23:12.470420 containerd[1437]: 2025-05-08 00:23:12.465 [WARNING][5215] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef" HandleID="k8s-pod-network.5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef" Workload="localhost-k8s-coredns--7db6d8ff4d--zsxj2-eth0" May 8 00:23:12.470420 containerd[1437]: 2025-05-08 00:23:12.465 [INFO][5215] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef" HandleID="k8s-pod-network.5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef" Workload="localhost-k8s-coredns--7db6d8ff4d--zsxj2-eth0" May 8 00:23:12.470420 containerd[1437]: 2025-05-08 00:23:12.466 [INFO][5215] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:23:12.470420 containerd[1437]: 2025-05-08 00:23:12.468 [INFO][5207] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef" May 8 00:23:12.470420 containerd[1437]: time="2025-05-08T00:23:12.470013339Z" level=info msg="TearDown network for sandbox \"5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef\" successfully" May 8 00:23:12.470420 containerd[1437]: time="2025-05-08T00:23:12.470036739Z" level=info msg="StopPodSandbox for \"5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef\" returns successfully" May 8 00:23:12.471020 containerd[1437]: time="2025-05-08T00:23:12.470985219Z" level=info msg="RemovePodSandbox for \"5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef\"" May 8 00:23:12.471084 containerd[1437]: time="2025-05-08T00:23:12.471024379Z" level=info msg="Forcibly stopping sandbox \"5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef\"" May 8 00:23:12.543066 containerd[1437]: 2025-05-08 00:23:12.509 [WARNING][5239] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--zsxj2-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"58a2dab3-faad-488a-baa2-8365e3fce66c", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 22, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f2246241ea1e1bf93c069811f8db8e53fb2493a9afa363ecb24f88a548a072e7", Pod:"coredns-7db6d8ff4d-zsxj2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4fc01ebd967", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:23:12.543066 containerd[1437]: 2025-05-08 00:23:12.509 [INFO][5239] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef" May 8 00:23:12.543066 containerd[1437]: 2025-05-08 00:23:12.509 [INFO][5239] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef" iface="eth0" netns="" May 8 00:23:12.543066 containerd[1437]: 2025-05-08 00:23:12.509 [INFO][5239] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef" May 8 00:23:12.543066 containerd[1437]: 2025-05-08 00:23:12.509 [INFO][5239] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef" May 8 00:23:12.543066 containerd[1437]: 2025-05-08 00:23:12.528 [INFO][5248] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef" HandleID="k8s-pod-network.5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef" Workload="localhost-k8s-coredns--7db6d8ff4d--zsxj2-eth0" May 8 00:23:12.543066 containerd[1437]: 2025-05-08 00:23:12.528 [INFO][5248] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:23:12.543066 containerd[1437]: 2025-05-08 00:23:12.528 [INFO][5248] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:23:12.543066 containerd[1437]: 2025-05-08 00:23:12.536 [WARNING][5248] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef" HandleID="k8s-pod-network.5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef" Workload="localhost-k8s-coredns--7db6d8ff4d--zsxj2-eth0" May 8 00:23:12.543066 containerd[1437]: 2025-05-08 00:23:12.536 [INFO][5248] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef" HandleID="k8s-pod-network.5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef" Workload="localhost-k8s-coredns--7db6d8ff4d--zsxj2-eth0" May 8 00:23:12.543066 containerd[1437]: 2025-05-08 00:23:12.537 [INFO][5248] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:23:12.543066 containerd[1437]: 2025-05-08 00:23:12.539 [INFO][5239] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef" May 8 00:23:12.543596 containerd[1437]: time="2025-05-08T00:23:12.543111537Z" level=info msg="TearDown network for sandbox \"5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef\" successfully" May 8 00:23:12.560368 containerd[1437]: time="2025-05-08T00:23:12.560316016Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:23:12.560470 containerd[1437]: time="2025-05-08T00:23:12.560389136Z" level=info msg="RemovePodSandbox \"5772741fd0e6a4d084b50a451f0dd46277c69dc69cc7ee39dab9f2cfb1b3bbef\" returns successfully" May 8 00:23:12.561227 containerd[1437]: time="2025-05-08T00:23:12.560906936Z" level=info msg="StopPodSandbox for \"f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247\"" May 8 00:23:12.636689 containerd[1437]: 2025-05-08 00:23:12.596 [WARNING][5270] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--j4988-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8196bfa1-7d4c-4b32-bb04-7483aba589c0", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 22, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3d7b1f8f93d32867fa5391112ca941c14b18ee476e20691724708bc64fab4f62", Pod:"coredns-7db6d8ff4d-j4988", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0c59c53ce77", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:23:12.636689 containerd[1437]: 2025-05-08 00:23:12.596 [INFO][5270] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247" May 8 00:23:12.636689 containerd[1437]: 2025-05-08 00:23:12.596 [INFO][5270] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247" iface="eth0" netns="" May 8 00:23:12.636689 containerd[1437]: 2025-05-08 00:23:12.596 [INFO][5270] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247" May 8 00:23:12.636689 containerd[1437]: 2025-05-08 00:23:12.596 [INFO][5270] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247" May 8 00:23:12.636689 containerd[1437]: 2025-05-08 00:23:12.621 [INFO][5278] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247" HandleID="k8s-pod-network.f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247" Workload="localhost-k8s-coredns--7db6d8ff4d--j4988-eth0" May 8 00:23:12.636689 containerd[1437]: 2025-05-08 00:23:12.622 [INFO][5278] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:23:12.636689 containerd[1437]: 2025-05-08 00:23:12.622 [INFO][5278] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:23:12.636689 containerd[1437]: 2025-05-08 00:23:12.631 [WARNING][5278] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247" HandleID="k8s-pod-network.f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247" Workload="localhost-k8s-coredns--7db6d8ff4d--j4988-eth0" May 8 00:23:12.636689 containerd[1437]: 2025-05-08 00:23:12.631 [INFO][5278] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247" HandleID="k8s-pod-network.f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247" Workload="localhost-k8s-coredns--7db6d8ff4d--j4988-eth0" May 8 00:23:12.636689 containerd[1437]: 2025-05-08 00:23:12.633 [INFO][5278] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:23:12.636689 containerd[1437]: 2025-05-08 00:23:12.634 [INFO][5270] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247" May 8 00:23:12.637239 containerd[1437]: time="2025-05-08T00:23:12.637196094Z" level=info msg="TearDown network for sandbox \"f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247\" successfully" May 8 00:23:12.637310 containerd[1437]: time="2025-05-08T00:23:12.637286574Z" level=info msg="StopPodSandbox for \"f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247\" returns successfully" May 8 00:23:12.637917 containerd[1437]: time="2025-05-08T00:23:12.637879094Z" level=info msg="RemovePodSandbox for \"f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247\"" May 8 00:23:12.637917 containerd[1437]: time="2025-05-08T00:23:12.637917494Z" level=info msg="Forcibly stopping sandbox \"f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247\"" May 8 00:23:12.711395 containerd[1437]: 2025-05-08 00:23:12.674 [WARNING][5300] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--j4988-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8196bfa1-7d4c-4b32-bb04-7483aba589c0", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 22, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3d7b1f8f93d32867fa5391112ca941c14b18ee476e20691724708bc64fab4f62", Pod:"coredns-7db6d8ff4d-j4988", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0c59c53ce77", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:23:12.711395 containerd[1437]: 2025-05-08 00:23:12.674 [INFO][5300] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247" May 8 00:23:12.711395 containerd[1437]: 2025-05-08 00:23:12.674 [INFO][5300] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247" iface="eth0" netns="" May 8 00:23:12.711395 containerd[1437]: 2025-05-08 00:23:12.674 [INFO][5300] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247" May 8 00:23:12.711395 containerd[1437]: 2025-05-08 00:23:12.674 [INFO][5300] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247" May 8 00:23:12.711395 containerd[1437]: 2025-05-08 00:23:12.697 [INFO][5308] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247" HandleID="k8s-pod-network.f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247" Workload="localhost-k8s-coredns--7db6d8ff4d--j4988-eth0" May 8 00:23:12.711395 containerd[1437]: 2025-05-08 00:23:12.697 [INFO][5308] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:23:12.711395 containerd[1437]: 2025-05-08 00:23:12.697 [INFO][5308] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:23:12.711395 containerd[1437]: 2025-05-08 00:23:12.706 [WARNING][5308] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247" HandleID="k8s-pod-network.f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247" Workload="localhost-k8s-coredns--7db6d8ff4d--j4988-eth0" May 8 00:23:12.711395 containerd[1437]: 2025-05-08 00:23:12.706 [INFO][5308] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247" HandleID="k8s-pod-network.f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247" Workload="localhost-k8s-coredns--7db6d8ff4d--j4988-eth0" May 8 00:23:12.711395 containerd[1437]: 2025-05-08 00:23:12.707 [INFO][5308] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:23:12.711395 containerd[1437]: 2025-05-08 00:23:12.709 [INFO][5300] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247" May 8 00:23:12.711907 containerd[1437]: time="2025-05-08T00:23:12.711454971Z" level=info msg="TearDown network for sandbox \"f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247\" successfully" May 8 00:23:12.714409 containerd[1437]: time="2025-05-08T00:23:12.714366131Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:23:12.714460 containerd[1437]: time="2025-05-08T00:23:12.714430691Z" level=info msg="RemovePodSandbox \"f42aeeccbb6bc4b46084fc515fef7244465bd163115b32a58d0b7d9c0d04f247\" returns successfully" May 8 00:23:12.714960 containerd[1437]: time="2025-05-08T00:23:12.714937571Z" level=info msg="StopPodSandbox for \"0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c\"" May 8 00:23:12.789190 containerd[1437]: 2025-05-08 00:23:12.752 [WARNING][5332] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5cbb7457c4--99wch-eth0", GenerateName:"calico-apiserver-5cbb7457c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"37b135c5-5fc9-4679-a6b3-7f9b2a12dd64", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 22, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cbb7457c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1d7e66d758a7cfcae8a5598446d973aa0693f03e56536c0026dd59053144b63e", Pod:"calico-apiserver-5cbb7457c4-99wch", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicdefd006f55", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:23:12.789190 containerd[1437]: 2025-05-08 00:23:12.752 [INFO][5332] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c" May 8 00:23:12.789190 containerd[1437]: 2025-05-08 00:23:12.752 [INFO][5332] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c" iface="eth0" netns="" May 8 00:23:12.789190 containerd[1437]: 2025-05-08 00:23:12.752 [INFO][5332] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c" May 8 00:23:12.789190 containerd[1437]: 2025-05-08 00:23:12.752 [INFO][5332] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c" May 8 00:23:12.789190 containerd[1437]: 2025-05-08 00:23:12.776 [INFO][5340] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c" HandleID="k8s-pod-network.0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c" Workload="localhost-k8s-calico--apiserver--5cbb7457c4--99wch-eth0" May 8 00:23:12.789190 containerd[1437]: 2025-05-08 00:23:12.776 [INFO][5340] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:23:12.789190 containerd[1437]: 2025-05-08 00:23:12.776 [INFO][5340] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:23:12.789190 containerd[1437]: 2025-05-08 00:23:12.784 [WARNING][5340] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c" HandleID="k8s-pod-network.0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c" Workload="localhost-k8s-calico--apiserver--5cbb7457c4--99wch-eth0" May 8 00:23:12.789190 containerd[1437]: 2025-05-08 00:23:12.784 [INFO][5340] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c" HandleID="k8s-pod-network.0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c" Workload="localhost-k8s-calico--apiserver--5cbb7457c4--99wch-eth0" May 8 00:23:12.789190 containerd[1437]: 2025-05-08 00:23:12.785 [INFO][5340] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:23:12.789190 containerd[1437]: 2025-05-08 00:23:12.787 [INFO][5332] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c" May 8 00:23:12.789950 containerd[1437]: time="2025-05-08T00:23:12.789163169Z" level=info msg="TearDown network for sandbox \"0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c\" successfully" May 8 00:23:12.789994 containerd[1437]: time="2025-05-08T00:23:12.789952929Z" level=info msg="StopPodSandbox for \"0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c\" returns successfully" May 8 00:23:12.790969 containerd[1437]: time="2025-05-08T00:23:12.790945409Z" level=info msg="RemovePodSandbox for \"0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c\"" May 8 00:23:12.791015 containerd[1437]: time="2025-05-08T00:23:12.790988089Z" level=info msg="Forcibly stopping sandbox \"0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c\"" May 8 00:23:12.870413 containerd[1437]: 2025-05-08 00:23:12.827 [WARNING][5363] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5cbb7457c4--99wch-eth0", GenerateName:"calico-apiserver-5cbb7457c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"37b135c5-5fc9-4679-a6b3-7f9b2a12dd64", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 22, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cbb7457c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1d7e66d758a7cfcae8a5598446d973aa0693f03e56536c0026dd59053144b63e", Pod:"calico-apiserver-5cbb7457c4-99wch", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicdefd006f55", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:23:12.870413 containerd[1437]: 2025-05-08 00:23:12.828 [INFO][5363] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c" May 8 00:23:12.870413 containerd[1437]: 2025-05-08 00:23:12.828 [INFO][5363] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c" iface="eth0" netns="" May 8 00:23:12.870413 containerd[1437]: 2025-05-08 00:23:12.828 [INFO][5363] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c" May 8 00:23:12.870413 containerd[1437]: 2025-05-08 00:23:12.828 [INFO][5363] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c" May 8 00:23:12.870413 containerd[1437]: 2025-05-08 00:23:12.855 [INFO][5372] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c" HandleID="k8s-pod-network.0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c" Workload="localhost-k8s-calico--apiserver--5cbb7457c4--99wch-eth0" May 8 00:23:12.870413 containerd[1437]: 2025-05-08 00:23:12.855 [INFO][5372] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:23:12.870413 containerd[1437]: 2025-05-08 00:23:12.855 [INFO][5372] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:23:12.870413 containerd[1437]: 2025-05-08 00:23:12.865 [WARNING][5372] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c" HandleID="k8s-pod-network.0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c" Workload="localhost-k8s-calico--apiserver--5cbb7457c4--99wch-eth0" May 8 00:23:12.870413 containerd[1437]: 2025-05-08 00:23:12.865 [INFO][5372] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c" HandleID="k8s-pod-network.0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c" Workload="localhost-k8s-calico--apiserver--5cbb7457c4--99wch-eth0" May 8 00:23:12.870413 containerd[1437]: 2025-05-08 00:23:12.866 [INFO][5372] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:23:12.870413 containerd[1437]: 2025-05-08 00:23:12.868 [INFO][5363] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c" May 8 00:23:12.870413 containerd[1437]: time="2025-05-08T00:23:12.870408646Z" level=info msg="TearDown network for sandbox \"0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c\" successfully" May 8 00:23:12.873205 containerd[1437]: time="2025-05-08T00:23:12.873164206Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:23:12.873263 containerd[1437]: time="2025-05-08T00:23:12.873224606Z" level=info msg="RemovePodSandbox \"0bc93704403acdb878db7f5e8da7c546f2bb9b6f11570f0aef8cf422f072587c\" returns successfully" May 8 00:23:12.874034 containerd[1437]: time="2025-05-08T00:23:12.873779006Z" level=info msg="StopPodSandbox for \"aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7\"" May 8 00:23:12.941124 containerd[1437]: 2025-05-08 00:23:12.909 [WARNING][5414] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--8f4df646d--hrzsd-eth0", GenerateName:"calico-kube-controllers-8f4df646d-", Namespace:"calico-system", SelfLink:"", UID:"7441d347-bcf2-42a6-b5f7-183f25ff2768", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 22, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8f4df646d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2a54d3a66b4731e8dfbe3087797dac73181ca8e3280a83dc74cfe920fcddb8bc", Pod:"calico-kube-controllers-8f4df646d-hrzsd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8727e66adfa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:23:12.941124 containerd[1437]: 2025-05-08 00:23:12.909 [INFO][5414] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7" May 8 00:23:12.941124 containerd[1437]: 2025-05-08 00:23:12.909 [INFO][5414] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7" iface="eth0" netns="" May 8 00:23:12.941124 containerd[1437]: 2025-05-08 00:23:12.909 [INFO][5414] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7" May 8 00:23:12.941124 containerd[1437]: 2025-05-08 00:23:12.909 [INFO][5414] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7" May 8 00:23:12.941124 containerd[1437]: 2025-05-08 00:23:12.928 [INFO][5423] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7" HandleID="k8s-pod-network.aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7" Workload="localhost-k8s-calico--kube--controllers--8f4df646d--hrzsd-eth0" May 8 00:23:12.941124 containerd[1437]: 2025-05-08 00:23:12.928 [INFO][5423] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:23:12.941124 containerd[1437]: 2025-05-08 00:23:12.928 [INFO][5423] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:23:12.941124 containerd[1437]: 2025-05-08 00:23:12.936 [WARNING][5423] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7" HandleID="k8s-pod-network.aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7" Workload="localhost-k8s-calico--kube--controllers--8f4df646d--hrzsd-eth0" May 8 00:23:12.941124 containerd[1437]: 2025-05-08 00:23:12.936 [INFO][5423] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7" HandleID="k8s-pod-network.aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7" Workload="localhost-k8s-calico--kube--controllers--8f4df646d--hrzsd-eth0" May 8 00:23:12.941124 containerd[1437]: 2025-05-08 00:23:12.937 [INFO][5423] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:23:12.941124 containerd[1437]: 2025-05-08 00:23:12.939 [INFO][5414] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7" May 8 00:23:12.941706 containerd[1437]: time="2025-05-08T00:23:12.941340844Z" level=info msg="TearDown network for sandbox \"aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7\" successfully" May 8 00:23:12.941706 containerd[1437]: time="2025-05-08T00:23:12.941367724Z" level=info msg="StopPodSandbox for \"aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7\" returns successfully" May 8 00:23:12.941859 containerd[1437]: time="2025-05-08T00:23:12.941820804Z" level=info msg="RemovePodSandbox for \"aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7\"" May 8 00:23:12.941859 containerd[1437]: time="2025-05-08T00:23:12.941855604Z" level=info msg="Forcibly stopping sandbox \"aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7\"" May 8 00:23:13.009696 containerd[1437]: 2025-05-08 00:23:12.976 [WARNING][5447] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--8f4df646d--hrzsd-eth0", GenerateName:"calico-kube-controllers-8f4df646d-", Namespace:"calico-system", SelfLink:"", UID:"7441d347-bcf2-42a6-b5f7-183f25ff2768", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 22, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8f4df646d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2a54d3a66b4731e8dfbe3087797dac73181ca8e3280a83dc74cfe920fcddb8bc", Pod:"calico-kube-controllers-8f4df646d-hrzsd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8727e66adfa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:23:13.009696 containerd[1437]: 2025-05-08 00:23:12.976 [INFO][5447] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7" May 8 00:23:13.009696 containerd[1437]: 2025-05-08 00:23:12.976 [INFO][5447] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7" iface="eth0" netns="" May 8 00:23:13.009696 containerd[1437]: 2025-05-08 00:23:12.976 [INFO][5447] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7" May 8 00:23:13.009696 containerd[1437]: 2025-05-08 00:23:12.976 [INFO][5447] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7" May 8 00:23:13.009696 containerd[1437]: 2025-05-08 00:23:12.996 [INFO][5455] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7" HandleID="k8s-pod-network.aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7" Workload="localhost-k8s-calico--kube--controllers--8f4df646d--hrzsd-eth0" May 8 00:23:13.009696 containerd[1437]: 2025-05-08 00:23:12.996 [INFO][5455] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:23:13.009696 containerd[1437]: 2025-05-08 00:23:12.997 [INFO][5455] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:23:13.009696 containerd[1437]: 2025-05-08 00:23:13.004 [WARNING][5455] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7" HandleID="k8s-pod-network.aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7" Workload="localhost-k8s-calico--kube--controllers--8f4df646d--hrzsd-eth0" May 8 00:23:13.009696 containerd[1437]: 2025-05-08 00:23:13.004 [INFO][5455] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7" HandleID="k8s-pod-network.aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7" Workload="localhost-k8s-calico--kube--controllers--8f4df646d--hrzsd-eth0" May 8 00:23:13.009696 containerd[1437]: 2025-05-08 00:23:13.006 [INFO][5455] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:23:13.009696 containerd[1437]: 2025-05-08 00:23:13.007 [INFO][5447] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7" May 8 00:23:13.010179 containerd[1437]: time="2025-05-08T00:23:13.009743721Z" level=info msg="TearDown network for sandbox \"aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7\" successfully" May 8 00:23:13.012500 containerd[1437]: time="2025-05-08T00:23:13.012471841Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:23:13.012559 containerd[1437]: time="2025-05-08T00:23:13.012531361Z" level=info msg="RemovePodSandbox \"aa36c5f8cb56c713c36d581f45d6fe9071b6afb4d6ef5c5b0570354429554ca7\" returns successfully" May 8 00:23:13.013295 containerd[1437]: time="2025-05-08T00:23:13.012985081Z" level=info msg="StopPodSandbox for \"1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616\"" May 8 00:23:13.078878 containerd[1437]: 2025-05-08 00:23:13.047 [WARNING][5479] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5cbb7457c4--9bvwx-eth0", GenerateName:"calico-apiserver-5cbb7457c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"76116a76-06fa-4e4c-be4b-d6de000109ca", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 22, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cbb7457c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4cf26aafdc6d6a449924bd9e9305aa9e895491d3724a8c5083f467679a3f7134", Pod:"calico-apiserver-5cbb7457c4-9bvwx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali40a4a5aad8d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:23:13.078878 containerd[1437]: 2025-05-08 00:23:13.047 [INFO][5479] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616" May 8 00:23:13.078878 containerd[1437]: 2025-05-08 00:23:13.047 [INFO][5479] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616" iface="eth0" netns="" May 8 00:23:13.078878 containerd[1437]: 2025-05-08 00:23:13.047 [INFO][5479] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616" May 8 00:23:13.078878 containerd[1437]: 2025-05-08 00:23:13.047 [INFO][5479] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616" May 8 00:23:13.078878 containerd[1437]: 2025-05-08 00:23:13.065 [INFO][5487] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616" HandleID="k8s-pod-network.1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616" Workload="localhost-k8s-calico--apiserver--5cbb7457c4--9bvwx-eth0" May 8 00:23:13.078878 containerd[1437]: 2025-05-08 00:23:13.065 [INFO][5487] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:23:13.078878 containerd[1437]: 2025-05-08 00:23:13.065 [INFO][5487] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:23:13.078878 containerd[1437]: 2025-05-08 00:23:13.074 [WARNING][5487] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616" HandleID="k8s-pod-network.1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616" Workload="localhost-k8s-calico--apiserver--5cbb7457c4--9bvwx-eth0" May 8 00:23:13.078878 containerd[1437]: 2025-05-08 00:23:13.074 [INFO][5487] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616" HandleID="k8s-pod-network.1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616" Workload="localhost-k8s-calico--apiserver--5cbb7457c4--9bvwx-eth0" May 8 00:23:13.078878 containerd[1437]: 2025-05-08 00:23:13.075 [INFO][5487] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:23:13.078878 containerd[1437]: 2025-05-08 00:23:13.077 [INFO][5479] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616" May 8 00:23:13.079429 containerd[1437]: time="2025-05-08T00:23:13.079308959Z" level=info msg="TearDown network for sandbox \"1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616\" successfully" May 8 00:23:13.079429 containerd[1437]: time="2025-05-08T00:23:13.079339279Z" level=info msg="StopPodSandbox for \"1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616\" returns successfully" May 8 00:23:13.080527 containerd[1437]: time="2025-05-08T00:23:13.080263439Z" level=info msg="RemovePodSandbox for \"1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616\"" May 8 00:23:13.080527 containerd[1437]: time="2025-05-08T00:23:13.080292839Z" level=info msg="Forcibly stopping sandbox \"1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616\"" May 8 00:23:13.166707 containerd[1437]: 2025-05-08 00:23:13.134 [WARNING][5510] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5cbb7457c4--9bvwx-eth0", GenerateName:"calico-apiserver-5cbb7457c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"76116a76-06fa-4e4c-be4b-d6de000109ca", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 22, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cbb7457c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4cf26aafdc6d6a449924bd9e9305aa9e895491d3724a8c5083f467679a3f7134", Pod:"calico-apiserver-5cbb7457c4-9bvwx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali40a4a5aad8d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:23:13.166707 containerd[1437]: 2025-05-08 00:23:13.134 [INFO][5510] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616" May 8 00:23:13.166707 containerd[1437]: 2025-05-08 00:23:13.134 [INFO][5510] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616" iface="eth0" netns="" May 8 00:23:13.166707 containerd[1437]: 2025-05-08 00:23:13.134 [INFO][5510] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616" May 8 00:23:13.166707 containerd[1437]: 2025-05-08 00:23:13.134 [INFO][5510] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616" May 8 00:23:13.166707 containerd[1437]: 2025-05-08 00:23:13.153 [INFO][5518] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616" HandleID="k8s-pod-network.1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616" Workload="localhost-k8s-calico--apiserver--5cbb7457c4--9bvwx-eth0" May 8 00:23:13.166707 containerd[1437]: 2025-05-08 00:23:13.153 [INFO][5518] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:23:13.166707 containerd[1437]: 2025-05-08 00:23:13.153 [INFO][5518] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:23:13.166707 containerd[1437]: 2025-05-08 00:23:13.161 [WARNING][5518] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616" HandleID="k8s-pod-network.1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616" Workload="localhost-k8s-calico--apiserver--5cbb7457c4--9bvwx-eth0" May 8 00:23:13.166707 containerd[1437]: 2025-05-08 00:23:13.161 [INFO][5518] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616" HandleID="k8s-pod-network.1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616" Workload="localhost-k8s-calico--apiserver--5cbb7457c4--9bvwx-eth0" May 8 00:23:13.166707 containerd[1437]: 2025-05-08 00:23:13.163 [INFO][5518] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:23:13.166707 containerd[1437]: 2025-05-08 00:23:13.164 [INFO][5510] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616" May 8 00:23:13.167109 containerd[1437]: time="2025-05-08T00:23:13.166761356Z" level=info msg="TearDown network for sandbox \"1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616\" successfully" May 8 00:23:13.193034 containerd[1437]: time="2025-05-08T00:23:13.192977476Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:23:13.193350 containerd[1437]: time="2025-05-08T00:23:13.193079916Z" level=info msg="RemovePodSandbox \"1d32210bc02bf279bc2e673c369c33be39821fc9a0b2c04f002f237227078616\" returns successfully" May 8 00:23:14.416891 systemd[1]: Started sshd@15-10.0.0.58:22-10.0.0.1:57356.service - OpenSSH per-connection server daemon (10.0.0.1:57356). May 8 00:23:14.467347 sshd[5526]: Accepted publickey for core from 10.0.0.1 port 57356 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:23:14.468199 sshd[5526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:23:14.472455 systemd-logind[1426]: New session 16 of user core. May 8 00:23:14.481913 systemd[1]: Started session-16.scope - Session 16 of User core. May 8 00:23:14.614501 sshd[5526]: pam_unix(sshd:session): session closed for user core May 8 00:23:14.626775 systemd[1]: sshd@15-10.0.0.58:22-10.0.0.1:57356.service: Deactivated successfully. May 8 00:23:14.628635 systemd[1]: session-16.scope: Deactivated successfully. May 8 00:23:14.630616 systemd-logind[1426]: Session 16 logged out. Waiting for processes to exit. May 8 00:23:14.636097 systemd[1]: Started sshd@16-10.0.0.58:22-10.0.0.1:57368.service - OpenSSH per-connection server daemon (10.0.0.1:57368). May 8 00:23:14.637348 systemd-logind[1426]: Removed session 16. May 8 00:23:14.665154 sshd[5541]: Accepted publickey for core from 10.0.0.1 port 57368 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:23:14.666453 sshd[5541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:23:14.671625 systemd-logind[1426]: New session 17 of user core. May 8 00:23:14.679900 systemd[1]: Started session-17.scope - Session 17 of User core. May 8 00:23:14.886024 sshd[5541]: pam_unix(sshd:session): session closed for user core May 8 00:23:14.894639 systemd[1]: sshd@16-10.0.0.58:22-10.0.0.1:57368.service: Deactivated successfully. May 8 00:23:14.896331 systemd[1]: session-17.scope: Deactivated successfully. 
May 8 00:23:14.897698 systemd-logind[1426]: Session 17 logged out. Waiting for processes to exit.
May 8 00:23:14.902986 systemd[1]: Started sshd@17-10.0.0.58:22-10.0.0.1:57384.service - OpenSSH per-connection server daemon (10.0.0.1:57384).
May 8 00:23:14.904207 systemd-logind[1426]: Removed session 17.
May 8 00:23:14.940366 sshd[5553]: Accepted publickey for core from 10.0.0.1 port 57384 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE
May 8 00:23:14.941706 sshd[5553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:23:14.945797 systemd-logind[1426]: New session 18 of user core.
May 8 00:23:14.954860 systemd[1]: Started session-18.scope - Session 18 of User core.
May 8 00:23:16.355906 sshd[5553]: pam_unix(sshd:session): session closed for user core
May 8 00:23:16.366360 systemd[1]: sshd@17-10.0.0.58:22-10.0.0.1:57384.service: Deactivated successfully.
May 8 00:23:16.367889 systemd[1]: session-18.scope: Deactivated successfully.
May 8 00:23:16.370939 systemd-logind[1426]: Session 18 logged out. Waiting for processes to exit.
May 8 00:23:16.381025 systemd[1]: Started sshd@18-10.0.0.58:22-10.0.0.1:57394.service - OpenSSH per-connection server daemon (10.0.0.1:57394).
May 8 00:23:16.384028 systemd-logind[1426]: Removed session 18.
May 8 00:23:16.411819 sshd[5574]: Accepted publickey for core from 10.0.0.1 port 57394 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE
May 8 00:23:16.413079 sshd[5574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:23:16.418041 systemd-logind[1426]: New session 19 of user core.
May 8 00:23:16.429885 systemd[1]: Started session-19.scope - Session 19 of User core.
May 8 00:23:16.740513 sshd[5574]: pam_unix(sshd:session): session closed for user core
May 8 00:23:16.751942 systemd[1]: sshd@18-10.0.0.58:22-10.0.0.1:57394.service: Deactivated successfully.
May 8 00:23:16.754445 systemd[1]: session-19.scope: Deactivated successfully.
May 8 00:23:16.756949 systemd-logind[1426]: Session 19 logged out. Waiting for processes to exit.
May 8 00:23:16.766010 systemd[1]: Started sshd@19-10.0.0.58:22-10.0.0.1:57410.service - OpenSSH per-connection server daemon (10.0.0.1:57410).
May 8 00:23:16.767523 systemd-logind[1426]: Removed session 19.
May 8 00:23:16.795094 sshd[5587]: Accepted publickey for core from 10.0.0.1 port 57410 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE
May 8 00:23:16.796563 sshd[5587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:23:16.800765 systemd-logind[1426]: New session 20 of user core.
May 8 00:23:16.806901 systemd[1]: Started session-20.scope - Session 20 of User core.
May 8 00:23:16.941946 sshd[5587]: pam_unix(sshd:session): session closed for user core
May 8 00:23:16.945168 systemd[1]: sshd@19-10.0.0.58:22-10.0.0.1:57410.service: Deactivated successfully.
May 8 00:23:16.948260 systemd[1]: session-20.scope: Deactivated successfully.
May 8 00:23:16.949938 systemd-logind[1426]: Session 20 logged out. Waiting for processes to exit.
May 8 00:23:16.950875 systemd-logind[1426]: Removed session 20.
May 8 00:23:21.952508 systemd[1]: Started sshd@20-10.0.0.58:22-10.0.0.1:57426.service - OpenSSH per-connection server daemon (10.0.0.1:57426).
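[Editor's note] Each "Accepted publickey" entry above identifies the client key by an OpenSSH-style fingerprint: "SHA256:" followed by the unpadded base64 encoding of a SHA-256 digest over the key's wire-format blob. A small self-contained sketch of computing that form with the golang.org/x/crypto/ssh package (the generated throwaway key is an assumption for the example; the key in this log is not reproduced):

// fingerprint.go - computes an OpenSSH-style SHA256 fingerprint.
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"fmt"
	"log"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Generate a throwaway Ed25519 key so the example is self-contained.
	pub, _, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	sshPub, err := ssh.NewPublicKey(pub)
	if err != nil {
		log.Fatal(err)
	}
	// FingerprintSHA256 returns "SHA256:<unpadded base64>", the same form
	// sshd prints in its "Accepted publickey ... SHA256:..." entries.
	fmt.Println(ssh.FingerprintSHA256(sshPub))
}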
May 8 00:23:21.984636 sshd[5606]: Accepted publickey for core from 10.0.0.1 port 57426 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE
May 8 00:23:21.985947 sshd[5606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:23:21.989892 systemd-logind[1426]: New session 21 of user core.
May 8 00:23:22.001885 systemd[1]: Started session-21.scope - Session 21 of User core.
May 8 00:23:22.121510 sshd[5606]: pam_unix(sshd:session): session closed for user core
May 8 00:23:22.125156 systemd[1]: sshd@20-10.0.0.58:22-10.0.0.1:57426.service: Deactivated successfully.
May 8 00:23:22.128302 systemd[1]: session-21.scope: Deactivated successfully.
May 8 00:23:22.128911 systemd-logind[1426]: Session 21 logged out. Waiting for processes to exit.
May 8 00:23:22.129796 systemd-logind[1426]: Removed session 21.
May 8 00:23:27.133684 systemd[1]: Started sshd@21-10.0.0.58:22-10.0.0.1:52866.service - OpenSSH per-connection server daemon (10.0.0.1:52866).
May 8 00:23:27.166974 sshd[5625]: Accepted publickey for core from 10.0.0.1 port 52866 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE
May 8 00:23:27.168187 sshd[5625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:23:27.172297 systemd-logind[1426]: New session 22 of user core.
May 8 00:23:27.179924 systemd[1]: Started session-22.scope - Session 22 of User core.
May 8 00:23:27.313959 sshd[5625]: pam_unix(sshd:session): session closed for user core
May 8 00:23:27.316622 systemd[1]: sshd@21-10.0.0.58:22-10.0.0.1:52866.service: Deactivated successfully.
May 8 00:23:27.319359 systemd[1]: session-22.scope: Deactivated successfully.
May 8 00:23:27.320713 systemd-logind[1426]: Session 22 logged out. Waiting for processes to exit.
May 8 00:23:27.323019 systemd-logind[1426]: Removed session 22.
May 8 00:23:28.605093 kubelet[2557]: E0508 00:23:28.603970 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:23:32.327053 systemd[1]: Started sshd@22-10.0.0.58:22-10.0.0.1:52868.service - OpenSSH per-connection server daemon (10.0.0.1:52868).
May 8 00:23:32.361799 sshd[5669]: Accepted publickey for core from 10.0.0.1 port 52868 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE
May 8 00:23:32.363043 sshd[5669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:23:32.367603 systemd-logind[1426]: New session 23 of user core.
May 8 00:23:32.378951 systemd[1]: Started session-23.scope - Session 23 of User core.
May 8 00:23:32.506793 sshd[5669]: pam_unix(sshd:session): session closed for user core
May 8 00:23:32.510919 systemd[1]: sshd@22-10.0.0.58:22-10.0.0.1:52868.service: Deactivated successfully.
May 8 00:23:32.515499 systemd[1]: session-23.scope: Deactivated successfully.
May 8 00:23:32.516430 systemd-logind[1426]: Session 23 logged out. Waiting for processes to exit.
May 8 00:23:32.517275 systemd-logind[1426]: Removed session 23.
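[Editor's note] The lone kubelet error above ("Nameserver limits exceeded") means the node's resolver configuration lists more nameservers than will be propagated to pods; the classic resolv.conf limit is three nameservers, so extras are dropped and only the truncated line (here "1.1.1.1 1.0.0.1 8.8.8.8") is applied. A hedged Go sketch of such a check, assuming the conventional limit of 3; this is an illustration of the behavior, not kubelet's actual dns.go:

// nameserver_limit.go - illustrative only.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// maxNameservers mirrors the conventional resolv.conf limit of three
// nameservers that triggers the "Nameserver limits exceeded" warning.
const maxNameservers = 3

// parseNameservers extracts nameserver entries from resolv.conf content.
func parseNameservers(resolvConf string) []string {
	var servers []string
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	return servers
}

func main() {
	// Hypothetical node config with four nameservers; the log's applied
	// line shows the first three survived truncation.
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"

	servers := parseNameservers(conf)
	if len(servers) > maxNameservers {
		applied := servers[:maxNameservers]
		fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
			strings.Join(applied, " "))
	}
}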