May 8 00:12:34.923762 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 8 00:12:34.923782 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed May 7 22:57:52 -00 2025
May 8 00:12:34.923792 kernel: KASLR enabled
May 8 00:12:34.923797 kernel: efi: EFI v2.7 by EDK II
May 8 00:12:34.923803 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
May 8 00:12:34.923808 kernel: random: crng init done
May 8 00:12:34.923815 kernel: ACPI: Early table checksum verification disabled
May 8 00:12:34.923821 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
May 8 00:12:34.923827 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
May 8 00:12:34.923835 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:12:34.923841 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:12:34.923846 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:12:34.923852 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:12:34.923858 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:12:34.923865 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:12:34.923873 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:12:34.923879 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:12:34.923886 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:12:34.923892 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 8 00:12:34.923898 kernel: NUMA: Failed to initialise from firmware
May 8 00:12:34.923904 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 8 00:12:34.923911 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
May 8 00:12:34.923917 kernel: Zone ranges:
May 8 00:12:34.923923 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 8 00:12:34.923929 kernel: DMA32 empty
May 8 00:12:34.923936 kernel: Normal empty
May 8 00:12:34.923943 kernel: Movable zone start for each node
May 8 00:12:34.923949 kernel: Early memory node ranges
May 8 00:12:34.923955 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
May 8 00:12:34.923961 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
May 8 00:12:34.923968 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
May 8 00:12:34.923974 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 8 00:12:34.923980 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 8 00:12:34.923986 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 8 00:12:34.923992 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 8 00:12:34.923999 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 8 00:12:34.924005 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 8 00:12:34.924012 kernel: psci: probing for conduit method from ACPI.
May 8 00:12:34.924018 kernel: psci: PSCIv1.1 detected in firmware.
May 8 00:12:34.924025 kernel: psci: Using standard PSCI v0.2 function IDs
May 8 00:12:34.924034 kernel: psci: Trusted OS migration not required
May 8 00:12:34.924040 kernel: psci: SMC Calling Convention v1.1
May 8 00:12:34.924047 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 8 00:12:34.924055 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
May 8 00:12:34.924062 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
May 8 00:12:34.924069 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 8 00:12:34.924076 kernel: Detected PIPT I-cache on CPU0
May 8 00:12:34.924083 kernel: CPU features: detected: GIC system register CPU interface
May 8 00:12:34.924089 kernel: CPU features: detected: Hardware dirty bit management
May 8 00:12:34.924096 kernel: CPU features: detected: Spectre-v4
May 8 00:12:34.924103 kernel: CPU features: detected: Spectre-BHB
May 8 00:12:34.924109 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 8 00:12:34.924116 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 8 00:12:34.924124 kernel: CPU features: detected: ARM erratum 1418040
May 8 00:12:34.924131 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 8 00:12:34.924137 kernel: alternatives: applying boot alternatives
May 8 00:12:34.924145 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=ed66668e4cab2597a697b6f83cdcbc6a64a98dbc7e2125304191704297c07daf
May 8 00:12:34.924153 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 8 00:12:34.924159 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 8 00:12:34.924166 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 8 00:12:34.924173 kernel: Fallback order for Node 0: 0
May 8 00:12:34.924179 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 8 00:12:34.924186 kernel: Policy zone: DMA
May 8 00:12:34.924192 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 8 00:12:34.924200 kernel: software IO TLB: area num 4.
May 8 00:12:34.924207 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
May 8 00:12:34.924214 kernel: Memory: 2386464K/2572288K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 185824K reserved, 0K cma-reserved)
May 8 00:12:34.924220 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 8 00:12:34.924227 kernel: rcu: Preemptible hierarchical RCU implementation.
May 8 00:12:34.924234 kernel: rcu: RCU event tracing is enabled.
May 8 00:12:34.924241 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 8 00:12:34.924248 kernel: Trampoline variant of Tasks RCU enabled.
May 8 00:12:34.924255 kernel: Tracing variant of Tasks RCU enabled.
May 8 00:12:34.924271 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 8 00:12:34.924287 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 8 00:12:34.924294 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 8 00:12:34.924303 kernel: GICv3: 256 SPIs implemented
May 8 00:12:34.924309 kernel: GICv3: 0 Extended SPIs implemented
May 8 00:12:34.924316 kernel: Root IRQ handler: gic_handle_irq
May 8 00:12:34.924323 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 8 00:12:34.924330 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 8 00:12:34.924337 kernel: ITS [mem 0x08080000-0x0809ffff]
May 8 00:12:34.924344 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
May 8 00:12:34.924351 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
May 8 00:12:34.924358 kernel: GICv3: using LPI property table @0x00000000400f0000
May 8 00:12:34.924378 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
May 8 00:12:34.924385 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 8 00:12:34.924393 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:12:34.924401 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 8 00:12:34.924408 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 8 00:12:34.924414 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 8 00:12:34.924421 kernel: arm-pv: using stolen time PV
May 8 00:12:34.924428 kernel: Console: colour dummy device 80x25
May 8 00:12:34.924435 kernel: ACPI: Core revision 20230628
May 8 00:12:34.924442 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 8 00:12:34.924449 kernel: pid_max: default: 32768 minimum: 301
May 8 00:12:34.924456 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 8 00:12:34.924464 kernel: landlock: Up and running.
May 8 00:12:34.924471 kernel: SELinux: Initializing.
May 8 00:12:34.924478 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 8 00:12:34.924485 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 8 00:12:34.924493 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 8 00:12:34.924500 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 8 00:12:34.924507 kernel: rcu: Hierarchical SRCU implementation.
May 8 00:12:34.924514 kernel: rcu: Max phase no-delay instances is 400.
May 8 00:12:34.924521 kernel: Platform MSI: ITS@0x8080000 domain created
May 8 00:12:34.924528 kernel: PCI/MSI: ITS@0x8080000 domain created
May 8 00:12:34.924535 kernel: Remapping and enabling EFI services.
May 8 00:12:34.924542 kernel: smp: Bringing up secondary CPUs ...
May 8 00:12:34.924549 kernel: Detected PIPT I-cache on CPU1
May 8 00:12:34.924556 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 8 00:12:34.924563 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
May 8 00:12:34.924570 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:12:34.924577 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 8 00:12:34.924584 kernel: Detected PIPT I-cache on CPU2
May 8 00:12:34.924590 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 8 00:12:34.924598 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
May 8 00:12:34.924605 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:12:34.924616 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 8 00:12:34.924625 kernel: Detected PIPT I-cache on CPU3
May 8 00:12:34.924632 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 8 00:12:34.924639 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
May 8 00:12:34.924646 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:12:34.924653 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 8 00:12:34.924660 kernel: smp: Brought up 1 node, 4 CPUs
May 8 00:12:34.924669 kernel: SMP: Total of 4 processors activated.
May 8 00:12:34.924676 kernel: CPU features: detected: 32-bit EL0 Support
May 8 00:12:34.924683 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 8 00:12:34.924690 kernel: CPU features: detected: Common not Private translations
May 8 00:12:34.924698 kernel: CPU features: detected: CRC32 instructions
May 8 00:12:34.924705 kernel: CPU features: detected: Enhanced Virtualization Traps
May 8 00:12:34.924712 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 8 00:12:34.924719 kernel: CPU features: detected: LSE atomic instructions
May 8 00:12:34.924728 kernel: CPU features: detected: Privileged Access Never
May 8 00:12:34.924735 kernel: CPU features: detected: RAS Extension Support
May 8 00:12:34.924742 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 8 00:12:34.924749 kernel: CPU: All CPU(s) started at EL1
May 8 00:12:34.924756 kernel: alternatives: applying system-wide alternatives
May 8 00:12:34.924763 kernel: devtmpfs: initialized
May 8 00:12:34.924770 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 8 00:12:34.924778 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 8 00:12:34.924785 kernel: pinctrl core: initialized pinctrl subsystem
May 8 00:12:34.924793 kernel: SMBIOS 3.0.0 present.
May 8 00:12:34.924801 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
May 8 00:12:34.924808 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 8 00:12:34.924815 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 8 00:12:34.924823 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 8 00:12:34.924830 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 8 00:12:34.924838 kernel: audit: initializing netlink subsys (disabled)
May 8 00:12:34.924845 kernel: audit: type=2000 audit(0.025:1): state=initialized audit_enabled=0 res=1
May 8 00:12:34.924852 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 8 00:12:34.924860 kernel: cpuidle: using governor menu
May 8 00:12:34.924867 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 8 00:12:34.924875 kernel: ASID allocator initialised with 32768 entries
May 8 00:12:34.924882 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 8 00:12:34.924889 kernel: Serial: AMBA PL011 UART driver
May 8 00:12:34.924896 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 8 00:12:34.924903 kernel: Modules: 0 pages in range for non-PLT usage
May 8 00:12:34.924910 kernel: Modules: 509024 pages in range for PLT usage
May 8 00:12:34.924918 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 8 00:12:34.924926 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 8 00:12:34.924933 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 8 00:12:34.924940 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 8 00:12:34.924947 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 8 00:12:34.924955 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 8 00:12:34.924962 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 8 00:12:34.924969 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 8 00:12:34.924976 kernel: ACPI: Added _OSI(Module Device)
May 8 00:12:34.924984 kernel: ACPI: Added _OSI(Processor Device)
May 8 00:12:34.924992 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 8 00:12:34.924999 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 8 00:12:34.925006 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 8 00:12:34.925014 kernel: ACPI: Interpreter enabled
May 8 00:12:34.925021 kernel: ACPI: Using GIC for interrupt routing
May 8 00:12:34.925028 kernel: ACPI: MCFG table detected, 1 entries
May 8 00:12:34.925035 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 8 00:12:34.925042 kernel: printk: console [ttyAMA0] enabled
May 8 00:12:34.925049 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 8 00:12:34.925188 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 8 00:12:34.925284 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 8 00:12:34.925364 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 8 00:12:34.925440 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 8 00:12:34.925523 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 8 00:12:34.925533 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 8 00:12:34.925540 kernel: PCI host bridge to bus 0000:00
May 8 00:12:34.925615 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 8 00:12:34.925703 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 8 00:12:34.925765 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 8 00:12:34.925825 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 8 00:12:34.925910 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 8 00:12:34.925991 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 8 00:12:34.926064 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 8 00:12:34.926134 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 8 00:12:34.926204 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 8 00:12:34.926338 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 8 00:12:34.926417 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 8 00:12:34.926486 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 8 00:12:34.926551 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 8 00:12:34.926618 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 8 00:12:34.926679 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 8 00:12:34.926689 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 8 00:12:34.926697 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 8 00:12:34.926705 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 8 00:12:34.926712 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 8 00:12:34.926719 kernel: iommu: Default domain type: Translated
May 8 00:12:34.926727 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 8 00:12:34.926736 kernel: efivars: Registered efivars operations
May 8 00:12:34.926744 kernel: vgaarb: loaded
May 8 00:12:34.926752 kernel: clocksource: Switched to clocksource arch_sys_counter
May 8 00:12:34.926760 kernel: VFS: Disk quotas dquot_6.6.0
May 8 00:12:34.926767 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 8 00:12:34.926775 kernel: pnp: PnP ACPI init
May 8 00:12:34.926865 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 8 00:12:34.926876 kernel: pnp: PnP ACPI: found 1 devices
May 8 00:12:34.926884 kernel: NET: Registered PF_INET protocol family
May 8 00:12:34.926894 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 8 00:12:34.926902 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 8 00:12:34.926909 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 8 00:12:34.926917 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 8 00:12:34.926925 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 8 00:12:34.926932 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 8 00:12:34.926940 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 8 00:12:34.926947 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 8 00:12:34.926956 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 8 00:12:34.926964 kernel: PCI: CLS 0 bytes, default 64
May 8 00:12:34.926972 kernel: kvm [1]: HYP mode not available
May 8 00:12:34.926979 kernel: Initialise system trusted keyrings
May 8 00:12:34.926987 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 8 00:12:34.926994 kernel: Key type asymmetric registered
May 8 00:12:34.927002 kernel: Asymmetric key parser 'x509' registered
May 8 00:12:34.927009 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 8 00:12:34.927017 kernel: io scheduler mq-deadline registered
May 8 00:12:34.927024 kernel: io scheduler kyber registered
May 8 00:12:34.927033 kernel: io scheduler bfq registered
May 8 00:12:34.927040 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 8 00:12:34.927048 kernel: ACPI: button: Power Button [PWRB]
May 8 00:12:34.927056 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 8 00:12:34.927124 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 8 00:12:34.927134 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 8 00:12:34.927141 kernel: thunder_xcv, ver 1.0
May 8 00:12:34.927149 kernel: thunder_bgx, ver 1.0
May 8 00:12:34.927156 kernel: nicpf, ver 1.0
May 8 00:12:34.927165 kernel: nicvf, ver 1.0
May 8 00:12:34.927240 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 8 00:12:34.927337 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-08T00:12:34 UTC (1746663154)
May 8 00:12:34.927350 kernel: hid: raw HID events driver (C) Jiri Kosina
May 8 00:12:34.927358 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 8 00:12:34.927366 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 8 00:12:34.927373 kernel: watchdog: Hard watchdog permanently disabled
May 8 00:12:34.927381 kernel: NET: Registered PF_INET6 protocol family
May 8 00:12:34.927391 kernel: Segment Routing with IPv6
May 8 00:12:34.927399 kernel: In-situ OAM (IOAM) with IPv6
May 8 00:12:34.927406 kernel: NET: Registered PF_PACKET protocol family
May 8 00:12:34.927414 kernel: Key type dns_resolver registered
May 8 00:12:34.927421 kernel: registered taskstats version 1
May 8 00:12:34.927429 kernel: Loading compiled-in X.509 certificates
May 8 00:12:34.927437 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: e350a514a19a92525be490be8fe368f9972240ea'
May 8 00:12:34.927444 kernel: Key type .fscrypt registered
May 8 00:12:34.927451 kernel: Key type fscrypt-provisioning registered
May 8 00:12:34.927460 kernel: ima: No TPM chip found, activating TPM-bypass!
May 8 00:12:34.927468 kernel: ima: Allocated hash algorithm: sha1
May 8 00:12:34.927475 kernel: ima: No architecture policies found
May 8 00:12:34.927482 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 8 00:12:34.927489 kernel: clk: Disabling unused clocks
May 8 00:12:34.927497 kernel: Freeing unused kernel memory: 39424K
May 8 00:12:34.927504 kernel: Run /init as init process
May 8 00:12:34.927511 kernel: with arguments:
May 8 00:12:34.927518 kernel: /init
May 8 00:12:34.927526 kernel: with environment:
May 8 00:12:34.927533 kernel: HOME=/
May 8 00:12:34.927540 kernel: TERM=linux
May 8 00:12:34.927547 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 8 00:12:34.927556 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 8 00:12:34.927566 systemd[1]: Detected virtualization kvm.
May 8 00:12:34.927573 systemd[1]: Detected architecture arm64.
May 8 00:12:34.927582 systemd[1]: Running in initrd.
May 8 00:12:34.927590 systemd[1]: No hostname configured, using default hostname.
May 8 00:12:34.927597 systemd[1]: Hostname set to <localhost>.
May 8 00:12:34.927605 systemd[1]: Initializing machine ID from VM UUID.
May 8 00:12:34.927613 systemd[1]: Queued start job for default target initrd.target.
May 8 00:12:34.927621 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 00:12:34.927629 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 00:12:34.927637 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 8 00:12:34.927647 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 8 00:12:34.927655 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 8 00:12:34.927663 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 8 00:12:34.927672 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 8 00:12:34.927681 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 8 00:12:34.927689 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 00:12:34.927697 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 8 00:12:34.927706 systemd[1]: Reached target paths.target - Path Units.
May 8 00:12:34.927715 systemd[1]: Reached target slices.target - Slice Units.
May 8 00:12:34.927722 systemd[1]: Reached target swap.target - Swaps.
May 8 00:12:34.927730 systemd[1]: Reached target timers.target - Timer Units.
May 8 00:12:34.927738 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 8 00:12:34.927746 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 8 00:12:34.927754 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 8 00:12:34.927762 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 8 00:12:34.927770 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 8 00:12:34.927779 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 8 00:12:34.927787 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 00:12:34.927794 systemd[1]: Reached target sockets.target - Socket Units.
May 8 00:12:34.927802 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 8 00:12:34.927810 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 8 00:12:34.927818 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 8 00:12:34.927826 systemd[1]: Starting systemd-fsck-usr.service...
May 8 00:12:34.927833 systemd[1]: Starting systemd-journald.service - Journal Service...
May 8 00:12:34.927842 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 8 00:12:34.927850 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:12:34.927862 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 8 00:12:34.927870 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 00:12:34.927878 systemd[1]: Finished systemd-fsck-usr.service.
May 8 00:12:34.927886 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 8 00:12:34.927895 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:12:34.927919 systemd-journald[238]: Collecting audit messages is disabled.
May 8 00:12:34.927938 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 8 00:12:34.927948 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:12:34.927957 systemd-journald[238]: Journal started
May 8 00:12:34.927976 systemd-journald[238]: Runtime Journal (/run/log/journal/ee26b1cdab8045b78fc2c239bf17c6b0) is 5.9M, max 47.3M, 41.4M free.
May 8 00:12:34.913196 systemd-modules-load[239]: Inserted module 'overlay'
May 8 00:12:34.936605 systemd-modules-load[239]: Inserted module 'br_netfilter'
May 8 00:12:34.938304 kernel: Bridge firewalling registered
May 8 00:12:34.938326 systemd[1]: Started systemd-journald.service - Journal Service.
May 8 00:12:34.939596 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 8 00:12:34.940841 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 8 00:12:34.945936 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 8 00:12:34.948783 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 8 00:12:34.952252 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 8 00:12:34.956868 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:12:34.958476 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 8 00:12:34.960710 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 8 00:12:34.963708 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 00:12:34.981483 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 00:12:34.984420 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 8 00:12:34.991742 dracut-cmdline[269]: dracut-dracut-053
May 8 00:12:34.994234 dracut-cmdline[269]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=ed66668e4cab2597a697b6f83cdcbc6a64a98dbc7e2125304191704297c07daf
May 8 00:12:35.024585 systemd-resolved[275]: Positive Trust Anchors:
May 8 00:12:35.024602 systemd-resolved[275]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 8 00:12:35.024634 systemd-resolved[275]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 8 00:12:35.029434 systemd-resolved[275]: Defaulting to hostname 'linux'.
May 8 00:12:35.030427 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 8 00:12:35.035709 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 8 00:12:35.071301 kernel: SCSI subsystem initialized
May 8 00:12:35.076292 kernel: Loading iSCSI transport class v2.0-870.
May 8 00:12:35.084304 kernel: iscsi: registered transport (tcp)
May 8 00:12:35.099324 kernel: iscsi: registered transport (qla4xxx)
May 8 00:12:35.099365 kernel: QLogic iSCSI HBA Driver
May 8 00:12:35.143459 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 8 00:12:35.149420 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 8 00:12:35.168357 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 8 00:12:35.168401 kernel: device-mapper: uevent: version 1.0.3
May 8 00:12:35.169407 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 8 00:12:35.216309 kernel: raid6: neonx8 gen() 15767 MB/s
May 8 00:12:35.233298 kernel: raid6: neonx4 gen() 15660 MB/s
May 8 00:12:35.250296 kernel: raid6: neonx2 gen() 13211 MB/s
May 8 00:12:35.267290 kernel: raid6: neonx1 gen() 10472 MB/s
May 8 00:12:35.284298 kernel: raid6: int64x8 gen() 6940 MB/s
May 8 00:12:35.301302 kernel: raid6: int64x4 gen() 7331 MB/s
May 8 00:12:35.318316 kernel: raid6: int64x2 gen() 6121 MB/s
May 8 00:12:35.335402 kernel: raid6: int64x1 gen() 5056 MB/s
May 8 00:12:35.335432 kernel: raid6: using algorithm neonx8 gen() 15767 MB/s
May 8 00:12:35.353412 kernel: raid6: .... xor() 11926 MB/s, rmw enabled
May 8 00:12:35.353463 kernel: raid6: using neon recovery algorithm
May 8 00:12:35.358306 kernel: xor: measuring software checksum speed
May 8 00:12:35.358333 kernel: 8regs : 19119 MB/sec
May 8 00:12:35.359474 kernel: 32regs : 19646 MB/sec
May 8 00:12:35.360739 kernel: arm64_neon : 26989 MB/sec
May 8 00:12:35.360753 kernel: xor: using function: arm64_neon (26989 MB/sec)
May 8 00:12:35.411322 kernel: Btrfs loaded, zoned=no, fsverity=no
May 8 00:12:35.423357 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 8 00:12:35.435486 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 00:12:35.448682 systemd-udevd[459]: Using default interface naming scheme 'v255'.
May 8 00:12:35.451782 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 00:12:35.470440 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 8 00:12:35.482889 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation
May 8 00:12:35.508056 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 8 00:12:35.519408 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 8 00:12:35.560719 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 00:12:35.568446 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 8 00:12:35.580765 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 8 00:12:35.583913 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 8 00:12:35.586640 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 00:12:35.587864 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 8 00:12:35.596426 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 8 00:12:35.607700 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 8 00:12:35.610451 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 8 00:12:35.629362 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 8 00:12:35.629530 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 8 00:12:35.629542 kernel: GPT:9289727 != 19775487
May 8 00:12:35.629559 kernel: GPT:Alternate GPT header not at the end of the disk.
May 8 00:12:35.629568 kernel: GPT:9289727 != 19775487
May 8 00:12:35.630221 kernel: GPT: Use GNU Parted to correct GPT errors.
May 8 00:12:35.630245 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:12:35.621100 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 8 00:12:35.621211 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:12:35.629866 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:12:35.631011 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 00:12:35.631159 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:12:35.633365 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:12:35.645586 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:12:35.650351 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (507)
May 8 00:12:35.650374 kernel: BTRFS: device fsid 0be52225-f929-4b89-9354-df54a643ece0 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (508)
May 8 00:12:35.657882 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 8 00:12:35.659359 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:12:35.672015 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 8 00:12:35.676556 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 8 00:12:35.680402 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 8 00:12:35.681620 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 8 00:12:35.696436 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 8 00:12:35.698213 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:12:35.704001 disk-uuid[548]: Primary Header is updated.
May 8 00:12:35.704001 disk-uuid[548]: Secondary Entries is updated.
May 8 00:12:35.704001 disk-uuid[548]: Secondary Header is updated.
May 8 00:12:35.709304 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:12:35.721425 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:12:36.720293 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:12:36.720448 disk-uuid[549]: The operation has completed successfully.
May 8 00:12:36.738777 systemd[1]: disk-uuid.service: Deactivated successfully.
May 8 00:12:36.738870 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 8 00:12:36.762548 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 8 00:12:36.765665 sh[573]: Success
May 8 00:12:36.778295 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 8 00:12:36.807729 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 8 00:12:36.813457 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 8 00:12:36.815841 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 8 00:12:36.824486 kernel: BTRFS info (device dm-0): first mount of filesystem 0be52225-f929-4b89-9354-df54a643ece0
May 8 00:12:36.824520 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 8 00:12:36.824531 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 8 00:12:36.826368 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 8 00:12:36.826397 kernel: BTRFS info (device dm-0): using free space tree
May 8 00:12:36.830203 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 8 00:12:36.831525 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 8 00:12:36.839438 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 8 00:12:36.841063 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 8 00:12:36.850654 kernel: BTRFS info (device vda6): first mount of filesystem a4a0b304-74d7-4600-bc4f-fa8751ae54a8
May 8 00:12:36.850703 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 8 00:12:36.850714 kernel: BTRFS info (device vda6): using free space tree
May 8 00:12:36.854304 kernel: BTRFS info (device vda6): auto enabling async discard
May 8 00:12:36.860850 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 8 00:12:36.863297 kernel: BTRFS info (device vda6): last unmount of filesystem a4a0b304-74d7-4600-bc4f-fa8751ae54a8
May 8 00:12:36.869464 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 8 00:12:36.880438 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 8 00:12:36.931332 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 8 00:12:36.945436 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 8 00:12:36.969600 systemd-networkd[759]: lo: Link UP
May 8 00:12:36.969612 systemd-networkd[759]: lo: Gained carrier
May 8 00:12:36.970244 systemd-networkd[759]: Enumeration completed
May 8 00:12:36.970515 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 8 00:12:36.970832 systemd-networkd[759]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 00:12:36.970835 systemd-networkd[759]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 8 00:12:36.972360 systemd[1]: Reached target network.target - Network.
May 8 00:12:36.972440 systemd-networkd[759]: eth0: Link UP
May 8 00:12:36.972443 systemd-networkd[759]: eth0: Gained carrier
May 8 00:12:36.972450 systemd-networkd[759]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 00:12:36.982383 ignition[668]: Ignition 2.19.0
May 8 00:12:36.982389 ignition[668]: Stage: fetch-offline
May 8 00:12:36.982421 ignition[668]: no configs at "/usr/lib/ignition/base.d"
May 8 00:12:36.982428 ignition[668]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:12:36.982579 ignition[668]: parsed url from cmdline: ""
May 8 00:12:36.982582 ignition[668]: no config URL provided
May 8 00:12:36.982586 ignition[668]: reading system config file "/usr/lib/ignition/user.ign"
May 8 00:12:36.982593 ignition[668]: no config at "/usr/lib/ignition/user.ign"
May 8 00:12:36.982614 ignition[668]: op(1): [started] loading QEMU firmware config module
May 8 00:12:36.982618 ignition[668]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 8 00:12:36.991257 ignition[668]: op(1): [finished] loading QEMU firmware config module
May 8 00:12:36.991338 systemd-networkd[759]: eth0: DHCPv4 address 10.0.0.14/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 8 00:12:37.029466 ignition[668]: parsing config with SHA512: 8949bcac2cd1db39eb3443abe5f41e428c2c53e6bcc3ea2d0082e1cd4822c96212d2cb326d7690ba6466caf990a8cbfafdcb7615eb3960a9f3126396b2df3a75
May 8 00:12:37.033637 unknown[668]: fetched base config from "system"
May 8 00:12:37.033646 unknown[668]: fetched user config from "qemu"
May 8 00:12:37.035170 ignition[668]: fetch-offline: fetch-offline passed
May 8 00:12:37.035289 ignition[668]: Ignition finished successfully
May 8 00:12:37.037754 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 8 00:12:37.039071 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 8 00:12:37.051502 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 8 00:12:37.061219 ignition[771]: Ignition 2.19.0
May 8 00:12:37.061229 ignition[771]: Stage: kargs
May 8 00:12:37.061410 ignition[771]: no configs at "/usr/lib/ignition/base.d"
May 8 00:12:37.061419 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:12:37.064745 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 8 00:12:37.062242 ignition[771]: kargs: kargs passed
May 8 00:12:37.062336 ignition[771]: Ignition finished successfully
May 8 00:12:37.067033 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 8 00:12:37.079325 ignition[779]: Ignition 2.19.0
May 8 00:12:37.079334 ignition[779]: Stage: disks
May 8 00:12:37.079492 ignition[779]: no configs at "/usr/lib/ignition/base.d"
May 8 00:12:37.079501 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:12:37.081992 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 8 00:12:37.080384 ignition[779]: disks: disks passed
May 8 00:12:37.083283 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 8 00:12:37.080434 ignition[779]: Ignition finished successfully
May 8 00:12:37.085046 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 8 00:12:37.086974 systemd[1]: Reached target local-fs.target - Local File Systems.
May 8 00:12:37.088398 systemd[1]: Reached target sysinit.target - System Initialization.
May 8 00:12:37.090182 systemd[1]: Reached target basic.target - Basic System.
May 8 00:12:37.097442 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 8 00:12:37.107638 systemd-fsck[789]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 8 00:12:37.111775 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 8 00:12:37.121423 systemd[1]: Mounting sysroot.mount - /sysroot...
May 8 00:12:37.164303 kernel: EXT4-fs (vda9): mounted filesystem f1546e2a-34df-485a-a644-37e10cd925e0 r/w with ordered data mode. Quota mode: none.
May 8 00:12:37.164482 systemd[1]: Mounted sysroot.mount - /sysroot.
May 8 00:12:37.165756 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 8 00:12:37.177359 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 8 00:12:37.179620 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 8 00:12:37.180947 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 8 00:12:37.185378 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (797)
May 8 00:12:37.180988 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 8 00:12:37.181010 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 8 00:12:37.191688 kernel: BTRFS info (device vda6): first mount of filesystem a4a0b304-74d7-4600-bc4f-fa8751ae54a8
May 8 00:12:37.191706 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 8 00:12:37.191716 kernel: BTRFS info (device vda6): using free space tree
May 8 00:12:37.187917 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 8 00:12:37.191377 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 8 00:12:37.195289 kernel: BTRFS info (device vda6): auto enabling async discard
May 8 00:12:37.196795 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 8 00:12:37.228577 initrd-setup-root[821]: cut: /sysroot/etc/passwd: No such file or directory
May 8 00:12:37.232449 initrd-setup-root[828]: cut: /sysroot/etc/group: No such file or directory
May 8 00:12:37.236539 initrd-setup-root[835]: cut: /sysroot/etc/shadow: No such file or directory
May 8 00:12:37.240074 initrd-setup-root[842]: cut: /sysroot/etc/gshadow: No such file or directory
May 8 00:12:37.308169 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 8 00:12:37.319373 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 8 00:12:37.321609 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 8 00:12:37.326285 kernel: BTRFS info (device vda6): last unmount of filesystem a4a0b304-74d7-4600-bc4f-fa8751ae54a8
May 8 00:12:37.342850 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 8 00:12:37.344623 ignition[911]: INFO : Ignition 2.19.0
May 8 00:12:37.344623 ignition[911]: INFO : Stage: mount
May 8 00:12:37.344623 ignition[911]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:12:37.344623 ignition[911]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:12:37.348921 ignition[911]: INFO : mount: mount passed
May 8 00:12:37.348921 ignition[911]: INFO : Ignition finished successfully
May 8 00:12:37.346625 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 8 00:12:37.353366 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 8 00:12:37.823368 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 8 00:12:37.846453 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 8 00:12:37.853184 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (923)
May 8 00:12:37.853214 kernel: BTRFS info (device vda6): first mount of filesystem a4a0b304-74d7-4600-bc4f-fa8751ae54a8
May 8 00:12:37.854175 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 8 00:12:37.854202 kernel: BTRFS info (device vda6): using free space tree
May 8 00:12:37.857299 kernel: BTRFS info (device vda6): auto enabling async discard
May 8 00:12:37.858063 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 8 00:12:37.873783 ignition[940]: INFO : Ignition 2.19.0
May 8 00:12:37.873783 ignition[940]: INFO : Stage: files
May 8 00:12:37.875333 ignition[940]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:12:37.875333 ignition[940]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:12:37.875333 ignition[940]: DEBUG : files: compiled without relabeling support, skipping
May 8 00:12:37.878786 ignition[940]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 8 00:12:37.878786 ignition[940]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 8 00:12:37.878786 ignition[940]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 8 00:12:37.878786 ignition[940]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 8 00:12:37.878786 ignition[940]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 8 00:12:37.878050 unknown[940]: wrote ssh authorized keys file for user: core
May 8 00:12:37.886117 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 8 00:12:37.886117 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 8 00:12:37.983385 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 8 00:12:38.566460 systemd-networkd[759]: eth0: Gained IPv6LL
May 8 00:12:39.384438 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 8 00:12:39.384438 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 8 00:12:39.388382 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 8 00:12:39.388382 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 8 00:12:39.388382 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 8 00:12:39.388382 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 00:12:39.388382 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 00:12:39.388382 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 8 00:12:39.388382 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 8 00:12:39.388382 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 8 00:12:39.388382 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 8 00:12:39.388382 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 8 00:12:39.388382 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 8 00:12:39.388382 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 8 00:12:39.388382 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
May 8 00:12:39.717931 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 8 00:12:40.061186 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 8 00:12:40.061186 ignition[940]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 8 00:12:40.064666 ignition[940]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 00:12:40.064666 ignition[940]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 00:12:40.064666 ignition[940]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 8 00:12:40.064666 ignition[940]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
May 8 00:12:40.064666 ignition[940]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 8 00:12:40.064666 ignition[940]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 8 00:12:40.064666 ignition[940]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
May 8 00:12:40.064666 ignition[940]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
May 8 00:12:40.084223 ignition[940]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 8 00:12:40.087732 ignition[940]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 8 00:12:40.090210 ignition[940]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
May 8 00:12:40.090210 ignition[940]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
May 8 00:12:40.090210 ignition[940]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
May 8 00:12:40.090210 ignition[940]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
May 8 00:12:40.090210 ignition[940]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 8 00:12:40.090210 ignition[940]: INFO : files: files passed
May 8 00:12:40.090210 ignition[940]: INFO : Ignition finished successfully
May 8 00:12:40.094046 systemd[1]: Finished ignition-files.service - Ignition (files).
May 8 00:12:40.106498 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 8 00:12:40.108960 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 8 00:12:40.110488 systemd[1]: ignition-quench.service: Deactivated successfully.
May 8 00:12:40.110563 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 8 00:12:40.116581 initrd-setup-root-after-ignition[968]: grep: /sysroot/oem/oem-release: No such file or directory
May 8 00:12:40.118808 initrd-setup-root-after-ignition[970]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 8 00:12:40.118808 initrd-setup-root-after-ignition[970]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 8 00:12:40.122960 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 8 00:12:40.120484 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 8 00:12:40.124477 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 8 00:12:40.133403 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 8 00:12:40.153137 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 8 00:12:40.153251 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 8 00:12:40.155446 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 8 00:12:40.157250 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 8 00:12:40.159082 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 8 00:12:40.159774 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 8 00:12:40.174759 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 8 00:12:40.177091 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 8 00:12:40.187721 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 8 00:12:40.188917 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 00:12:40.190917 systemd[1]: Stopped target timers.target - Timer Units.
May 8 00:12:40.192649 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 8 00:12:40.192759 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 8 00:12:40.195251 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 8 00:12:40.197279 systemd[1]: Stopped target basic.target - Basic System. May 8 00:12:40.198933 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 8 00:12:40.200614 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 8 00:12:40.202508 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 8 00:12:40.204444 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 8 00:12:40.206257 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 8 00:12:40.208200 systemd[1]: Stopped target sysinit.target - System Initialization. May 8 00:12:40.210148 systemd[1]: Stopped target local-fs.target - Local File Systems. May 8 00:12:40.211849 systemd[1]: Stopped target swap.target - Swaps. May 8 00:12:40.213334 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 8 00:12:40.213448 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 8 00:12:40.215723 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 8 00:12:40.217617 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:12:40.219454 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 8 00:12:40.221365 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 00:12:40.222710 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 8 00:12:40.222818 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 8 00:12:40.225560 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 8 00:12:40.225676 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 8 00:12:40.227599 systemd[1]: Stopped target paths.target - Path Units. May 8 00:12:40.229120 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 8 00:12:40.232346 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 00:12:40.233583 systemd[1]: Stopped target slices.target - Slice Units. May 8 00:12:40.235686 systemd[1]: Stopped target sockets.target - Socket Units. May 8 00:12:40.237195 systemd[1]: iscsid.socket: Deactivated successfully. May 8 00:12:40.237301 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 8 00:12:40.238838 systemd[1]: iscsiuio.socket: Deactivated successfully. May 8 00:12:40.238915 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 8 00:12:40.240399 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 8 00:12:40.240508 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 8 00:12:40.242268 systemd[1]: ignition-files.service: Deactivated successfully. May 8 00:12:40.242390 systemd[1]: Stopped ignition-files.service - Ignition (files). May 8 00:12:40.260477 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 8 00:12:40.261364 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 8 00:12:40.261498 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 8 00:12:40.266481 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
May 8 00:12:40.267331 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 8 00:12:40.267458 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:12:40.269255 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 8 00:12:40.269365 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 8 00:12:40.274030 ignition[994]: INFO : Ignition 2.19.0 May 8 00:12:40.274030 ignition[994]: INFO : Stage: umount May 8 00:12:40.274813 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 8 00:12:40.278749 ignition[994]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:12:40.278749 ignition[994]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:12:40.278749 ignition[994]: INFO : umount: umount passed May 8 00:12:40.278749 ignition[994]: INFO : Ignition finished successfully May 8 00:12:40.274894 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 8 00:12:40.276479 systemd[1]: ignition-mount.service: Deactivated successfully. May 8 00:12:40.276549 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 8 00:12:40.278311 systemd[1]: Stopped target network.target - Network. May 8 00:12:40.280510 systemd[1]: ignition-disks.service: Deactivated successfully. May 8 00:12:40.280571 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 8 00:12:40.283438 systemd[1]: ignition-kargs.service: Deactivated successfully. May 8 00:12:40.283485 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 8 00:12:40.284534 systemd[1]: ignition-setup.service: Deactivated successfully. May 8 00:12:40.284574 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 8 00:12:40.286500 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 8 00:12:40.286544 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 8 00:12:40.288350 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 8 00:12:40.291330 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 8 00:12:40.294123 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 8 00:12:40.297349 systemd-networkd[759]: eth0: DHCPv6 lease lost May 8 00:12:40.298848 systemd[1]: systemd-resolved.service: Deactivated successfully. May 8 00:12:40.298950 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 8 00:12:40.301880 systemd[1]: systemd-networkd.service: Deactivated successfully. May 8 00:12:40.301991 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 8 00:12:40.304486 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 8 00:12:40.304534 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 8 00:12:40.317410 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 8 00:12:40.318304 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 8 00:12:40.318376 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 8 00:12:40.320378 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 8 00:12:40.320427 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 8 00:12:40.322222 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 8 00:12:40.322290 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
May 8 00:12:40.324340 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 8 00:12:40.324385 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 00:12:40.326491 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 00:12:40.335730 systemd[1]: network-cleanup.service: Deactivated successfully. May 8 00:12:40.335823 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 8 00:12:40.346901 systemd[1]: systemd-udevd.service: Deactivated successfully. May 8 00:12:40.347081 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 00:12:40.349381 systemd[1]: sysroot-boot.service: Deactivated successfully. May 8 00:12:40.349481 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 8 00:12:40.351576 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 8 00:12:40.351630 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 8 00:12:40.352835 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 8 00:12:40.352868 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 8 00:12:40.354500 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 8 00:12:40.354547 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 8 00:12:40.357148 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 8 00:12:40.357192 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 8 00:12:40.360002 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 8 00:12:40.360047 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:12:40.362111 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 8 00:12:40.362155 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 8 00:12:40.374424 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 8 00:12:40.375451 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 8 00:12:40.375510 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 00:12:40.377628 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 00:12:40.377673 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:12:40.379817 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 8 00:12:40.381331 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 8 00:12:40.382994 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 8 00:12:40.385180 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 8 00:12:40.393818 systemd[1]: Switching root. May 8 00:12:40.422229 systemd-journald[238]: Journal stopped May 8 00:12:41.100785 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). 
May 8 00:12:41.100835 kernel: SELinux: policy capability network_peer_controls=1 May 8 00:12:41.100850 kernel: SELinux: policy capability open_perms=1 May 8 00:12:41.100859 kernel: SELinux: policy capability extended_socket_class=1 May 8 00:12:41.100869 kernel: SELinux: policy capability always_check_network=0 May 8 00:12:41.100878 kernel: SELinux: policy capability cgroup_seclabel=1 May 8 00:12:41.100888 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 8 00:12:41.100897 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 8 00:12:41.100906 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 8 00:12:41.100916 kernel: audit: type=1403 audit(1746663160.555:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 8 00:12:41.100927 systemd[1]: Successfully loaded SELinux policy in 31.073ms. May 8 00:12:41.100947 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.081ms. May 8 00:12:41.100959 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 8 00:12:41.100972 systemd[1]: Detected virtualization kvm. May 8 00:12:41.100983 systemd[1]: Detected architecture arm64. May 8 00:12:41.100993 systemd[1]: Detected first boot. May 8 00:12:41.101003 systemd[1]: Initializing machine ID from VM UUID. May 8 00:12:41.101014 zram_generator::config[1040]: No configuration found. May 8 00:12:41.101025 systemd[1]: Populated /etc with preset unit settings. May 8 00:12:41.101037 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 8 00:12:41.101048 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 8 00:12:41.101058 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 8 00:12:41.101069 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 8 00:12:41.101080 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 8 00:12:41.101091 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 8 00:12:41.101102 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 8 00:12:41.101112 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 8 00:12:41.101124 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 8 00:12:41.101135 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 8 00:12:41.101146 systemd[1]: Created slice user.slice - User and Session Slice. May 8 00:12:41.101156 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 00:12:41.101167 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 00:12:41.101177 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 8 00:12:41.101188 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 8 00:12:41.101199 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 8 00:12:41.101210 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
May 8 00:12:41.101222 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 8 00:12:41.101233 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:12:41.101254 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 8 00:12:41.101267 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 8 00:12:41.101299 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 8 00:12:41.101311 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 8 00:12:41.101322 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 00:12:41.101332 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 8 00:12:41.101345 systemd[1]: Reached target slices.target - Slice Units. May 8 00:12:41.101356 systemd[1]: Reached target swap.target - Swaps. May 8 00:12:41.101366 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 8 00:12:41.101376 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 8 00:12:41.101387 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 8 00:12:41.101397 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 8 00:12:41.101408 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 8 00:12:41.101418 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 8 00:12:41.101428 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 8 00:12:41.101440 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 8 00:12:41.101451 systemd[1]: Mounting media.mount - External Media Directory... May 8 00:12:41.101463 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 8 00:12:41.101473 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 8 00:12:41.101484 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 8 00:12:41.101494 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 8 00:12:41.101505 systemd[1]: Reached target machines.target - Containers. May 8 00:12:41.101515 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 8 00:12:41.101525 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 00:12:41.101538 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 8 00:12:41.101548 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 8 00:12:41.101558 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:12:41.101569 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 8 00:12:41.101579 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:12:41.101589 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 8 00:12:41.101599 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:12:41.101610 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
May 8 00:12:41.101621 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 8 00:12:41.101632 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 8 00:12:41.101642 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 8 00:12:41.101652 systemd[1]: Stopped systemd-fsck-usr.service. May 8 00:12:41.101662 systemd[1]: Starting systemd-journald.service - Journal Service... May 8 00:12:41.101672 kernel: fuse: init (API version 7.39) May 8 00:12:41.101683 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 8 00:12:41.101693 kernel: loop: module loaded May 8 00:12:41.101703 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 8 00:12:41.101714 kernel: ACPI: bus type drm_connector registered May 8 00:12:41.101724 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 8 00:12:41.101735 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 8 00:12:41.101745 systemd[1]: verity-setup.service: Deactivated successfully. May 8 00:12:41.101755 systemd[1]: Stopped verity-setup.service. May 8 00:12:41.101780 systemd-journald[1108]: Collecting audit messages is disabled. May 8 00:12:41.101806 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 8 00:12:41.101818 systemd-journald[1108]: Journal started May 8 00:12:41.101839 systemd-journald[1108]: Runtime Journal (/run/log/journal/ee26b1cdab8045b78fc2c239bf17c6b0) is 5.9M, max 47.3M, 41.4M free. May 8 00:12:40.909389 systemd[1]: Queued start job for default target multi-user.target. May 8 00:12:40.925205 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 8 00:12:40.925554 systemd[1]: systemd-journald.service: Deactivated successfully. May 8 00:12:41.105297 systemd[1]: Started systemd-journald.service - Journal Service. May 8 00:12:41.105777 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 8 00:12:41.107075 systemd[1]: Mounted media.mount - External Media Directory. May 8 00:12:41.108161 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 8 00:12:41.109406 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 8 00:12:41.110638 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 8 00:12:41.113314 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 8 00:12:41.114730 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 8 00:12:41.116219 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 8 00:12:41.116388 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 8 00:12:41.117756 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:12:41.117897 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:12:41.119372 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 00:12:41.119497 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 8 00:12:41.122607 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:12:41.122758 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:12:41.124184 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 8 00:12:41.124387 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
May 8 00:12:41.125666 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:12:41.125803 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:12:41.127414 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 8 00:12:41.128792 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 8 00:12:41.130318 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 8 00:12:41.142679 systemd[1]: Reached target network-pre.target - Preparation for Network. May 8 00:12:41.149388 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 8 00:12:41.151382 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 8 00:12:41.152463 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 8 00:12:41.152503 systemd[1]: Reached target local-fs.target - Local File Systems. May 8 00:12:41.154438 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 8 00:12:41.156561 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 8 00:12:41.158667 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 8 00:12:41.159796 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:12:41.161116 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 8 00:12:41.163783 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 8 00:12:41.164916 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:12:41.165821 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 8 00:12:41.166960 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 8 00:12:41.169951 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 8 00:12:41.176873 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 8 00:12:41.179874 systemd-journald[1108]: Time spent on flushing to /var/log/journal/ee26b1cdab8045b78fc2c239bf17c6b0 is 26.345ms for 853 entries. May 8 00:12:41.179874 systemd-journald[1108]: System Journal (/var/log/journal/ee26b1cdab8045b78fc2c239bf17c6b0) is 8.0M, max 195.6M, 187.6M free. May 8 00:12:41.224522 systemd-journald[1108]: Received client request to flush runtime journal. May 8 00:12:41.224571 kernel: loop0: detected capacity change from 0 to 114328 May 8 00:12:41.224593 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 8 00:12:41.179883 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 8 00:12:41.188454 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:12:41.189986 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 8 00:12:41.191358 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 8 00:12:41.192839 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 8 00:12:41.195579 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
May 8 00:12:41.200057 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 8 00:12:41.213656 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 8 00:12:41.217458 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 8 00:12:41.224733 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 8 00:12:41.226259 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 8 00:12:41.228401 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 8 00:12:41.245426 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 8 00:12:41.247270 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 8 00:12:41.247919 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 8 00:12:41.252159 udevadm[1164]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 8 00:12:41.255320 kernel: loop1: detected capacity change from 0 to 114432 May 8 00:12:41.265475 systemd-tmpfiles[1170]: ACLs are not supported, ignoring. May 8 00:12:41.265494 systemd-tmpfiles[1170]: ACLs are not supported, ignoring. May 8 00:12:41.269507 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 00:12:41.291304 kernel: loop2: detected capacity change from 0 to 189592 May 8 00:12:41.324534 kernel: loop3: detected capacity change from 0 to 114328 May 8 00:12:41.328296 kernel: loop4: detected capacity change from 0 to 114432 May 8 00:12:41.332403 kernel: loop5: detected capacity change from 0 to 189592 May 8 00:12:41.337162 (sd-merge)[1177]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 8 00:12:41.337559 (sd-merge)[1177]: Merged extensions into '/usr'. May 8 00:12:41.340948 systemd[1]: Reloading requested from client PID 1151 ('systemd-sysext') (unit systemd-sysext.service)... May 8 00:12:41.340962 systemd[1]: Reloading... May 8 00:12:41.385398 zram_generator::config[1199]: No configuration found. May 8 00:12:41.434870 ldconfig[1146]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 8 00:12:41.488316 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:12:41.523560 systemd[1]: Reloading finished in 182 ms. May 8 00:12:41.558387 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 8 00:12:41.561736 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 8 00:12:41.580785 systemd[1]: Starting ensure-sysext.service... May 8 00:12:41.583462 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 8 00:12:41.592782 systemd[1]: Reloading requested from client PID 1237 ('systemctl') (unit ensure-sysext.service)... May 8 00:12:41.592796 systemd[1]: Reloading... May 8 00:12:41.605590 systemd-tmpfiles[1238]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 8 00:12:41.605840 systemd-tmpfiles[1238]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
May 8 00:12:41.606468 systemd-tmpfiles[1238]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 8 00:12:41.606669 systemd-tmpfiles[1238]: ACLs are not supported, ignoring. May 8 00:12:41.606714 systemd-tmpfiles[1238]: ACLs are not supported, ignoring. May 8 00:12:41.609125 systemd-tmpfiles[1238]: Detected autofs mount point /boot during canonicalization of boot. May 8 00:12:41.609139 systemd-tmpfiles[1238]: Skipping /boot May 8 00:12:41.615883 systemd-tmpfiles[1238]: Detected autofs mount point /boot during canonicalization of boot. May 8 00:12:41.615901 systemd-tmpfiles[1238]: Skipping /boot May 8 00:12:41.635290 zram_generator::config[1265]: No configuration found. May 8 00:12:41.715697 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:12:41.751013 systemd[1]: Reloading finished in 157 ms. May 8 00:12:41.766316 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 8 00:12:41.774810 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 00:12:41.781028 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 8 00:12:41.783506 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 8 00:12:41.786451 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 8 00:12:41.789499 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 8 00:12:41.794456 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 00:12:41.797529 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 8 00:12:41.802994 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 00:12:41.806413 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:12:41.811578 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:12:41.818546 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:12:41.819697 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:12:41.825100 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 8 00:12:41.826600 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 8 00:12:41.828513 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:12:41.828633 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:12:41.830303 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:12:41.830425 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:12:41.837189 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 8 00:12:41.839045 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:12:41.839170 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:12:41.842453 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
May 8 00:12:41.845564 systemd-udevd[1307]: Using default interface naming scheme 'v255'. May 8 00:12:41.849588 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:12:41.853179 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:12:41.856355 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:12:41.859500 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 8 00:12:41.860567 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 8 00:12:41.861292 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 00:12:41.863048 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 8 00:12:41.867032 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:12:41.867271 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:12:41.870817 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:12:41.870957 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:12:41.873154 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 8 00:12:41.888704 systemd[1]: Finished ensure-sysext.service. May 8 00:12:41.901905 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 8 00:12:41.902170 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 00:12:41.902292 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1341) May 8 00:12:41.920679 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:12:41.923457 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 8 00:12:41.927381 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:12:41.930399 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:12:41.931543 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:12:41.935190 augenrules[1367]: No rules May 8 00:12:41.936545 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 8 00:12:41.941016 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 8 00:12:41.943350 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 8 00:12:41.943920 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 8 00:12:41.948785 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 8 00:12:41.950231 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:12:41.950452 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:12:41.957600 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 00:12:41.957759 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
May 8 00:12:41.959257 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:12:41.959427 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:12:41.961016 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:12:41.961159 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:12:41.974485 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 8 00:12:41.978745 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 8 00:12:41.982440 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:12:41.982525 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 8 00:12:41.999949 systemd-resolved[1306]: Positive Trust Anchors: May 8 00:12:41.999967 systemd-resolved[1306]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 8 00:12:42.000000 systemd-resolved[1306]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 8 00:12:42.007570 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 8 00:12:42.010198 systemd-resolved[1306]: Defaulting to hostname 'linux'. May 8 00:12:42.011912 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 8 00:12:42.013508 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 8 00:12:42.037494 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:12:42.038728 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 8 00:12:42.040127 systemd[1]: Reached target time-set.target - System Time Set. May 8 00:12:42.044111 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 8 00:12:42.047219 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 8 00:12:42.060201 systemd-networkd[1374]: lo: Link UP May 8 00:12:42.060209 systemd-networkd[1374]: lo: Gained carrier May 8 00:12:42.060891 systemd-networkd[1374]: Enumeration completed May 8 00:12:42.065285 systemd[1]: Started systemd-networkd.service - Network Configuration. May 8 00:12:42.066486 systemd[1]: Reached target network.target - Network. May 8 00:12:42.068511 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 00:12:42.068521 systemd-networkd[1374]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 8 00:12:42.075473 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
May 8 00:12:42.076648 systemd-networkd[1374]: eth0: Link UP May 8 00:12:42.076658 systemd-networkd[1374]: eth0: Gained carrier May 8 00:12:42.076685 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 00:12:42.089920 lvm[1395]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 00:12:42.102349 systemd-networkd[1374]: eth0: DHCPv4 address 10.0.0.14/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 8 00:12:42.104056 systemd-timesyncd[1376]: Network configuration changed, trying to establish connection. May 8 00:12:42.104650 systemd-timesyncd[1376]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 8 00:12:42.104714 systemd-timesyncd[1376]: Initial clock synchronization to Thu 2025-05-08 00:12:42.353841 UTC. May 8 00:12:42.106614 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:12:42.114770 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 8 00:12:42.116435 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 8 00:12:42.117634 systemd[1]: Reached target sysinit.target - System Initialization. May 8 00:12:42.118877 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 8 00:12:42.120221 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 8 00:12:42.121774 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 8 00:12:42.123158 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 8 00:12:42.124531 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 8 00:12:42.125811 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 8 00:12:42.125857 systemd[1]: Reached target paths.target - Path Units. May 8 00:12:42.126804 systemd[1]: Reached target timers.target - Timer Units. May 8 00:12:42.128562 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 8 00:12:42.131069 systemd[1]: Starting docker.socket - Docker Socket for the API... May 8 00:12:42.141324 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 8 00:12:42.143585 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 8 00:12:42.145125 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 8 00:12:42.146363 systemd[1]: Reached target sockets.target - Socket Units. May 8 00:12:42.147327 systemd[1]: Reached target basic.target - Basic System. May 8 00:12:42.148293 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 8 00:12:42.148324 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 8 00:12:42.149161 systemd[1]: Starting containerd.service - containerd container runtime... May 8 00:12:42.153543 lvm[1404]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 00:12:42.151139 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 8 00:12:42.153997 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 8 00:12:42.157453 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
May 8 00:12:42.161074 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 8 00:12:42.162087 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 8 00:12:42.163715 jq[1407]: false May 8 00:12:42.164163 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 8 00:12:42.168426 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 8 00:12:42.172439 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 8 00:12:42.176473 systemd[1]: Starting systemd-logind.service - User Login Management... May 8 00:12:42.183630 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 8 00:12:42.184393 dbus-daemon[1406]: [system] SELinux support is enabled May 8 00:12:42.184732 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 8 00:12:42.185488 systemd[1]: Starting update-engine.service - Update Engine... May 8 00:12:42.189486 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 8 00:12:42.189840 extend-filesystems[1408]: Found loop3 May 8 00:12:42.191357 extend-filesystems[1408]: Found loop4 May 8 00:12:42.191357 extend-filesystems[1408]: Found loop5 May 8 00:12:42.191357 extend-filesystems[1408]: Found vda May 8 00:12:42.191357 extend-filesystems[1408]: Found vda1 May 8 00:12:42.191357 extend-filesystems[1408]: Found vda2 May 8 00:12:42.191357 extend-filesystems[1408]: Found vda3 May 8 00:12:42.191357 extend-filesystems[1408]: Found usr May 8 00:12:42.191357 extend-filesystems[1408]: Found vda4 May 8 00:12:42.191357 extend-filesystems[1408]: Found vda6 May 8 00:12:42.191357 extend-filesystems[1408]: Found vda7 May 8 00:12:42.191357 extend-filesystems[1408]: Found vda9 May 8 00:12:42.191357 extend-filesystems[1408]: Checking size of /dev/vda9 May 8 00:12:42.191147 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 8 00:12:42.199265 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 8 00:12:42.213333 jq[1423]: true May 8 00:12:42.213631 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 8 00:12:42.213798 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 8 00:12:42.214131 systemd[1]: motdgen.service: Deactivated successfully. May 8 00:12:42.214316 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 8 00:12:42.216252 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 8 00:12:42.216669 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
May 8 00:12:42.221384 extend-filesystems[1408]: Resized partition /dev/vda9 May 8 00:12:42.222387 update_engine[1422]: I20250508 00:12:42.221629 1422 main.cc:92] Flatcar Update Engine starting May 8 00:12:42.226382 update_engine[1422]: I20250508 00:12:42.224746 1422 update_check_scheduler.cc:74] Next update check in 7m59s May 8 00:12:42.226445 extend-filesystems[1435]: resize2fs 1.47.1 (20-May-2024) May 8 00:12:42.228755 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 8 00:12:42.228787 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 8 00:12:42.233634 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 8 00:12:42.233691 tar[1430]: linux-arm64/helm May 8 00:12:42.231671 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 8 00:12:42.231695 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 8 00:12:42.233228 systemd[1]: Started update-engine.service - Update Engine. May 8 00:12:42.239815 (ntainerd)[1432]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 8 00:12:42.240450 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 8 00:12:42.245300 jq[1431]: true May 8 00:12:42.254549 systemd-logind[1416]: Watching system buttons on /dev/input/event0 (Power Button) May 8 00:12:42.255427 systemd-logind[1416]: New seat seat0. May 8 00:12:42.256775 systemd[1]: Started systemd-logind.service - User Login Management. May 8 00:12:42.261721 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1335) May 8 00:12:42.261767 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 8 00:12:42.275367 extend-filesystems[1435]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 8 00:12:42.275367 extend-filesystems[1435]: old_desc_blocks = 1, new_desc_blocks = 1 May 8 00:12:42.275367 extend-filesystems[1435]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 8 00:12:42.280775 extend-filesystems[1408]: Resized filesystem in /dev/vda9 May 8 00:12:42.276643 systemd[1]: extend-filesystems.service: Deactivated successfully. May 8 00:12:42.278319 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 8 00:12:42.320541 bash[1462]: Updated "/home/core/.ssh/authorized_keys" May 8 00:12:42.323116 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 8 00:12:42.325525 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 8 00:12:42.336054 locksmithd[1443]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 8 00:12:42.443895 containerd[1432]: time="2025-05-08T00:12:42.443804120Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 8 00:12:42.471552 containerd[1432]: time="2025-05-08T00:12:42.471381920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 May 8 00:12:42.473221 containerd[1432]: time="2025-05-08T00:12:42.473185280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 8 00:12:42.473848 containerd[1432]: time="2025-05-08T00:12:42.473412680Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 8 00:12:42.473848 containerd[1432]: time="2025-05-08T00:12:42.473442320Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 8 00:12:42.473848 containerd[1432]: time="2025-05-08T00:12:42.473604880Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 8 00:12:42.473848 containerd[1432]: time="2025-05-08T00:12:42.473624520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 8 00:12:42.473848 containerd[1432]: time="2025-05-08T00:12:42.473679920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:12:42.473848 containerd[1432]: time="2025-05-08T00:12:42.473692960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 8 00:12:42.473848 containerd[1432]: time="2025-05-08T00:12:42.473851880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:12:42.474013 containerd[1432]: time="2025-05-08T00:12:42.473867760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 8 00:12:42.474013 containerd[1432]: time="2025-05-08T00:12:42.473880440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:12:42.474013 containerd[1432]: time="2025-05-08T00:12:42.473889880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 8 00:12:42.474013 containerd[1432]: time="2025-05-08T00:12:42.473968680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 8 00:12:42.474183 containerd[1432]: time="2025-05-08T00:12:42.474148240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 8 00:12:42.474301 containerd[1432]: time="2025-05-08T00:12:42.474260480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:12:42.474329 containerd[1432]: time="2025-05-08T00:12:42.474301440Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 8 00:12:42.474401 containerd[1432]: time="2025-05-08T00:12:42.474386480Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 May 8 00:12:42.474444 containerd[1432]: time="2025-05-08T00:12:42.474432120Z" level=info msg="metadata content store policy set" policy=shared May 8 00:12:42.478175 containerd[1432]: time="2025-05-08T00:12:42.478071720Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 8 00:12:42.478175 containerd[1432]: time="2025-05-08T00:12:42.478125560Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 8 00:12:42.478175 containerd[1432]: time="2025-05-08T00:12:42.478141920Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 8 00:12:42.478175 containerd[1432]: time="2025-05-08T00:12:42.478164040Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 8 00:12:42.478175 containerd[1432]: time="2025-05-08T00:12:42.478179520Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 8 00:12:42.478458 containerd[1432]: time="2025-05-08T00:12:42.478349040Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 8 00:12:42.478871 containerd[1432]: time="2025-05-08T00:12:42.478840720Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 8 00:12:42.479049 containerd[1432]: time="2025-05-08T00:12:42.479028880Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 8 00:12:42.479081 containerd[1432]: time="2025-05-08T00:12:42.479054480Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 8 00:12:42.479081 containerd[1432]: time="2025-05-08T00:12:42.479069880Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 8 00:12:42.479119 containerd[1432]: time="2025-05-08T00:12:42.479083360Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 8 00:12:42.479119 containerd[1432]: time="2025-05-08T00:12:42.479097280Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 8 00:12:42.479119 containerd[1432]: time="2025-05-08T00:12:42.479110080Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 8 00:12:42.479174 containerd[1432]: time="2025-05-08T00:12:42.479124880Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 8 00:12:42.479174 containerd[1432]: time="2025-05-08T00:12:42.479138760Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 8 00:12:42.479174 containerd[1432]: time="2025-05-08T00:12:42.479150200Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 8 00:12:42.479174 containerd[1432]: time="2025-05-08T00:12:42.479161520Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 8 00:12:42.479248 containerd[1432]: time="2025-05-08T00:12:42.479173720Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 May 8 00:12:42.479248 containerd[1432]: time="2025-05-08T00:12:42.479199240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 8 00:12:42.479248 containerd[1432]: time="2025-05-08T00:12:42.479213440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 8 00:12:42.479248 containerd[1432]: time="2025-05-08T00:12:42.479224680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 8 00:12:42.479338 containerd[1432]: time="2025-05-08T00:12:42.479248120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 8 00:12:42.479338 containerd[1432]: time="2025-05-08T00:12:42.479261680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 8 00:12:42.479338 containerd[1432]: time="2025-05-08T00:12:42.479310440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 8 00:12:42.479338 containerd[1432]: time="2025-05-08T00:12:42.479324040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 8 00:12:42.479338 containerd[1432]: time="2025-05-08T00:12:42.479336520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 8 00:12:42.479435 containerd[1432]: time="2025-05-08T00:12:42.479348920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 8 00:12:42.479435 containerd[1432]: time="2025-05-08T00:12:42.479362960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 8 00:12:42.479435 containerd[1432]: time="2025-05-08T00:12:42.479374520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 8 00:12:42.479435 containerd[1432]: time="2025-05-08T00:12:42.479385560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 8 00:12:42.479435 containerd[1432]: time="2025-05-08T00:12:42.479397480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 8 00:12:42.479435 containerd[1432]: time="2025-05-08T00:12:42.479412760Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 8 00:12:42.479534 containerd[1432]: time="2025-05-08T00:12:42.479437960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 8 00:12:42.479534 containerd[1432]: time="2025-05-08T00:12:42.479450800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 8 00:12:42.479534 containerd[1432]: time="2025-05-08T00:12:42.479461560Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 8 00:12:42.480575 containerd[1432]: time="2025-05-08T00:12:42.480321120Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 8 00:12:42.480643 containerd[1432]: time="2025-05-08T00:12:42.480624920Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 8 00:12:42.480679 containerd[1432]: time="2025-05-08T00:12:42.480643680Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 8 00:12:42.480679 containerd[1432]: time="2025-05-08T00:12:42.480656720Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 8 00:12:42.480679 containerd[1432]: time="2025-05-08T00:12:42.480667720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 8 00:12:42.480730 containerd[1432]: time="2025-05-08T00:12:42.480682640Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 8 00:12:42.480754 containerd[1432]: time="2025-05-08T00:12:42.480733840Z" level=info msg="NRI interface is disabled by configuration." May 8 00:12:42.480754 containerd[1432]: time="2025-05-08T00:12:42.480747600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 8 00:12:42.481280 containerd[1432]: time="2025-05-08T00:12:42.481151040Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 8 00:12:42.481385 containerd[1432]: time="2025-05-08T00:12:42.481286000Z" level=info msg="Connect containerd service" May 8 00:12:42.481385 containerd[1432]: time="2025-05-08T00:12:42.481319720Z" level=info msg="using legacy CRI server" May 8 00:12:42.481385 containerd[1432]: time="2025-05-08T00:12:42.481326800Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 8 00:12:42.481436 containerd[1432]: time="2025-05-08T00:12:42.481401840Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 8 00:12:42.482381 containerd[1432]: time="2025-05-08T00:12:42.482348880Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 00:12:42.483003 containerd[1432]: time="2025-05-08T00:12:42.482593400Z" level=info msg="Start subscribing containerd event" May 8 00:12:42.483003 containerd[1432]: time="2025-05-08T00:12:42.482652680Z" level=info msg="Start recovering state" May 8 00:12:42.483003 containerd[1432]: time="2025-05-08T00:12:42.482713880Z" level=info msg="Start event monitor" May 8 00:12:42.483003 containerd[1432]: time="2025-05-08T00:12:42.482725560Z" level=info msg="Start snapshots syncer" May 8 00:12:42.483003 containerd[1432]: time="2025-05-08T00:12:42.482734120Z" level=info msg="Start cni network conf syncer for default" May 8 00:12:42.483003 containerd[1432]: time="2025-05-08T00:12:42.482741960Z" level=info msg="Start streaming server" May 8 00:12:42.483147 containerd[1432]: time="2025-05-08T00:12:42.483099640Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 8 00:12:42.483171 containerd[1432]: time="2025-05-08T00:12:42.483158360Z" level=info msg=serving... address=/run/containerd/containerd.sock May 8 00:12:42.483332 systemd[1]: Started containerd.service - containerd container runtime. May 8 00:12:42.485393 containerd[1432]: time="2025-05-08T00:12:42.485361680Z" level=info msg="containerd successfully booted in 0.044956s" May 8 00:12:42.595161 tar[1430]: linux-arm64/LICENSE May 8 00:12:42.595324 tar[1430]: linux-arm64/README.md May 8 00:12:42.607435 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 8 00:12:43.390316 sshd_keygen[1426]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 8 00:12:43.412406 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 8 00:12:43.426538 systemd[1]: Starting issuegen.service - Generate /run/issue... May 8 00:12:43.432256 systemd[1]: issuegen.service: Deactivated successfully. May 8 00:12:43.432499 systemd[1]: Finished issuegen.service - Generate /run/issue. May 8 00:12:43.435649 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 8 00:12:43.448334 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 8 00:12:43.464721 systemd[1]: Started getty@tty1.service - Getty on tty1. May 8 00:12:43.467229 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 8 00:12:43.468723 systemd[1]: Reached target getty.target - Login Prompts. 
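The containerd entries above show the daemon booting and then serving its API on /run/containerd/containerd.sock. As an illustration only (not part of the log), the official Go client can connect to that socket and query the daemon; the socket path and the "k8s.io" namespace used by the CRI plugin are taken from the entries above, everything else is an assumption.

```go
// Minimal sketch, assuming the containerd Go client module is available:
// connect to the socket from the log above and print the daemon version.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed resources live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	v, err := client.Version(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("containerd version:", v.Version)
}
```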
May 8 00:12:44.006522 systemd-networkd[1374]: eth0: Gained IPv6LL May 8 00:12:44.009431 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 8 00:12:44.011246 systemd[1]: Reached target network-online.target - Network is Online. May 8 00:12:44.024579 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 8 00:12:44.027272 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:12:44.029513 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 8 00:12:44.044610 systemd[1]: coreos-metadata.service: Deactivated successfully. May 8 00:12:44.044841 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 8 00:12:44.046500 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 8 00:12:44.050673 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 8 00:12:44.536175 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:12:44.537837 systemd[1]: Reached target multi-user.target - Multi-User System. May 8 00:12:44.540195 (kubelet)[1522]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:12:44.543622 systemd[1]: Startup finished in 618ms (kernel) + 5.846s (initrd) + 4.022s (userspace) = 10.488s. May 8 00:12:44.993285 kubelet[1522]: E0508 00:12:44.993154 1522 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:12:44.995910 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:12:44.996077 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:12:47.438197 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 8 00:12:47.439437 systemd[1]: Started sshd@0-10.0.0.14:22-10.0.0.1:39756.service - OpenSSH per-connection server daemon (10.0.0.1:39756). May 8 00:12:47.518937 sshd[1535]: Accepted publickey for core from 10.0.0.1 port 39756 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:12:47.519941 sshd[1535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:12:47.529566 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 8 00:12:47.538575 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 8 00:12:47.540449 systemd-logind[1416]: New session 1 of user core. May 8 00:12:47.548537 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 8 00:12:47.551206 systemd[1]: Starting user@500.service - User Manager for UID 500... May 8 00:12:47.558623 (systemd)[1539]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 8 00:12:47.639452 systemd[1539]: Queued start job for default target default.target. May 8 00:12:47.648288 systemd[1539]: Created slice app.slice - User Application Slice. May 8 00:12:47.648349 systemd[1539]: Reached target paths.target - Paths. May 8 00:12:47.648361 systemd[1539]: Reached target timers.target - Timers. May 8 00:12:47.649705 systemd[1539]: Starting dbus.socket - D-Bus User Message Bus Socket... 
May 8 00:12:47.660442 systemd[1539]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 8 00:12:47.660604 systemd[1539]: Reached target sockets.target - Sockets. May 8 00:12:47.660622 systemd[1539]: Reached target basic.target - Basic System. May 8 00:12:47.660661 systemd[1539]: Reached target default.target - Main User Target. May 8 00:12:47.660698 systemd[1539]: Startup finished in 96ms. May 8 00:12:47.660894 systemd[1]: Started user@500.service - User Manager for UID 500. May 8 00:12:47.662257 systemd[1]: Started session-1.scope - Session 1 of User core. May 8 00:12:47.727517 systemd[1]: Started sshd@1-10.0.0.14:22-10.0.0.1:39768.service - OpenSSH per-connection server daemon (10.0.0.1:39768). May 8 00:12:47.763298 sshd[1550]: Accepted publickey for core from 10.0.0.1 port 39768 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:12:47.764656 sshd[1550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:12:47.769510 systemd-logind[1416]: New session 2 of user core. May 8 00:12:47.779521 systemd[1]: Started session-2.scope - Session 2 of User core. May 8 00:12:47.832829 sshd[1550]: pam_unix(sshd:session): session closed for user core May 8 00:12:47.847892 systemd[1]: sshd@1-10.0.0.14:22-10.0.0.1:39768.service: Deactivated successfully. May 8 00:12:47.849505 systemd[1]: session-2.scope: Deactivated successfully. May 8 00:12:47.851463 systemd-logind[1416]: Session 2 logged out. Waiting for processes to exit. May 8 00:12:47.852200 systemd[1]: Started sshd@2-10.0.0.14:22-10.0.0.1:39776.service - OpenSSH per-connection server daemon (10.0.0.1:39776). May 8 00:12:47.852976 systemd-logind[1416]: Removed session 2. May 8 00:12:47.887475 sshd[1557]: Accepted publickey for core from 10.0.0.1 port 39776 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:12:47.888873 sshd[1557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:12:47.893024 systemd-logind[1416]: New session 3 of user core. May 8 00:12:47.904473 systemd[1]: Started session-3.scope - Session 3 of User core. May 8 00:12:47.954180 sshd[1557]: pam_unix(sshd:session): session closed for user core May 8 00:12:47.963141 systemd[1]: sshd@2-10.0.0.14:22-10.0.0.1:39776.service: Deactivated successfully. May 8 00:12:47.964864 systemd[1]: session-3.scope: Deactivated successfully. May 8 00:12:47.966616 systemd-logind[1416]: Session 3 logged out. Waiting for processes to exit. May 8 00:12:47.968451 systemd[1]: Started sshd@3-10.0.0.14:22-10.0.0.1:39780.service - OpenSSH per-connection server daemon (10.0.0.1:39780). May 8 00:12:47.969248 systemd-logind[1416]: Removed session 3. May 8 00:12:48.005545 sshd[1564]: Accepted publickey for core from 10.0.0.1 port 39780 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:12:48.006048 sshd[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:12:48.010133 systemd-logind[1416]: New session 4 of user core. May 8 00:12:48.017472 systemd[1]: Started session-4.scope - Session 4 of User core. May 8 00:12:48.071539 sshd[1564]: pam_unix(sshd:session): session closed for user core May 8 00:12:48.089907 systemd[1]: sshd@3-10.0.0.14:22-10.0.0.1:39780.service: Deactivated successfully. May 8 00:12:48.091703 systemd[1]: session-4.scope: Deactivated successfully. May 8 00:12:48.093231 systemd-logind[1416]: Session 4 logged out. Waiting for processes to exit. 
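The sshd entries above record repeated publickey logins for user core from 10.0.0.1. Purely for illustration, a client performing the same publickey authentication could look like the sketch below using golang.org/x/crypto/ssh; the key path and the disabled host-key check are assumptions for a lab setup, not something the log confirms.

```go
// Sketch: publickey SSH login as user "core", mirroring the sessions above.
// Assumes a private key at ~/.ssh/id_rsa; host-key checking is disabled here
// purely for the sketch. Pin the host key in any real deployment.
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"

	"golang.org/x/crypto/ssh"
)

func main() {
	home, _ := os.UserHomeDir()
	key, err := os.ReadFile(filepath.Join(home, ".ssh", "id_rsa"))
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "core",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	conn, err := ssh.Dial("tcp", "10.0.0.14:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	sess, err := conn.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	out, err := sess.CombinedOutput("hostname")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s", out)
}
```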
May 8 00:12:48.094791 systemd[1]: Started sshd@4-10.0.0.14:22-10.0.0.1:39796.service - OpenSSH per-connection server daemon (10.0.0.1:39796). May 8 00:12:48.095725 systemd-logind[1416]: Removed session 4. May 8 00:12:48.130513 sshd[1571]: Accepted publickey for core from 10.0.0.1 port 39796 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:12:48.131953 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:12:48.136349 systemd-logind[1416]: New session 5 of user core. May 8 00:12:48.147471 systemd[1]: Started session-5.scope - Session 5 of User core. May 8 00:12:48.208234 sudo[1574]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 8 00:12:48.210416 sudo[1574]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:12:48.228374 sudo[1574]: pam_unix(sudo:session): session closed for user root May 8 00:12:48.230274 sshd[1571]: pam_unix(sshd:session): session closed for user core May 8 00:12:48.241994 systemd[1]: sshd@4-10.0.0.14:22-10.0.0.1:39796.service: Deactivated successfully. May 8 00:12:48.244781 systemd[1]: session-5.scope: Deactivated successfully. May 8 00:12:48.246065 systemd-logind[1416]: Session 5 logged out. Waiting for processes to exit. May 8 00:12:48.256689 systemd[1]: Started sshd@5-10.0.0.14:22-10.0.0.1:39804.service - OpenSSH per-connection server daemon (10.0.0.1:39804). May 8 00:12:48.257803 systemd-logind[1416]: Removed session 5. May 8 00:12:48.288053 sshd[1579]: Accepted publickey for core from 10.0.0.1 port 39804 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:12:48.289553 sshd[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:12:48.293752 systemd-logind[1416]: New session 6 of user core. May 8 00:12:48.300506 systemd[1]: Started session-6.scope - Session 6 of User core. May 8 00:12:48.352535 sudo[1583]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 8 00:12:48.352826 sudo[1583]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:12:48.356114 sudo[1583]: pam_unix(sudo:session): session closed for user root May 8 00:12:48.361325 sudo[1582]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 8 00:12:48.361625 sudo[1582]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:12:48.376625 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 8 00:12:48.377930 auditctl[1586]: No rules May 8 00:12:48.378830 systemd[1]: audit-rules.service: Deactivated successfully. May 8 00:12:48.380334 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 8 00:12:48.382169 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 8 00:12:48.407191 augenrules[1604]: No rules May 8 00:12:48.408583 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 8 00:12:48.409771 sudo[1582]: pam_unix(sudo:session): session closed for user root May 8 00:12:48.411530 sshd[1579]: pam_unix(sshd:session): session closed for user core May 8 00:12:48.423982 systemd[1]: sshd@5-10.0.0.14:22-10.0.0.1:39804.service: Deactivated successfully. May 8 00:12:48.426579 systemd[1]: session-6.scope: Deactivated successfully. May 8 00:12:48.427934 systemd-logind[1416]: Session 6 logged out. Waiting for processes to exit. 
May 8 00:12:48.438666 systemd[1]: Started sshd@6-10.0.0.14:22-10.0.0.1:39810.service - OpenSSH per-connection server daemon (10.0.0.1:39810). May 8 00:12:48.439582 systemd-logind[1416]: Removed session 6. May 8 00:12:48.470492 sshd[1612]: Accepted publickey for core from 10.0.0.1 port 39810 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:12:48.471805 sshd[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:12:48.476057 systemd-logind[1416]: New session 7 of user core. May 8 00:12:48.485557 systemd[1]: Started session-7.scope - Session 7 of User core. May 8 00:12:48.537792 sudo[1615]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 8 00:12:48.539027 sudo[1615]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:12:48.843813 (dockerd)[1634]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 8 00:12:48.843830 systemd[1]: Starting docker.service - Docker Application Container Engine... May 8 00:12:49.104425 dockerd[1634]: time="2025-05-08T00:12:49.104045943Z" level=info msg="Starting up" May 8 00:12:49.287281 dockerd[1634]: time="2025-05-08T00:12:49.286981844Z" level=info msg="Loading containers: start." May 8 00:12:49.371324 kernel: Initializing XFRM netlink socket May 8 00:12:49.435642 systemd-networkd[1374]: docker0: Link UP May 8 00:12:49.454768 dockerd[1634]: time="2025-05-08T00:12:49.454707241Z" level=info msg="Loading containers: done." May 8 00:12:49.469212 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck800559742-merged.mount: Deactivated successfully. May 8 00:12:49.473351 dockerd[1634]: time="2025-05-08T00:12:49.473300399Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 8 00:12:49.473432 dockerd[1634]: time="2025-05-08T00:12:49.473412586Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 8 00:12:49.473569 dockerd[1634]: time="2025-05-08T00:12:49.473539780Z" level=info msg="Daemon has completed initialization" May 8 00:12:49.503212 dockerd[1634]: time="2025-05-08T00:12:49.503068725Z" level=info msg="API listen on /run/docker.sock" May 8 00:12:49.503349 systemd[1]: Started docker.service - Docker Application Container Engine. May 8 00:12:50.311415 containerd[1432]: time="2025-05-08T00:12:50.311348135Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 8 00:12:51.062673 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1030947684.mount: Deactivated successfully. 
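The dockerd entries above end with "API listen on /run/docker.sock" and docker.service starting. As a hedged sketch (not taken from this host), the official Docker Go client can query that daemon; it defaults to unix:///var/run/docker.sock, which resolves to /run/docker.sock on systems where /var/run is a symlink to /run.

```go
// Sketch: query the Docker daemon that just reported
// "API listen on /run/docker.sock", using the official Go client.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	info, err := cli.Info(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	// The log above reports version 26.1.0 and storage driver overlay2.
	fmt.Println(info.ServerVersion, info.Driver)
}
```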
May 8 00:12:52.756209 containerd[1432]: time="2025-05-08T00:12:52.756139753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:12:52.756732 containerd[1432]: time="2025-05-08T00:12:52.756687437Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=25554610" May 8 00:12:52.757464 containerd[1432]: time="2025-05-08T00:12:52.757429202Z" level=info msg="ImageCreate event name:\"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:12:52.760478 containerd[1432]: time="2025-05-08T00:12:52.760417499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:12:52.761896 containerd[1432]: time="2025-05-08T00:12:52.761742315Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"25551408\" in 2.450345385s" May 8 00:12:52.761896 containerd[1432]: time="2025-05-08T00:12:52.761784790Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\"" May 8 00:12:52.762530 containerd[1432]: time="2025-05-08T00:12:52.762481819Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 8 00:12:54.602703 containerd[1432]: time="2025-05-08T00:12:54.602648746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:12:54.603755 containerd[1432]: time="2025-05-08T00:12:54.603675055Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=22458980" May 8 00:12:54.604377 containerd[1432]: time="2025-05-08T00:12:54.604343898Z" level=info msg="ImageCreate event name:\"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:12:54.611775 containerd[1432]: time="2025-05-08T00:12:54.610691868Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:12:54.613925 containerd[1432]: time="2025-05-08T00:12:54.613876589Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"23900539\" in 1.851356039s" May 8 00:12:54.613925 containerd[1432]: time="2025-05-08T00:12:54.613926464Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\"" May 8 00:12:54.614384 containerd[1432]: 
time="2025-05-08T00:12:54.614362651Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" May 8 00:12:55.226915 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 8 00:12:55.239531 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:12:55.335814 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:12:55.340200 (kubelet)[1848]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:12:55.389228 kubelet[1848]: E0508 00:12:55.389170 1848 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:12:55.392488 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:12:55.392632 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:12:56.400049 containerd[1432]: time="2025-05-08T00:12:56.400000194Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:12:56.401085 containerd[1432]: time="2025-05-08T00:12:56.400998647Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=17125815" May 8 00:12:56.401756 containerd[1432]: time="2025-05-08T00:12:56.401722118Z" level=info msg="ImageCreate event name:\"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:12:56.405169 containerd[1432]: time="2025-05-08T00:12:56.405102220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:12:56.406523 containerd[1432]: time="2025-05-08T00:12:56.406486098Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"18567392\" in 1.792089187s" May 8 00:12:56.406610 containerd[1432]: time="2025-05-08T00:12:56.406528128Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\"" May 8 00:12:56.406982 containerd[1432]: time="2025-05-08T00:12:56.406931290Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 8 00:12:57.783749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3386674169.mount: Deactivated successfully. 
May 8 00:12:57.991771 containerd[1432]: time="2025-05-08T00:12:57.991719421Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:12:57.992658 containerd[1432]: time="2025-05-08T00:12:57.992445735Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=26871919" May 8 00:12:57.993563 containerd[1432]: time="2025-05-08T00:12:57.993522444Z" level=info msg="ImageCreate event name:\"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:12:57.995675 containerd[1432]: time="2025-05-08T00:12:57.995635992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:12:57.996587 containerd[1432]: time="2025-05-08T00:12:57.996412907Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"26870936\" in 1.589358432s" May 8 00:12:57.996587 containerd[1432]: time="2025-05-08T00:12:57.996448316Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\"" May 8 00:12:57.997263 containerd[1432]: time="2025-05-08T00:12:57.997241711Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 8 00:12:58.538354 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1554046901.mount: Deactivated successfully. 
May 8 00:12:59.665060 containerd[1432]: time="2025-05-08T00:12:59.664879786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:12:59.666015 containerd[1432]: time="2025-05-08T00:12:59.665760735Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" May 8 00:12:59.666838 containerd[1432]: time="2025-05-08T00:12:59.666771119Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:12:59.669810 containerd[1432]: time="2025-05-08T00:12:59.669767743Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:12:59.671134 containerd[1432]: time="2025-05-08T00:12:59.671030813Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.673755154s" May 8 00:12:59.671134 containerd[1432]: time="2025-05-08T00:12:59.671066343Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 8 00:12:59.671916 containerd[1432]: time="2025-05-08T00:12:59.671817055Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 8 00:13:00.173684 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2206654503.mount: Deactivated successfully. 
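Each pull above ends with a "Pulled image ... in <duration>" entry. A small stdlib-only sketch can tabulate those durations; it assumes the raw journal text (with escaped quotes, as in this dump) arrives on stdin.

```go
// Sketch: extract image pull durations from containerd journal output.
// Reads the raw log on stdin; matches entries like
//   msg="Pulled image \"registry.k8s.io/pause:3.10\" ... in 511.021839ms"
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"regexp"
)

var pulled = regexp.MustCompile(`Pulled image \\"([^"\\]+)\\".*?in ([0-9.]+(?:ms|s))"`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		for _, m := range pulled.FindAllStringSubmatch(sc.Text(), -1) {
			fmt.Printf("%-55s %s\n", m[1], m[2])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}
```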
May 8 00:13:00.178023 containerd[1432]: time="2025-05-08T00:13:00.177977022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:00.178713 containerd[1432]: time="2025-05-08T00:13:00.178644323Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" May 8 00:13:00.179512 containerd[1432]: time="2025-05-08T00:13:00.179445173Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:00.182112 containerd[1432]: time="2025-05-08T00:13:00.182060325Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:00.183058 containerd[1432]: time="2025-05-08T00:13:00.182870204Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 511.021839ms" May 8 00:13:00.183058 containerd[1432]: time="2025-05-08T00:13:00.182915870Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 8 00:13:00.183480 containerd[1432]: time="2025-05-08T00:13:00.183455923Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 8 00:13:00.665339 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3663765834.mount: Deactivated successfully. May 8 00:13:04.428290 containerd[1432]: time="2025-05-08T00:13:04.428207557Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:04.428732 containerd[1432]: time="2025-05-08T00:13:04.428684615Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467" May 8 00:13:04.429692 containerd[1432]: time="2025-05-08T00:13:04.429656606Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:04.433985 containerd[1432]: time="2025-05-08T00:13:04.433934984Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:04.434832 containerd[1432]: time="2025-05-08T00:13:04.434774806Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 4.251284947s" May 8 00:13:04.434832 containerd[1432]: time="2025-05-08T00:13:04.434810834Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" May 8 00:13:05.477019 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
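The kubelet above keeps exiting because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-provisioned node that file is only written once kubeadm init/join runs, so systemd retrying until then is expected. Purely to illustrate the file's format (the values below are assumptions, not taken from this host), a KubeletConfiguration can be rendered from the upstream Go API types:

```go
// Sketch: emit a minimal KubeletConfiguration like the one kubeadm would
// write to /var/lib/kubelet/config.yaml. All values here are illustrative
// assumptions; kubeadm derives the real ones from the cluster configuration.
package main

import (
	"fmt"
	"log"

	kubeletv1beta1 "k8s.io/kubelet/config/v1beta1"
	"sigs.k8s.io/yaml"
)

func main() {
	cfg := kubeletv1beta1.KubeletConfiguration{}
	cfg.APIVersion = "kubelet.config.k8s.io/v1beta1"
	cfg.Kind = "KubeletConfiguration"
	cfg.CgroupDriver = "systemd" // matches SystemdCgroup:true in the CRI config above

	out, err := yaml.Marshal(&cfg)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}
```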
May 8 00:13:05.486470 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:13:05.582721 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:13:05.586901 (kubelet)[2000]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:13:05.641741 kubelet[2000]: E0508 00:13:05.641671 2000 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:13:05.643759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:13:05.643883 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:13:08.613405 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:13:08.627577 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:13:08.648104 systemd[1]: Reloading requested from client PID 2015 ('systemctl') (unit session-7.scope)... May 8 00:13:08.648122 systemd[1]: Reloading... May 8 00:13:08.717313 zram_generator::config[2054]: No configuration found. May 8 00:13:08.836174 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:13:08.890785 systemd[1]: Reloading finished in 242 ms. May 8 00:13:08.931140 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 8 00:13:08.931205 systemd[1]: kubelet.service: Failed with result 'signal'. May 8 00:13:08.931438 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:13:08.933736 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:13:09.029435 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:13:09.036859 (kubelet)[2100]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 00:13:09.079634 kubelet[2100]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:13:09.079634 kubelet[2100]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 8 00:13:09.079634 kubelet[2100]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 8 00:13:09.079973 kubelet[2100]: I0508 00:13:09.079805 2100 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:13:09.872294 kubelet[2100]: I0508 00:13:09.872235 2100 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 8 00:13:09.872294 kubelet[2100]: I0508 00:13:09.872306 2100 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:13:09.873083 kubelet[2100]: I0508 00:13:09.872814 2100 server.go:929] "Client rotation is on, will bootstrap in background" May 8 00:13:09.925928 kubelet[2100]: E0508 00:13:09.925867 2100 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" May 8 00:13:09.926566 kubelet[2100]: I0508 00:13:09.926484 2100 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:13:09.937698 kubelet[2100]: E0508 00:13:09.937435 2100 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 8 00:13:09.937698 kubelet[2100]: I0508 00:13:09.937470 2100 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 8 00:13:09.941323 kubelet[2100]: I0508 00:13:09.941172 2100 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 8 00:13:09.942193 kubelet[2100]: I0508 00:13:09.942158 2100 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 8 00:13:09.942353 kubelet[2100]: I0508 00:13:09.942314 2100 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:13:09.942522 kubelet[2100]: I0508 00:13:09.942350 2100 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 8 00:13:09.942781 kubelet[2100]: I0508 00:13:09.942760 2100 topology_manager.go:138] "Creating topology manager with none policy" May 8 00:13:09.942781 kubelet[2100]: I0508 00:13:09.942773 2100 container_manager_linux.go:300] "Creating device plugin manager" May 8 00:13:09.943026 kubelet[2100]: I0508 00:13:09.943007 2100 state_mem.go:36] "Initialized new in-memory state store" May 8 00:13:09.953344 kubelet[2100]: I0508 00:13:09.953314 2100 kubelet.go:408] "Attempting to sync node with API server" May 8 00:13:09.953392 kubelet[2100]: I0508 00:13:09.953348 2100 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:13:09.953517 kubelet[2100]: I0508 00:13:09.953497 2100 kubelet.go:314] "Adding apiserver pod source" May 8 00:13:09.953517 kubelet[2100]: I0508 00:13:09.953512 2100 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:13:09.955412 kubelet[2100]: W0508 00:13:09.955366 2100 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused May 8 00:13:09.955458 kubelet[2100]: E0508 00:13:09.955427 2100 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" May 8 00:13:09.957559 kubelet[2100]: W0508 00:13:09.957419 2100 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused May 8 00:13:09.957559 kubelet[2100]: E0508 00:13:09.957501 2100 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" May 8 00:13:09.976294 kubelet[2100]: I0508 00:13:09.976249 2100 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 8 00:13:09.978191 kubelet[2100]: I0508 00:13:09.978164 2100 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:13:09.978996 kubelet[2100]: W0508 00:13:09.978970 2100 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 8 00:13:09.979861 kubelet[2100]: I0508 00:13:09.979835 2100 server.go:1269] "Started kubelet" May 8 00:13:09.980148 kubelet[2100]: I0508 00:13:09.980104 2100 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:13:09.985075 kubelet[2100]: I0508 00:13:09.981546 2100 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:13:09.985075 kubelet[2100]: I0508 00:13:09.981820 2100 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:13:09.985075 kubelet[2100]: I0508 00:13:09.981868 2100 server.go:460] "Adding debug handlers to kubelet server" May 8 00:13:09.985075 kubelet[2100]: I0508 00:13:09.982966 2100 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:13:09.987341 kubelet[2100]: I0508 00:13:09.985391 2100 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 8 00:13:09.987341 kubelet[2100]: I0508 00:13:09.986788 2100 volume_manager.go:289] "Starting Kubelet Volume Manager" May 8 00:13:09.987341 kubelet[2100]: I0508 00:13:09.986909 2100 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 8 00:13:09.987341 kubelet[2100]: E0508 00:13:09.986915 2100 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:13:09.987341 kubelet[2100]: I0508 00:13:09.986965 2100 reconciler.go:26] "Reconciler: start to sync state" May 8 00:13:09.987341 kubelet[2100]: E0508 00:13:09.987229 2100 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="200ms" May 8 00:13:09.987341 kubelet[2100]: W0508 00:13:09.987242 2100 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused May 8 00:13:09.987341 kubelet[2100]: E0508 00:13:09.987305 2100 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" May 8 00:13:09.988761 kubelet[2100]: I0508 00:13:09.988741 2100 factory.go:221] Registration of the systemd container factory successfully May 8 00:13:09.989314 kubelet[2100]: I0508 00:13:09.989294 2100 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:13:09.990446 kubelet[2100]: E0508 00:13:09.990423 2100 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:13:09.991154 kubelet[2100]: E0508 00:13:09.989817 2100 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183d64eac4ff3fa2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-08 00:13:09.97980765 +0000 UTC m=+0.938870756,LastTimestamp:2025-05-08 00:13:09.97980765 +0000 UTC m=+0.938870756,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 8 00:13:09.992294 kubelet[2100]: I0508 00:13:09.991420 2100 factory.go:221] Registration of the containerd container factory successfully May 8 00:13:09.997920 kubelet[2100]: I0508 00:13:09.997844 2100 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:13:09.999347 kubelet[2100]: I0508 00:13:09.998870 2100 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 8 00:13:09.999347 kubelet[2100]: I0508 00:13:09.998895 2100 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 00:13:09.999347 kubelet[2100]: I0508 00:13:09.998924 2100 kubelet.go:2321] "Starting kubelet main sync loop" May 8 00:13:10.000807 kubelet[2100]: E0508 00:13:10.000769 2100 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:13:10.004142 kubelet[2100]: W0508 00:13:10.004075 2100 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused May 8 00:13:10.004335 kubelet[2100]: E0508 00:13:10.004149 2100 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" May 8 00:13:10.004778 kubelet[2100]: I0508 00:13:10.004762 2100 cpu_manager.go:214] "Starting CPU manager" policy="none" May 8 00:13:10.004872 kubelet[2100]: I0508 00:13:10.004860 2100 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 00:13:10.005135 kubelet[2100]: I0508 00:13:10.004915 2100 state_mem.go:36] "Initialized new in-memory state store" May 8 00:13:10.087853 kubelet[2100]: E0508 00:13:10.087800 2100 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:13:10.101092 kubelet[2100]: E0508 00:13:10.101047 2100 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 8 00:13:10.142154 kubelet[2100]: I0508 00:13:10.141949 2100 policy_none.go:49] "None policy: Start" May 8 00:13:10.142920 kubelet[2100]: I0508 00:13:10.142886 2100 memory_manager.go:170] "Starting memorymanager" policy="None" May 8 00:13:10.142959 kubelet[2100]: I0508 00:13:10.142923 2100 state_mem.go:35] "Initializing new in-memory state store" May 8 00:13:10.172138 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 8 00:13:10.184717 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 8 00:13:10.187685 kubelet[2100]: E0508 00:13:10.187618 2100 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="400ms" May 8 00:13:10.187764 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 8 00:13:10.187966 kubelet[2100]: E0508 00:13:10.187899 2100 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:13:10.199966 kubelet[2100]: I0508 00:13:10.199933 2100 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:13:10.200617 kubelet[2100]: I0508 00:13:10.200149 2100 eviction_manager.go:189] "Eviction manager: starting control loop" May 8 00:13:10.200617 kubelet[2100]: I0508 00:13:10.200169 2100 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:13:10.200617 kubelet[2100]: I0508 00:13:10.200478 2100 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:13:10.202076 kubelet[2100]: E0508 00:13:10.202049 2100 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 8 00:13:10.301335 kubelet[2100]: I0508 00:13:10.301306 2100 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 8 00:13:10.301706 kubelet[2100]: E0508 00:13:10.301674 2100 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" May 8 00:13:10.309903 systemd[1]: Created slice kubepods-burstable-podda447344b7e6bac78cc2fdc7fed46f5c.slice - libcontainer container kubepods-burstable-podda447344b7e6bac78cc2fdc7fed46f5c.slice. May 8 00:13:10.334038 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice - libcontainer container kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice. May 8 00:13:10.346391 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice - libcontainer container kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice. 
May 8 00:13:10.389031 kubelet[2100]: I0508 00:13:10.388990 2100 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/da447344b7e6bac78cc2fdc7fed46f5c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"da447344b7e6bac78cc2fdc7fed46f5c\") " pod="kube-system/kube-apiserver-localhost"
May 8 00:13:10.389185 kubelet[2100]: I0508 00:13:10.389047 2100 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 8 00:13:10.389185 kubelet[2100]: I0508 00:13:10.389075 2100 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 8 00:13:10.389185 kubelet[2100]: I0508 00:13:10.389095 2100 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 8 00:13:10.389185 kubelet[2100]: I0508 00:13:10.389111 2100 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/da447344b7e6bac78cc2fdc7fed46f5c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"da447344b7e6bac78cc2fdc7fed46f5c\") " pod="kube-system/kube-apiserver-localhost"
May 8 00:13:10.389185 kubelet[2100]: I0508 00:13:10.389144 2100 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/da447344b7e6bac78cc2fdc7fed46f5c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"da447344b7e6bac78cc2fdc7fed46f5c\") " pod="kube-system/kube-apiserver-localhost"
May 8 00:13:10.389320 kubelet[2100]: I0508 00:13:10.389167 2100 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 8 00:13:10.389320 kubelet[2100]: I0508 00:13:10.389196 2100 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 8 00:13:10.389320 kubelet[2100]: I0508 00:13:10.389213 2100 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost"
May 8 00:13:10.502987 kubelet[2100]: I0508 00:13:10.502958 2100 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 8 00:13:10.503337 kubelet[2100]: E0508 00:13:10.503309 2100 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost"
May 8 00:13:10.588863 kubelet[2100]: E0508 00:13:10.588824 2100 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="800ms"
May 8 00:13:10.631270 kubelet[2100]: E0508 00:13:10.631246 2100 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:13:10.633663 containerd[1432]: time="2025-05-08T00:13:10.633619695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:da447344b7e6bac78cc2fdc7fed46f5c,Namespace:kube-system,Attempt:0,}"
May 8 00:13:10.645223 kubelet[2100]: E0508 00:13:10.644979 2100 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:13:10.649062 kubelet[2100]: E0508 00:13:10.648993 2100 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:13:10.649294 containerd[1432]: time="2025-05-08T00:13:10.649033538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}"
May 8 00:13:10.649362 containerd[1432]: time="2025-05-08T00:13:10.649318339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}"
May 8 00:13:10.850743 kubelet[2100]: W0508 00:13:10.849998 2100 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused
May 8 00:13:10.850743 kubelet[2100]: E0508 00:13:10.850065 2100 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError"
May 8 00:13:10.904822 kubelet[2100]: I0508 00:13:10.904791 2100 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 8 00:13:10.905137 kubelet[2100]: E0508 00:13:10.905111 2100 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost"
May 8 00:13:11.049473 kubelet[2100]: W0508 00:13:11.049411 2100 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused
May 8 00:13:11.049787 kubelet[2100]: E0508 00:13:11.049678 2100 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError"
May 8 00:13:11.264337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount968078024.mount: Deactivated successfully.
May 8 00:13:11.269980 containerd[1432]: time="2025-05-08T00:13:11.269928338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 8 00:13:11.271132 containerd[1432]: time="2025-05-08T00:13:11.271094041Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 8 00:13:11.272237 containerd[1432]: time="2025-05-08T00:13:11.272026612Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 8 00:13:11.272442 containerd[1432]: time="2025-05-08T00:13:11.272411256Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
May 8 00:13:11.273081 containerd[1432]: time="2025-05-08T00:13:11.273051891Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 8 00:13:11.273647 containerd[1432]: time="2025-05-08T00:13:11.273594533Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 8 00:13:11.274539 containerd[1432]: time="2025-05-08T00:13:11.274453609Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 8 00:13:11.277011 containerd[1432]: time="2025-05-08T00:13:11.276926040Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 8 00:13:11.279496 containerd[1432]: time="2025-05-08T00:13:11.279026195Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 629.815707ms"
May 8 00:13:11.282985 containerd[1432]: time="2025-05-08T00:13:11.282952622Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 633.556618ms"
May 8 00:13:11.283300 containerd[1432]: time="2025-05-08T00:13:11.282958426Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 649.251458ms"
May 8 00:13:11.393026 kubelet[2100]: E0508 00:13:11.389708 2100 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="1.6s"
May 8 00:13:11.433280 kubelet[2100]: W0508 00:13:11.433200 2100 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused
May 8 00:13:11.433404 kubelet[2100]: E0508 00:13:11.433292 2100 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError"
May 8 00:13:11.450706 containerd[1432]: time="2025-05-08T00:13:11.450627823Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:13:11.450706 containerd[1432]: time="2025-05-08T00:13:11.450671896Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:13:11.450706 containerd[1432]: time="2025-05-08T00:13:11.450682423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:13:11.450868 containerd[1432]: time="2025-05-08T00:13:11.450746831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:13:11.452014 containerd[1432]: time="2025-05-08T00:13:11.451861897Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:13:11.452014 containerd[1432]: time="2025-05-08T00:13:11.451916377Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:13:11.452014 containerd[1432]: time="2025-05-08T00:13:11.451932189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:13:11.452133 containerd[1432]: time="2025-05-08T00:13:11.452005643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:13:11.452133 containerd[1432]: time="2025-05-08T00:13:11.452071372Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:13:11.452246 containerd[1432]: time="2025-05-08T00:13:11.452202909Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:13:11.452413 containerd[1432]: time="2025-05-08T00:13:11.452386605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:13:11.452671 containerd[1432]: time="2025-05-08T00:13:11.452591877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:13:11.475448 systemd[1]: Started cri-containerd-287d17353e196100c52bb6e6549b8e31f03ce27ca4a17793a8dfbb1cecbe1ae2.scope - libcontainer container 287d17353e196100c52bb6e6549b8e31f03ce27ca4a17793a8dfbb1cecbe1ae2.
May 8 00:13:11.476788 systemd[1]: Started cri-containerd-9138c1d4d5224685aa7f27bf33643e2cc0ee5382cc4ead642c09ae63b5fb57f1.scope - libcontainer container 9138c1d4d5224685aa7f27bf33643e2cc0ee5382cc4ead642c09ae63b5fb57f1.
May 8 00:13:11.478226 systemd[1]: Started cri-containerd-c3f3799641afa5ca07339ddc8aef7067cd85be65b5be8282d505a31da68545b5.scope - libcontainer container c3f3799641afa5ca07339ddc8aef7067cd85be65b5be8282d505a31da68545b5.
May 8 00:13:11.507965 containerd[1432]: time="2025-05-08T00:13:11.507158483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"9138c1d4d5224685aa7f27bf33643e2cc0ee5382cc4ead642c09ae63b5fb57f1\""
May 8 00:13:11.509234 kubelet[2100]: E0508 00:13:11.509208 2100 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:13:11.512360 containerd[1432]: time="2025-05-08T00:13:11.512151060Z" level=info msg="CreateContainer within sandbox \"9138c1d4d5224685aa7f27bf33643e2cc0ee5382cc4ead642c09ae63b5fb57f1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 8 00:13:11.513260 containerd[1432]: time="2025-05-08T00:13:11.513130385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"c3f3799641afa5ca07339ddc8aef7067cd85be65b5be8282d505a31da68545b5\""
May 8 00:13:11.514088 containerd[1432]: time="2025-05-08T00:13:11.514058032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:da447344b7e6bac78cc2fdc7fed46f5c,Namespace:kube-system,Attempt:0,} returns sandbox id \"287d17353e196100c52bb6e6549b8e31f03ce27ca4a17793a8dfbb1cecbe1ae2\""
May 8 00:13:11.514620 kubelet[2100]: E0508 00:13:11.514443 2100 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:13:11.515557 kubelet[2100]: E0508 00:13:11.515505 2100 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:13:11.516574 containerd[1432]: time="2025-05-08T00:13:11.516503843Z" level=info msg="CreateContainer within sandbox \"c3f3799641afa5ca07339ddc8aef7067cd85be65b5be8282d505a31da68545b5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 8 00:13:11.517607 containerd[1432]: time="2025-05-08T00:13:11.517577198Z" level=info msg="CreateContainer within sandbox \"287d17353e196100c52bb6e6549b8e31f03ce27ca4a17793a8dfbb1cecbe1ae2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 8 00:13:11.538699 containerd[1432]: time="2025-05-08T00:13:11.538616857Z" level=info msg="CreateContainer within sandbox \"9138c1d4d5224685aa7f27bf33643e2cc0ee5382cc4ead642c09ae63b5fb57f1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"60ad56ec4758fd2500e14b4476701cec03aa3fb5a722650a98da432d943c273a\""
May 8 00:13:11.539495 containerd[1432]: time="2025-05-08T00:13:11.539464325Z" level=info msg="CreateContainer within sandbox \"c3f3799641afa5ca07339ddc8aef7067cd85be65b5be8282d505a31da68545b5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a8d45ba2916fc5c7dcdb8f4cfb390f7e754fde7958cd1a8683bbb3f1d2bb75ab\""
May 8 00:13:11.540102 containerd[1432]: time="2025-05-08T00:13:11.539691293Z" level=info msg="StartContainer for \"60ad56ec4758fd2500e14b4476701cec03aa3fb5a722650a98da432d943c273a\""
May 8 00:13:11.540102 containerd[1432]: time="2025-05-08T00:13:11.539892882Z" level=info msg="StartContainer for \"a8d45ba2916fc5c7dcdb8f4cfb390f7e754fde7958cd1a8683bbb3f1d2bb75ab\""
May 8 00:13:11.541219 containerd[1432]: time="2025-05-08T00:13:11.541181276Z" level=info msg="CreateContainer within sandbox \"287d17353e196100c52bb6e6549b8e31f03ce27ca4a17793a8dfbb1cecbe1ae2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"067d6bdf52d10460e7d0f6102428877e7dbe9623dfe64ad4e1ef97c365e844a3\""
May 8 00:13:11.542474 containerd[1432]: time="2025-05-08T00:13:11.542408545Z" level=info msg="StartContainer for \"067d6bdf52d10460e7d0f6102428877e7dbe9623dfe64ad4e1ef97c365e844a3\""
May 8 00:13:11.549502 kubelet[2100]: W0508 00:13:11.549444 2100 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused
May 8 00:13:11.549599 kubelet[2100]: E0508 00:13:11.549507 2100 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError"
May 8 00:13:11.567408 systemd[1]: Started cri-containerd-a8d45ba2916fc5c7dcdb8f4cfb390f7e754fde7958cd1a8683bbb3f1d2bb75ab.scope - libcontainer container a8d45ba2916fc5c7dcdb8f4cfb390f7e754fde7958cd1a8683bbb3f1d2bb75ab.
May 8 00:13:11.571027 systemd[1]: Started cri-containerd-067d6bdf52d10460e7d0f6102428877e7dbe9623dfe64ad4e1ef97c365e844a3.scope - libcontainer container 067d6bdf52d10460e7d0f6102428877e7dbe9623dfe64ad4e1ef97c365e844a3.
May 8 00:13:11.571897 systemd[1]: Started cri-containerd-60ad56ec4758fd2500e14b4476701cec03aa3fb5a722650a98da432d943c273a.scope - libcontainer container 60ad56ec4758fd2500e14b4476701cec03aa3fb5a722650a98da432d943c273a.
May 8 00:13:11.621387 containerd[1432]: time="2025-05-08T00:13:11.619191722Z" level=info msg="StartContainer for \"60ad56ec4758fd2500e14b4476701cec03aa3fb5a722650a98da432d943c273a\" returns successfully"
May 8 00:13:11.621387 containerd[1432]: time="2025-05-08T00:13:11.619349399Z" level=info msg="StartContainer for \"a8d45ba2916fc5c7dcdb8f4cfb390f7e754fde7958cd1a8683bbb3f1d2bb75ab\" returns successfully"
May 8 00:13:11.633760 containerd[1432]: time="2025-05-08T00:13:11.633463290Z" level=info msg="StartContainer for \"067d6bdf52d10460e7d0f6102428877e7dbe9623dfe64ad4e1ef97c365e844a3\" returns successfully"
May 8 00:13:11.706447 kubelet[2100]: I0508 00:13:11.706347 2100 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 8 00:13:11.706809 kubelet[2100]: E0508 00:13:11.706781 2100 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost"
May 8 00:13:12.010280 kubelet[2100]: E0508 00:13:12.010239 2100 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:13:12.013098 kubelet[2100]: E0508 00:13:12.013075 2100 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:13:12.013526 kubelet[2100]: E0508 00:13:12.013469 2100 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:13:13.018657 kubelet[2100]: E0508 00:13:13.018624 2100 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:13:13.308441 kubelet[2100]: I0508 00:13:13.308341 2100 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 8 00:13:13.867074 kubelet[2100]: E0508 00:13:13.867032 2100 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
May 8 00:13:13.924697 kubelet[2100]: I0508 00:13:13.924636 2100 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
May 8 00:13:13.924697 kubelet[2100]: E0508 00:13:13.924676 2100 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
May 8 00:13:13.957156 kubelet[2100]: I0508 00:13:13.957127 2100 apiserver.go:52] "Watching apiserver"
May 8 00:13:13.987086 kubelet[2100]: I0508 00:13:13.987000 2100 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
May 8 00:13:15.991713 systemd[1]: Reloading requested from client PID 2376 ('systemctl') (unit session-7.scope)...
May 8 00:13:15.991728 systemd[1]: Reloading...
May 8 00:13:16.062304 zram_generator::config[2418]: No configuration found.
May 8 00:13:16.109377 kubelet[2100]: E0508 00:13:16.109266 2100 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:13:16.148993 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 00:13:16.212810 systemd[1]: Reloading finished in 220 ms.
May 8 00:13:16.246106 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:13:16.267626 systemd[1]: kubelet.service: Deactivated successfully.
May 8 00:13:16.267822 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:13:16.267865 systemd[1]: kubelet.service: Consumed 1.284s CPU time, 120.9M memory peak, 0B memory swap peak.
May 8 00:13:16.276598 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:13:16.363611 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:13:16.367980 (kubelet)[2457]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 8 00:13:16.404049 kubelet[2457]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 8 00:13:16.404395 kubelet[2457]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 8 00:13:16.404435 kubelet[2457]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 8 00:13:16.404574 kubelet[2457]: I0508 00:13:16.404539 2457 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 8 00:13:16.409502 kubelet[2457]: I0508 00:13:16.409475 2457 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
May 8 00:13:16.409612 kubelet[2457]: I0508 00:13:16.409603 2457 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 8 00:13:16.409863 kubelet[2457]: I0508 00:13:16.409834 2457 server.go:929] "Client rotation is on, will bootstrap in background"
May 8 00:13:16.411367 kubelet[2457]: I0508 00:13:16.411343 2457 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 8 00:13:16.414534 kubelet[2457]: I0508 00:13:16.414510 2457 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 8 00:13:16.420918 kubelet[2457]: E0508 00:13:16.420886 2457 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 8 00:13:16.421115 kubelet[2457]: I0508 00:13:16.421101 2457 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 8 00:13:16.423813 kubelet[2457]: I0508 00:13:16.423761 2457 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 8 00:13:16.423898 kubelet[2457]: I0508 00:13:16.423884 2457 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 8 00:13:16.423994 kubelet[2457]: I0508 00:13:16.423971 2457 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 8 00:13:16.424144 kubelet[2457]: I0508 00:13:16.423995 2457 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 8 00:13:16.424224 kubelet[2457]: I0508 00:13:16.424154 2457 topology_manager.go:138] "Creating topology manager with none policy"
May 8 00:13:16.424224 kubelet[2457]: I0508 00:13:16.424163 2457 container_manager_linux.go:300] "Creating device plugin manager"
May 8 00:13:16.424224 kubelet[2457]: I0508 00:13:16.424191 2457 state_mem.go:36] "Initialized new in-memory state store"
May 8 00:13:16.424531 kubelet[2457]: I0508 00:13:16.424517 2457 kubelet.go:408] "Attempting to sync node with API server"
May 8 00:13:16.424577 kubelet[2457]: I0508 00:13:16.424537 2457 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 8 00:13:16.424577 kubelet[2457]: I0508 00:13:16.424559 2457 kubelet.go:314] "Adding apiserver pod source"
May 8 00:13:16.424577 kubelet[2457]: I0508 00:13:16.424568 2457 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 8 00:13:16.425159 kubelet[2457]: I0508 00:13:16.425120 2457 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
May 8 00:13:16.426054 kubelet[2457]: I0508 00:13:16.426032 2457 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 8 00:13:16.426666 kubelet[2457]: I0508 00:13:16.426645 2457 server.go:1269] "Started kubelet"
May 8 00:13:16.427611 kubelet[2457]: I0508 00:13:16.427483 2457 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 8 00:13:16.428418 kubelet[2457]: I0508 00:13:16.428335 2457 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 8 00:13:16.428539 kubelet[2457]: I0508 00:13:16.428496 2457 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 8 00:13:16.428586 kubelet[2457]: I0508 00:13:16.428538 2457 server.go:460] "Adding debug handlers to kubelet server"
May 8 00:13:16.429656 kubelet[2457]: I0508 00:13:16.429621 2457 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 8 00:13:16.434423 kubelet[2457]: I0508 00:13:16.430954 2457 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 8 00:13:16.437784 kubelet[2457]: I0508 00:13:16.437757 2457 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 8 00:13:16.438104 kubelet[2457]: I0508 00:13:16.437939 2457 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
May 8 00:13:16.438700 kubelet[2457]: I0508 00:13:16.438271 2457 reconciler.go:26] "Reconciler: start to sync state"
May 8 00:13:16.440091 kubelet[2457]: E0508 00:13:16.440019 2457 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 8 00:13:16.441416 kubelet[2457]: I0508 00:13:16.441389 2457 factory.go:221] Registration of the systemd container factory successfully
May 8 00:13:16.442162 kubelet[2457]: I0508 00:13:16.442136 2457 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 8 00:13:16.444881 kubelet[2457]: I0508 00:13:16.444320 2457 factory.go:221] Registration of the containerd container factory successfully
May 8 00:13:16.451789 kubelet[2457]: E0508 00:13:16.451747 2457 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 8 00:13:16.454664 kubelet[2457]: I0508 00:13:16.454341 2457 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 8 00:13:16.455627 kubelet[2457]: I0508 00:13:16.455603 2457 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 8 00:13:16.455627 kubelet[2457]: I0508 00:13:16.455628 2457 status_manager.go:217] "Starting to sync pod status with apiserver"
May 8 00:13:16.455741 kubelet[2457]: I0508 00:13:16.455643 2457 kubelet.go:2321] "Starting kubelet main sync loop"
May 8 00:13:16.455741 kubelet[2457]: E0508 00:13:16.455685 2457 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 8 00:13:16.479008 kubelet[2457]: I0508 00:13:16.478977 2457 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 8 00:13:16.479008 kubelet[2457]: I0508 00:13:16.479001 2457 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 8 00:13:16.479148 kubelet[2457]: I0508 00:13:16.479021 2457 state_mem.go:36] "Initialized new in-memory state store"
May 8 00:13:16.479181 kubelet[2457]: I0508 00:13:16.479158 2457 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 8 00:13:16.479181 kubelet[2457]: I0508 00:13:16.479168 2457 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 8 00:13:16.479234 kubelet[2457]: I0508 00:13:16.479184 2457 policy_none.go:49] "None policy: Start"
May 8 00:13:16.479775 kubelet[2457]: I0508 00:13:16.479756 2457 memory_manager.go:170] "Starting memorymanager" policy="None"
May 8 00:13:16.479829 kubelet[2457]: I0508 00:13:16.479822 2457 state_mem.go:35] "Initializing new in-memory state store"
May 8 00:13:16.480016 kubelet[2457]: I0508 00:13:16.480001 2457 state_mem.go:75] "Updated machine memory state"
May 8 00:13:16.483806 kubelet[2457]: I0508 00:13:16.483723 2457 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 8 00:13:16.484070 kubelet[2457]: I0508 00:13:16.483870 2457 eviction_manager.go:189] "Eviction manager: starting control loop"
May 8 00:13:16.484070 kubelet[2457]: I0508 00:13:16.483889 2457 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 8 00:13:16.484070 kubelet[2457]: I0508 00:13:16.484057 2457 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 8 00:13:16.562363 kubelet[2457]: E0508 00:13:16.562250 2457 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
May 8 00:13:16.587827 kubelet[2457]: I0508 00:13:16.587803 2457 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 8 00:13:16.594386 kubelet[2457]: I0508 00:13:16.594345 2457 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
May 8 00:13:16.594509 kubelet[2457]: I0508 00:13:16.594426 2457 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
May 8 00:13:16.638827 kubelet[2457]: I0508 00:13:16.638790 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 8 00:13:16.638827 kubelet[2457]: I0508 00:13:16.638826 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/da447344b7e6bac78cc2fdc7fed46f5c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"da447344b7e6bac78cc2fdc7fed46f5c\") " pod="kube-system/kube-apiserver-localhost"
May 8 00:13:16.638962 kubelet[2457]: I0508 00:13:16.638844 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/da447344b7e6bac78cc2fdc7fed46f5c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"da447344b7e6bac78cc2fdc7fed46f5c\") " pod="kube-system/kube-apiserver-localhost"
May 8 00:13:16.638962 kubelet[2457]: I0508 00:13:16.638860 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/da447344b7e6bac78cc2fdc7fed46f5c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"da447344b7e6bac78cc2fdc7fed46f5c\") " pod="kube-system/kube-apiserver-localhost"
May 8 00:13:16.638962 kubelet[2457]: I0508 00:13:16.638883 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 8 00:13:16.638962 kubelet[2457]: I0508 00:13:16.638899 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 8 00:13:16.638962 kubelet[2457]: I0508 00:13:16.638936 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 8 00:13:16.639143 kubelet[2457]: I0508 00:13:16.638988 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 8 00:13:16.639143 kubelet[2457]: I0508 00:13:16.639023 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost"
May 8 00:13:16.861804 kubelet[2457]: E0508 00:13:16.861646 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:13:16.862353 kubelet[2457]: E0508 00:13:16.862327 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:13:16.862507 kubelet[2457]: E0508 00:13:16.862446 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:13:17.425628 kubelet[2457]: I0508 00:13:17.425594 2457 apiserver.go:52] "Watching apiserver"
May 8 00:13:17.438642 kubelet[2457]: I0508 00:13:17.438604 2457 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
May 8 00:13:17.464576 kubelet[2457]: E0508 00:13:17.464437 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:13:17.464740 kubelet[2457]: E0508 00:13:17.464672 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:13:17.472238 kubelet[2457]: E0508 00:13:17.472189 2457 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
May 8 00:13:17.472529 kubelet[2457]: E0508 00:13:17.472397 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:13:17.487392 kubelet[2457]: I0508 00:13:17.487308 2457 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.487293661 podStartE2EDuration="1.487293661s" podCreationTimestamp="2025-05-08 00:13:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:13:17.486713617 +0000 UTC m=+1.115542603" watchObservedRunningTime="2025-05-08 00:13:17.487293661 +0000 UTC m=+1.116122607"
May 8 00:13:17.502140 kubelet[2457]: I0508 00:13:17.502090 2457 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.502077168 podStartE2EDuration="1.502077168s" podCreationTimestamp="2025-05-08 00:13:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:13:17.494409059 +0000 UTC m=+1.123238085" watchObservedRunningTime="2025-05-08 00:13:17.502077168 +0000 UTC m=+1.130906114"
May 8 00:13:17.520305 kubelet[2457]: I0508 00:13:17.519362 2457 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.519328841 podStartE2EDuration="1.519328841s" podCreationTimestamp="2025-05-08 00:13:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:13:17.502014737 +0000 UTC m=+1.130843643" watchObservedRunningTime="2025-05-08 00:13:17.519328841 +0000 UTC m=+1.148157787"
May 8 00:13:18.467059 kubelet[2457]: E0508 00:13:18.467011 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:13:19.468927 kubelet[2457]: E0508 00:13:19.468896 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:13:20.301521 kubelet[2457]: E0508 00:13:20.301492 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:13:20.796600 kubelet[2457]: I0508 00:13:20.796435 2457 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 8 00:13:20.797034 containerd[1432]: time="2025-05-08T00:13:20.796710666Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 8 00:13:20.797806 kubelet[2457]: I0508 00:13:20.797447 2457 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 8 00:13:21.135812 sudo[1615]: pam_unix(sudo:session): session closed for user root
May 8 00:13:21.137341 sshd[1612]: pam_unix(sshd:session): session closed for user core
May 8 00:13:21.140668 systemd[1]: sshd@6-10.0.0.14:22-10.0.0.1:39810.service: Deactivated successfully.
May 8 00:13:21.142259 systemd[1]: session-7.scope: Deactivated successfully.
May 8 00:13:21.142453 systemd[1]: session-7.scope: Consumed 5.850s CPU time, 153.1M memory peak, 0B memory swap peak.
May 8 00:13:21.142838 systemd-logind[1416]: Session 7 logged out. Waiting for processes to exit.
May 8 00:13:21.143719 systemd-logind[1416]: Removed session 7.
May 8 00:13:21.340643 kubelet[2457]: E0508 00:13:21.340611 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:13:21.471795 kubelet[2457]: E0508 00:13:21.471651 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:13:21.699816 systemd[1]: Created slice kubepods-besteffort-pod307cd881_b6cc_4be4_a068_60ea1db375be.slice - libcontainer container kubepods-besteffort-pod307cd881_b6cc_4be4_a068_60ea1db375be.slice.
May 8 00:13:21.813399 systemd[1]: Created slice kubepods-besteffort-podaa258238_4f76_4238_a879_3ba7de6fd91e.slice - libcontainer container kubepods-besteffort-podaa258238_4f76_4238_a879_3ba7de6fd91e.slice.
May 8 00:13:21.864610 kubelet[2457]: I0508 00:13:21.864561 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/307cd881-b6cc-4be4-a068-60ea1db375be-kube-proxy\") pod \"kube-proxy-cq2pg\" (UID: \"307cd881-b6cc-4be4-a068-60ea1db375be\") " pod="kube-system/kube-proxy-cq2pg"
May 8 00:13:21.864610 kubelet[2457]: I0508 00:13:21.864613 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/307cd881-b6cc-4be4-a068-60ea1db375be-xtables-lock\") pod \"kube-proxy-cq2pg\" (UID: \"307cd881-b6cc-4be4-a068-60ea1db375be\") " pod="kube-system/kube-proxy-cq2pg"
May 8 00:13:21.864610 kubelet[2457]: I0508 00:13:21.864634 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/307cd881-b6cc-4be4-a068-60ea1db375be-lib-modules\") pod \"kube-proxy-cq2pg\" (UID: \"307cd881-b6cc-4be4-a068-60ea1db375be\") " pod="kube-system/kube-proxy-cq2pg"
May 8 00:13:21.864610 kubelet[2457]: I0508 00:13:21.864652 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xprh2\" (UniqueName: \"kubernetes.io/projected/307cd881-b6cc-4be4-a068-60ea1db375be-kube-api-access-xprh2\") pod \"kube-proxy-cq2pg\" (UID: \"307cd881-b6cc-4be4-a068-60ea1db375be\") " pod="kube-system/kube-proxy-cq2pg"
May 8 00:13:21.965006 kubelet[2457]: I0508 00:13:21.964949 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlq8p\" (UniqueName: \"kubernetes.io/projected/aa258238-4f76-4238-a879-3ba7de6fd91e-kube-api-access-hlq8p\") pod \"tigera-operator-6f6897fdc5-btcmq\" (UID: \"aa258238-4f76-4238-a879-3ba7de6fd91e\") " pod="tigera-operator/tigera-operator-6f6897fdc5-btcmq"
May 8 00:13:21.965120 kubelet[2457]: I0508 00:13:21.965044 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/aa258238-4f76-4238-a879-3ba7de6fd91e-var-lib-calico\") pod \"tigera-operator-6f6897fdc5-btcmq\" (UID: \"aa258238-4f76-4238-a879-3ba7de6fd91e\") " pod="tigera-operator/tigera-operator-6f6897fdc5-btcmq"
May 8 00:13:22.015911 kubelet[2457]: E0508 00:13:22.015872 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:13:22.016663 containerd[1432]: time="2025-05-08T00:13:22.016579059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cq2pg,Uid:307cd881-b6cc-4be4-a068-60ea1db375be,Namespace:kube-system,Attempt:0,}"
May 8 00:13:22.035036 containerd[1432]: time="2025-05-08T00:13:22.034651041Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:13:22.035036 containerd[1432]: time="2025-05-08T00:13:22.035009294Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:13:22.035036 containerd[1432]: time="2025-05-08T00:13:22.035022258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:13:22.035192 containerd[1432]: time="2025-05-08T00:13:22.035100768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:13:22.060456 systemd[1]: Started cri-containerd-50923a56fb65d18bd95dfd4eced60970d04720132e78bcae28c61fba60adf5d8.scope - libcontainer container 50923a56fb65d18bd95dfd4eced60970d04720132e78bcae28c61fba60adf5d8.
May 8 00:13:22.079525 containerd[1432]: time="2025-05-08T00:13:22.079427246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cq2pg,Uid:307cd881-b6cc-4be4-a068-60ea1db375be,Namespace:kube-system,Attempt:0,} returns sandbox id \"50923a56fb65d18bd95dfd4eced60970d04720132e78bcae28c61fba60adf5d8\""
May 8 00:13:22.080374 kubelet[2457]: E0508 00:13:22.080334 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:13:22.084073 containerd[1432]: time="2025-05-08T00:13:22.083999662Z" level=info msg="CreateContainer within sandbox \"50923a56fb65d18bd95dfd4eced60970d04720132e78bcae28c61fba60adf5d8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 8 00:13:22.100964 containerd[1432]: time="2025-05-08T00:13:22.100917857Z" level=info msg="CreateContainer within sandbox \"50923a56fb65d18bd95dfd4eced60970d04720132e78bcae28c61fba60adf5d8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6744c0a786637d054ebdca35969db750074cde73a8757b132bd9dbeebb35b48b\""
May 8 00:13:22.101734 containerd[1432]: time="2025-05-08T00:13:22.101668455Z" level=info msg="StartContainer for \"6744c0a786637d054ebdca35969db750074cde73a8757b132bd9dbeebb35b48b\""
May 8 00:13:22.116094 containerd[1432]: time="2025-05-08T00:13:22.116036784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-btcmq,Uid:aa258238-4f76-4238-a879-3ba7de6fd91e,Namespace:tigera-operator,Attempt:0,}"
May 8 00:13:22.133446 systemd[1]: Started cri-containerd-6744c0a786637d054ebdca35969db750074cde73a8757b132bd9dbeebb35b48b.scope - libcontainer container 6744c0a786637d054ebdca35969db750074cde73a8757b132bd9dbeebb35b48b.
May 8 00:13:22.136030 containerd[1432]: time="2025-05-08T00:13:22.135871700Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:13:22.136030 containerd[1432]: time="2025-05-08T00:13:22.135949889Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:13:22.136030 containerd[1432]: time="2025-05-08T00:13:22.135967615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:13:22.136179 containerd[1432]: time="2025-05-08T00:13:22.136045324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:13:22.154432 systemd[1]: Started cri-containerd-c90b82f45f1b3d3bce08affd92cfc86670630381f2a238b5a0410c9eef221a30.scope - libcontainer container c90b82f45f1b3d3bce08affd92cfc86670630381f2a238b5a0410c9eef221a30.
May 8 00:13:22.158515 containerd[1432]: time="2025-05-08T00:13:22.158465799Z" level=info msg="StartContainer for \"6744c0a786637d054ebdca35969db750074cde73a8757b132bd9dbeebb35b48b\" returns successfully"
May 8 00:13:22.188994 containerd[1432]: time="2025-05-08T00:13:22.188958307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-btcmq,Uid:aa258238-4f76-4238-a879-3ba7de6fd91e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c90b82f45f1b3d3bce08affd92cfc86670630381f2a238b5a0410c9eef221a30\""
May 8 00:13:22.190535 containerd[1432]: time="2025-05-08T00:13:22.190505241Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\""
May 8 00:13:22.475783 kubelet[2457]: E0508 00:13:22.475265 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:13:23.590610 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3231186381.mount: Deactivated successfully.
May 8 00:13:24.453207 containerd[1432]: time="2025-05-08T00:13:24.453157566Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:13:24.454048 containerd[1432]: time="2025-05-08T00:13:24.453958313Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084"
May 8 00:13:24.454909 containerd[1432]: time="2025-05-08T00:13:24.454847530Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:13:24.456902 containerd[1432]: time="2025-05-08T00:13:24.456874965Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:13:24.458551 containerd[1432]: time="2025-05-08T00:13:24.458515272Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 2.267973938s"
May 8 00:13:24.458551 containerd[1432]: time="2025-05-08T00:13:24.458549684Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\""
May 8 00:13:24.460459 containerd[1432]: time="2025-05-08T00:13:24.460430070Z" level=info msg="CreateContainer within sandbox \"c90b82f45f1b3d3bce08affd92cfc86670630381f2a238b5a0410c9eef221a30\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
May 8 00:13:24.469806 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2069384370.mount: Deactivated successfully.
May 8 00:13:24.470058 containerd[1432]: time="2025-05-08T00:13:24.470027309Z" level=info msg="CreateContainer within sandbox \"c90b82f45f1b3d3bce08affd92cfc86670630381f2a238b5a0410c9eef221a30\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"65ecc7cf572683b5da9f2939cddb080ae4d7cb7c31606b1f75abda15e5d85de0\""
May 8 00:13:24.471328 containerd[1432]: time="2025-05-08T00:13:24.471054492Z" level=info msg="StartContainer for \"65ecc7cf572683b5da9f2939cddb080ae4d7cb7c31606b1f75abda15e5d85de0\""
May 8 00:13:24.498420 systemd[1]: Started cri-containerd-65ecc7cf572683b5da9f2939cddb080ae4d7cb7c31606b1f75abda15e5d85de0.scope - libcontainer container 65ecc7cf572683b5da9f2939cddb080ae4d7cb7c31606b1f75abda15e5d85de0.
May 8 00:13:24.534934 containerd[1432]: time="2025-05-08T00:13:24.534709310Z" level=info msg="StartContainer for \"65ecc7cf572683b5da9f2939cddb080ae4d7cb7c31606b1f75abda15e5d85de0\" returns successfully"
May 8 00:13:25.489558 kubelet[2457]: I0508 00:13:25.489236 2457 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cq2pg" podStartSLOduration=4.489220204 podStartE2EDuration="4.489220204s" podCreationTimestamp="2025-05-08 00:13:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:13:22.484976449 +0000 UTC m=+6.113805435" watchObservedRunningTime="2025-05-08 00:13:25.489220204 +0000 UTC m=+9.118049150"
May 8 00:13:25.489558 kubelet[2457]: I0508 00:13:25.489375 2457 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6f6897fdc5-btcmq" podStartSLOduration=2.220202419 podStartE2EDuration="4.489370011s" podCreationTimestamp="2025-05-08 00:13:21 +0000 UTC" firstStartedPulling="2025-05-08 00:13:22.190092448 +0000 UTC m=+5.818921394" lastFinishedPulling="2025-05-08 00:13:24.45926004 +0000 UTC m=+8.088088986" observedRunningTime="2025-05-08 00:13:25.489339161 +0000 UTC m=+9.118168147" watchObservedRunningTime="2025-05-08 00:13:25.489370011 +0000 UTC m=+9.118198957"
May 8 00:13:27.057375 update_engine[1422]: I20250508 00:13:27.057304 1422 update_attempter.cc:509] Updating boot flags...
May 8 00:13:27.115329 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2850)
May 8 00:13:27.163544 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2853)
May 8 00:13:28.116141 kubelet[2457]: E0508 00:13:28.115835 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:13:28.410509 systemd[1]: Created slice kubepods-besteffort-pod38442b32_fb01_4a01_b675_fa202b100160.slice - libcontainer container kubepods-besteffort-pod38442b32_fb01_4a01_b675_fa202b100160.slice.
May 8 00:13:28.415208 kubelet[2457]: I0508 00:13:28.414398 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tm7t\" (UniqueName: \"kubernetes.io/projected/38442b32-fb01-4a01-b675-fa202b100160-kube-api-access-8tm7t\") pod \"calico-typha-548d7c9475-6cpzt\" (UID: \"38442b32-fb01-4a01-b675-fa202b100160\") " pod="calico-system/calico-typha-548d7c9475-6cpzt"
May 8 00:13:28.415208 kubelet[2457]: I0508 00:13:28.414443 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/38442b32-fb01-4a01-b675-fa202b100160-typha-certs\") pod \"calico-typha-548d7c9475-6cpzt\" (UID: \"38442b32-fb01-4a01-b675-fa202b100160\") " pod="calico-system/calico-typha-548d7c9475-6cpzt"
May 8 00:13:28.415208 kubelet[2457]: I0508 00:13:28.414468 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38442b32-fb01-4a01-b675-fa202b100160-tigera-ca-bundle\") pod \"calico-typha-548d7c9475-6cpzt\" (UID: \"38442b32-fb01-4a01-b675-fa202b100160\") " pod="calico-system/calico-typha-548d7c9475-6cpzt"
May 8 00:13:28.456417 systemd[1]: Created slice kubepods-besteffort-pod9e61fb0b_19a9_44dc_9224_91c4f100ecdd.slice - libcontainer container kubepods-besteffort-pod9e61fb0b_19a9_44dc_9224_91c4f100ecdd.slice.
May 8 00:13:28.515380 kubelet[2457]: I0508 00:13:28.515269 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e61fb0b-19a9-44dc-9224-91c4f100ecdd-tigera-ca-bundle\") pod \"calico-node-h8nmm\" (UID: \"9e61fb0b-19a9-44dc-9224-91c4f100ecdd\") " pod="calico-system/calico-node-h8nmm"
May 8 00:13:28.515380 kubelet[2457]: I0508 00:13:28.515371 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9e61fb0b-19a9-44dc-9224-91c4f100ecdd-cni-net-dir\") pod \"calico-node-h8nmm\" (UID: \"9e61fb0b-19a9-44dc-9224-91c4f100ecdd\") " pod="calico-system/calico-node-h8nmm"
May 8 00:13:28.515380 kubelet[2457]: I0508 00:13:28.515391 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e61fb0b-19a9-44dc-9224-91c4f100ecdd-lib-modules\") pod \"calico-node-h8nmm\" (UID: \"9e61fb0b-19a9-44dc-9224-91c4f100ecdd\") " pod="calico-system/calico-node-h8nmm"
May 8 00:13:28.515563 kubelet[2457]: I0508 00:13:28.515409 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e61fb0b-19a9-44dc-9224-91c4f100ecdd-xtables-lock\") pod \"calico-node-h8nmm\" (UID: \"9e61fb0b-19a9-44dc-9224-91c4f100ecdd\") " pod="calico-system/calico-node-h8nmm"
May 8 00:13:28.515563 kubelet[2457]: I0508 00:13:28.515428 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9e61fb0b-19a9-44dc-9224-91c4f100ecdd-cni-log-dir\") pod \"calico-node-h8nmm\" (UID: \"9e61fb0b-19a9-44dc-9224-91c4f100ecdd\") " pod="calico-system/calico-node-h8nmm"
May 8 00:13:28.515563 kubelet[2457]: I0508 00:13:28.515467 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9e61fb0b-19a9-44dc-9224-91c4f100ecdd-flexvol-driver-host\") pod \"calico-node-h8nmm\" (UID: \"9e61fb0b-19a9-44dc-9224-91c4f100ecdd\") " pod="calico-system/calico-node-h8nmm"
May 8 00:13:28.515563 kubelet[2457]: I0508 00:13:28.515494 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9e61fb0b-19a9-44dc-9224-91c4f100ecdd-policysync\") pod \"calico-node-h8nmm\" (UID: \"9e61fb0b-19a9-44dc-9224-91c4f100ecdd\") " pod="calico-system/calico-node-h8nmm"
May 8 00:13:28.515563 kubelet[2457]: I0508 00:13:28.515508 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9e61fb0b-19a9-44dc-9224-91c4f100ecdd-node-certs\") pod \"calico-node-h8nmm\" (UID: \"9e61fb0b-19a9-44dc-9224-91c4f100ecdd\") " pod="calico-system/calico-node-h8nmm"
May 8 00:13:28.515668 kubelet[2457]: I0508 00:13:28.515524 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9e61fb0b-19a9-44dc-9224-91c4f100ecdd-var-lib-calico\") pod \"calico-node-h8nmm\" (UID: \"9e61fb0b-19a9-44dc-9224-91c4f100ecdd\") " pod="calico-system/calico-node-h8nmm"
May 8 00:13:28.515668 kubelet[2457]: I0508 00:13:28.515539 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6j2m\" (UniqueName: \"kubernetes.io/projected/9e61fb0b-19a9-44dc-9224-91c4f100ecdd-kube-api-access-s6j2m\") pod \"calico-node-h8nmm\" (UID: \"9e61fb0b-19a9-44dc-9224-91c4f100ecdd\") " pod="calico-system/calico-node-h8nmm"
May 8 00:13:28.515668 kubelet[2457]: I0508 00:13:28.515556 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9e61fb0b-19a9-44dc-9224-91c4f100ecdd-cni-bin-dir\") pod \"calico-node-h8nmm\" (UID: \"9e61fb0b-19a9-44dc-9224-91c4f100ecdd\") " pod="calico-system/calico-node-h8nmm"
May 8 00:13:28.515668 kubelet[2457]: I0508 00:13:28.515571 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9e61fb0b-19a9-44dc-9224-91c4f100ecdd-var-run-calico\") pod \"calico-node-h8nmm\" (UID: \"9e61fb0b-19a9-44dc-9224-91c4f100ecdd\") " pod="calico-system/calico-node-h8nmm"
May 8 00:13:28.563550 kubelet[2457]: E0508 00:13:28.563031 2457 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w679k" podUID="afea4b03-2e4e-494b-bfd2-bbc94939e0ab"
May 8 00:13:28.617299 kubelet[2457]: I0508 00:13:28.616595 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/afea4b03-2e4e-494b-bfd2-bbc94939e0ab-registration-dir\") pod \"csi-node-driver-w679k\" (UID: \"afea4b03-2e4e-494b-bfd2-bbc94939e0ab\") " pod="calico-system/csi-node-driver-w679k"
May 8 00:13:28.617299 kubelet[2457]: I0508 00:13:28.616652 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/afea4b03-2e4e-494b-bfd2-bbc94939e0ab-varrun\") pod \"csi-node-driver-w679k\" (UID: \"afea4b03-2e4e-494b-bfd2-bbc94939e0ab\") " pod="calico-system/csi-node-driver-w679k"
May 8 00:13:28.617299 kubelet[2457]: I0508 00:13:28.616670 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/afea4b03-2e4e-494b-bfd2-bbc94939e0ab-socket-dir\") pod \"csi-node-driver-w679k\" (UID: \"afea4b03-2e4e-494b-bfd2-bbc94939e0ab\") " pod="calico-system/csi-node-driver-w679k"
May 8 00:13:28.617299 kubelet[2457]: I0508 00:13:28.616710 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwrcb\" (UniqueName: \"kubernetes.io/projected/afea4b03-2e4e-494b-bfd2-bbc94939e0ab-kube-api-access-gwrcb\") pod \"csi-node-driver-w679k\" (UID: \"afea4b03-2e4e-494b-bfd2-bbc94939e0ab\") " pod="calico-system/csi-node-driver-w679k"
May 8 00:13:28.617299 kubelet[2457]: I0508 00:13:28.616753 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/afea4b03-2e4e-494b-bfd2-bbc94939e0ab-kubelet-dir\") pod \"csi-node-driver-w679k\" (UID: \"afea4b03-2e4e-494b-bfd2-bbc94939e0ab\") " pod="calico-system/csi-node-driver-w679k"
May 8 00:13:28.636995 kubelet[2457]: E0508 00:13:28.636961 2457 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:13:28.637139 kubelet[2457]: W0508 00:13:28.637122 2457 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:13:28.637212 kubelet[2457]: E0508 00:13:28.637200 2457 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:13:28.637624 kubelet[2457]: E0508 00:13:28.637600 2457 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:13:28.638170 kubelet[2457]: W0508 00:13:28.638149 2457 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:13:28.638260 kubelet[2457]: E0508 00:13:28.638248 2457 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 00:13:28.716347 kubelet[2457]: E0508 00:13:28.715430 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:13:28.716725 containerd[1432]: time="2025-05-08T00:13:28.716674488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-548d7c9475-6cpzt,Uid:38442b32-fb01-4a01-b675-fa202b100160,Namespace:calico-system,Attempt:0,}"
May 8 00:13:28.718284 kubelet[2457]: E0508 00:13:28.718254 2457 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:13:28.718284 kubelet[2457]: W0508 00:13:28.718280 2457 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:13:28.718370 kubelet[2457]: E0508 00:13:28.718298 2457 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the preceding three kubelet messages repeat 25 more times between 00:13:28.718559 and 00:13:28.739232, identical apart from timestamps]
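
[annotation] The recurring dns.go "Nameserver limits exceeded" entries mean the node's /etc/resolv.conf lists more than three nameservers; the kubelet caps a pod's resolver config at three (the classic glibc resolver limit) and applies only the first three, here 1.1.1.1, 1.0.0.1 and 8.8.8.8. Roughly the sanitizing step, as a standalone sketch (not kubelet's actual code):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    const maxNameservers = 3 // kubelet's cap, mirroring the glibc resolver limit

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            fmt.Println("nameserver limits exceeded, omitting:", servers[maxNameservers:])
            servers = servers[:maxNameservers]
        }
        fmt.Println("applied nameserver line:", strings.Join(servers, " "))
    }
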
May 8 00:13:28.739984 containerd[1432]: time="2025-05-08T00:13:28.739910073Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:13:28.739984 containerd[1432]: time="2025-05-08T00:13:28.739956366Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:13:28.739984 containerd[1432]: time="2025-05-08T00:13:28.739967289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:13:28.740069 containerd[1432]: time="2025-05-08T00:13:28.740038548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:13:28.759592 systemd[1]: Started cri-containerd-8322bf1c6b3c037b61aa1e5a2e2a2593dd4898db24c5ed8ae3ac817f51939ae7.scope - libcontainer container 8322bf1c6b3c037b61aa1e5a2e2a2593dd4898db24c5ed8ae3ac817f51939ae7.
May 8 00:13:28.761414 kubelet[2457]: E0508 00:13:28.760045 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:13:28.761464 containerd[1432]: time="2025-05-08T00:13:28.760588525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-h8nmm,Uid:9e61fb0b-19a9-44dc-9224-91c4f100ecdd,Namespace:calico-system,Attempt:0,}"
May 8 00:13:28.783920 containerd[1432]: time="2025-05-08T00:13:28.783558838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:13:28.783920 containerd[1432]: time="2025-05-08T00:13:28.783765654Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:13:28.783920 containerd[1432]: time="2025-05-08T00:13:28.783788780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:13:28.784721 containerd[1432]: time="2025-05-08T00:13:28.784641732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:13:28.802489 systemd[1]: Started cri-containerd-df4b35468aad120b396f0b04f21829540e28cbb9fe35245f53c84672e0a5aa7d.scope - libcontainer container df4b35468aad120b396f0b04f21829540e28cbb9fe35245f53c84672e0a5aa7d.
May 8 00:13:28.804248 containerd[1432]: time="2025-05-08T00:13:28.804209482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-548d7c9475-6cpzt,Uid:38442b32-fb01-4a01-b675-fa202b100160,Namespace:calico-system,Attempt:0,} returns sandbox id \"8322bf1c6b3c037b61aa1e5a2e2a2593dd4898db24c5ed8ae3ac817f51939ae7\""
May 8 00:13:28.806083 kubelet[2457]: E0508 00:13:28.804943 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:13:28.807993 containerd[1432]: time="2025-05-08T00:13:28.807828424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\""
May 8 00:13:28.832234 containerd[1432]: time="2025-05-08T00:13:28.832151264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-h8nmm,Uid:9e61fb0b-19a9-44dc-9224-91c4f100ecdd,Namespace:calico-system,Attempt:0,} returns sandbox id \"df4b35468aad120b396f0b04f21829540e28cbb9fe35245f53c84672e0a5aa7d\""
May 8 00:13:28.834480 kubelet[2457]: E0508 00:13:28.834453 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:13:30.199616 containerd[1432]: time="2025-05-08T00:13:30.199555402Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:13:30.200076 containerd[1432]: time="2025-05-08T00:13:30.200033119Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571"
May 8 00:13:30.200768 containerd[1432]: time="2025-05-08T00:13:30.200728450Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:13:30.203182 containerd[1432]: time="2025-05-08T00:13:30.203145525Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:13:30.204020 containerd[1432]: time="2025-05-08T00:13:30.203986011Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 1.396121618s"
May 8 00:13:30.204081 containerd[1432]: time="2025-05-08T00:13:30.204057909Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\""
May 8 00:13:30.208830 containerd[1432]: time="2025-05-08T00:13:30.208651158Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\""
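
[annotation] The typha pull above transferred 28,370,571 bytes (the "stop pulling" counter) in the 1.396121618 s reported by the Pulled line, so roughly:

    28,370,571 B / 1.396121618 s ~ 20.3 MB/s (about 163 Mbit/s) from ghcr.io

The "size \"29739745\"" figure is the content size recorded for the repo digest; the slightly smaller bytes-read value plausibly reflects content that did not need to be re-fetched, though the log alone does not say.
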
May 8 00:13:30.229112 containerd[1432]: time="2025-05-08T00:13:30.229072980Z" level=info msg="CreateContainer within sandbox \"8322bf1c6b3c037b61aa1e5a2e2a2593dd4898db24c5ed8ae3ac817f51939ae7\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
May 8 00:13:30.243864 containerd[1432]: time="2025-05-08T00:13:30.243814204Z" level=info msg="CreateContainer within sandbox \"8322bf1c6b3c037b61aa1e5a2e2a2593dd4898db24c5ed8ae3ac817f51939ae7\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"70a000a1ff138e8194f9fdbc586b6063e037ec23cf32fcc120961a9abea3a19e\""
May 8 00:13:30.247032 containerd[1432]: time="2025-05-08T00:13:30.247001188Z" level=info msg="StartContainer for \"70a000a1ff138e8194f9fdbc586b6063e037ec23cf32fcc120961a9abea3a19e\""
May 8 00:13:30.277468 systemd[1]: Started cri-containerd-70a000a1ff138e8194f9fdbc586b6063e037ec23cf32fcc120961a9abea3a19e.scope - libcontainer container 70a000a1ff138e8194f9fdbc586b6063e037ec23cf32fcc120961a9abea3a19e.
May 8 00:13:30.319601 containerd[1432]: time="2025-05-08T00:13:30.319555548Z" level=info msg="StartContainer for \"70a000a1ff138e8194f9fdbc586b6063e037ec23cf32fcc120961a9abea3a19e\" returns successfully"
May 8 00:13:30.342511 kubelet[2457]: E0508 00:13:30.342467 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:13:30.425774 kubelet[2457]: E0508 00:13:30.425736 2457 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:13:30.425970 kubelet[2457]: W0508 00:13:30.425953 2457 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:13:30.426042 kubelet[2457]: E0508 00:13:30.426029 2457 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the preceding three kubelet messages repeat 4 more times between 00:13:30.426654 and 00:13:30.430349, identical apart from timestamps]
May 8 00:13:30.458334 kubelet[2457]: E0508 00:13:30.458184 2457 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w679k" podUID="afea4b03-2e4e-494b-bfd2-bbc94939e0ab"
May 8 00:13:30.503840 kubelet[2457]: E0508 00:13:30.503785 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:13:30.521969 kubelet[2457]: I0508 00:13:30.521897 2457 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-548d7c9475-6cpzt" podStartSLOduration=1.121072958 podStartE2EDuration="2.521878935s" podCreationTimestamp="2025-05-08 00:13:28 +0000 UTC" firstStartedPulling="2025-05-08 00:13:28.807571434 +0000 UTC m=+12.436400380" lastFinishedPulling="2025-05-08 00:13:30.208377411 +0000 UTC m=+13.837206357" observedRunningTime="2025-05-08 00:13:30.521113907 +0000 UTC m=+14.149942853" watchObservedRunningTime="2025-05-08 00:13:30.521878935 +0000 UTC m=+14.150707881"
May 8 00:13:30.531545 kubelet[2457]: E0508 00:13:30.531505 2457 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:13:30.531545 kubelet[2457]: W0508 00:13:30.531528 2457 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:13:30.531545 kubelet[2457]: E0508 00:13:30.531548 2457 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the preceding three kubelet messages repeat 32 more times between 00:13:30.531693 and 00:13:30.538430, identical apart from timestamps]
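
[annotation] The calico-typha latency entry decomposes the same way as the earlier ones:

    pull window : 00:13:30.208377411 - 00:13:28.807571434 = 1.400805977 s
    SLO duration: 2.521878935 s (E2E) - 1.400805977 s    = 1.121072958 s

which matches podStartSLOduration=1.121072958 exactly.
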
May 8 00:13:31.505222 kubelet[2457]: I0508 00:13:31.505184 2457 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 8 00:13:31.506504 kubelet[2457]: E0508 00:13:31.506475 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:13:31.540683 kubelet[2457]: E0508 00:13:31.540563 2457 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 00:13:31.540683 kubelet[2457]: W0508 00:13:31.540587 2457 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 00:13:31.540683 kubelet[2457]: E0508 00:13:31.540609 2457 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the preceding three kubelet messages repeat 11 more times between 00:13:31.540907 and 00:13:31.545052, identical apart from timestamps]
Error: unexpected end of JSON input" May 8 00:13:31.541182 kubelet[2457]: E0508 00:13:31.541169 2457 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:31.541331 kubelet[2457]: W0508 00:13:31.541225 2457 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:31.541331 kubelet[2457]: E0508 00:13:31.541239 2457 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:13:31.541567 kubelet[2457]: E0508 00:13:31.541439 2457 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:31.541567 kubelet[2457]: W0508 00:13:31.541450 2457 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:31.541567 kubelet[2457]: E0508 00:13:31.541460 2457 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:13:31.541750 kubelet[2457]: E0508 00:13:31.541736 2457 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:31.543127 kubelet[2457]: W0508 00:13:31.543020 2457 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:31.543127 kubelet[2457]: E0508 00:13:31.543040 2457 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:13:31.543295 kubelet[2457]: E0508 00:13:31.543268 2457 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:31.543448 kubelet[2457]: W0508 00:13:31.543352 2457 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:31.543448 kubelet[2457]: E0508 00:13:31.543370 2457 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:13:31.543577 kubelet[2457]: E0508 00:13:31.543565 2457 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:31.543703 kubelet[2457]: W0508 00:13:31.543617 2457 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:31.543703 kubelet[2457]: E0508 00:13:31.543631 2457 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:13:31.543840 kubelet[2457]: E0508 00:13:31.543828 2457 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:31.543998 kubelet[2457]: W0508 00:13:31.543890 2457 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:31.543998 kubelet[2457]: E0508 00:13:31.543905 2457 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:13:31.544149 kubelet[2457]: E0508 00:13:31.544135 2457 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:31.544307 kubelet[2457]: W0508 00:13:31.544191 2457 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:31.544307 kubelet[2457]: E0508 00:13:31.544206 2457 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:13:31.544452 kubelet[2457]: E0508 00:13:31.544438 2457 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:31.544586 kubelet[2457]: W0508 00:13:31.544494 2457 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:31.544586 kubelet[2457]: E0508 00:13:31.544511 2457 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:13:31.544724 kubelet[2457]: E0508 00:13:31.544711 2457 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:31.544859 kubelet[2457]: W0508 00:13:31.544766 2457 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:31.544859 kubelet[2457]: E0508 00:13:31.544781 2457 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:13:31.544994 kubelet[2457]: E0508 00:13:31.544982 2457 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:31.545155 kubelet[2457]: W0508 00:13:31.545037 2457 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:31.545155 kubelet[2457]: E0508 00:13:31.545052 2457 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:13:31.545289 kubelet[2457]: E0508 00:13:31.545263 2457 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:31.545343 kubelet[2457]: W0508 00:13:31.545333 2457 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:31.545405 kubelet[2457]: E0508 00:13:31.545394 2457 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:13:31.545713 kubelet[2457]: E0508 00:13:31.545699 2457 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:31.545778 kubelet[2457]: W0508 00:13:31.545768 2457 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:31.545838 kubelet[2457]: E0508 00:13:31.545827 2457 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:13:31.546089 kubelet[2457]: E0508 00:13:31.546076 2457 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:31.546338 kubelet[2457]: W0508 00:13:31.546147 2457 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:31.546338 kubelet[2457]: E0508 00:13:31.546163 2457 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:13:31.546494 kubelet[2457]: E0508 00:13:31.546480 2457 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:31.546547 kubelet[2457]: W0508 00:13:31.546537 2457 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:31.546596 kubelet[2457]: E0508 00:13:31.546586 2457 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:13:31.546859 kubelet[2457]: E0508 00:13:31.546845 2457 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:31.546925 kubelet[2457]: W0508 00:13:31.546915 2457 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:31.546991 kubelet[2457]: E0508 00:13:31.546980 2457 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:13:31.547257 kubelet[2457]: E0508 00:13:31.547237 2457 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:31.547257 kubelet[2457]: W0508 00:13:31.547255 2457 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:31.547352 kubelet[2457]: E0508 00:13:31.547283 2457 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:13:31.547630 kubelet[2457]: E0508 00:13:31.547605 2457 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:31.547630 kubelet[2457]: W0508 00:13:31.547619 2457 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:31.547688 kubelet[2457]: E0508 00:13:31.547633 2457 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:13:31.548067 kubelet[2457]: E0508 00:13:31.548044 2457 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:31.548067 kubelet[2457]: W0508 00:13:31.548064 2457 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:31.548115 kubelet[2457]: E0508 00:13:31.548079 2457 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:13:31.548440 kubelet[2457]: E0508 00:13:31.548426 2457 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:31.548440 kubelet[2457]: W0508 00:13:31.548438 2457 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:31.548503 kubelet[2457]: E0508 00:13:31.548471 2457 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:13:31.548704 kubelet[2457]: E0508 00:13:31.548693 2457 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:31.548704 kubelet[2457]: W0508 00:13:31.548703 2457 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:31.548774 kubelet[2457]: E0508 00:13:31.548752 2457 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:13:31.548979 kubelet[2457]: E0508 00:13:31.548967 2457 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:31.548979 kubelet[2457]: W0508 00:13:31.548979 2457 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:31.549054 kubelet[2457]: E0508 00:13:31.548994 2457 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:13:31.549519 kubelet[2457]: E0508 00:13:31.549507 2457 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:31.549554 kubelet[2457]: W0508 00:13:31.549520 2457 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:31.549554 kubelet[2457]: E0508 00:13:31.549535 2457 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:13:31.549744 kubelet[2457]: E0508 00:13:31.549733 2457 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:31.549744 kubelet[2457]: W0508 00:13:31.549744 2457 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:31.549801 kubelet[2457]: E0508 00:13:31.549758 2457 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:13:31.550217 kubelet[2457]: E0508 00:13:31.550204 2457 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:31.550217 kubelet[2457]: W0508 00:13:31.550217 2457 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:31.550486 kubelet[2457]: E0508 00:13:31.550298 2457 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:13:31.550656 kubelet[2457]: E0508 00:13:31.550644 2457 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:31.550656 kubelet[2457]: W0508 00:13:31.550656 2457 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:31.550790 kubelet[2457]: E0508 00:13:31.550713 2457 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:13:31.550953 kubelet[2457]: E0508 00:13:31.550940 2457 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:31.550953 kubelet[2457]: W0508 00:13:31.550953 2457 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:31.551075 kubelet[2457]: E0508 00:13:31.551004 2457 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:13:31.551457 kubelet[2457]: E0508 00:13:31.551444 2457 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:31.551457 kubelet[2457]: W0508 00:13:31.551456 2457 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:31.551552 kubelet[2457]: E0508 00:13:31.551470 2457 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:13:31.551793 kubelet[2457]: E0508 00:13:31.551781 2457 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:31.551793 kubelet[2457]: W0508 00:13:31.551793 2457 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:31.551849 kubelet[2457]: E0508 00:13:31.551802 2457 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:13:31.552172 kubelet[2457]: E0508 00:13:31.552160 2457 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:31.552200 kubelet[2457]: W0508 00:13:31.552172 2457 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:31.552200 kubelet[2457]: E0508 00:13:31.552181 2457 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:13:31.552637 kubelet[2457]: E0508 00:13:31.552625 2457 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:31.552637 kubelet[2457]: W0508 00:13:31.552637 2457 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:31.552711 kubelet[2457]: E0508 00:13:31.552651 2457 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:13:31.553682 kubelet[2457]: E0508 00:13:31.553666 2457 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:13:31.553682 kubelet[2457]: W0508 00:13:31.553680 2457 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:13:31.553751 kubelet[2457]: E0508 00:13:31.553691 2457 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:13:31.591911 containerd[1432]: time="2025-05-08T00:13:31.591863305Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:31.592325 containerd[1432]: time="2025-05-08T00:13:31.592296487Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" May 8 00:13:31.593292 containerd[1432]: time="2025-05-08T00:13:31.593248150Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:31.595170 containerd[1432]: time="2025-05-08T00:13:31.595137713Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:31.596135 containerd[1432]: time="2025-05-08T00:13:31.595803549Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 1.387116822s" May 8 00:13:31.596135 containerd[1432]: time="2025-05-08T00:13:31.595839037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 8 00:13:31.598993 containerd[1432]: time="2025-05-08T00:13:31.598960808Z" level=info msg="CreateContainer within sandbox \"df4b35468aad120b396f0b04f21829540e28cbb9fe35245f53c84672e0a5aa7d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 8 00:13:31.613247 containerd[1432]: time="2025-05-08T00:13:31.613189903Z" level=info msg="CreateContainer within sandbox \"df4b35468aad120b396f0b04f21829540e28cbb9fe35245f53c84672e0a5aa7d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6beda8c90cea85675d1d6b0be0a2b240ca4ff4e87647a5da333428c9060e2217\"" May 8 00:13:31.613730 containerd[1432]: time="2025-05-08T00:13:31.613701342Z" level=info msg="StartContainer for \"6beda8c90cea85675d1d6b0be0a2b240ca4ff4e87647a5da333428c9060e2217\"" May 8 00:13:31.664449 systemd[1]: Started cri-containerd-6beda8c90cea85675d1d6b0be0a2b240ca4ff4e87647a5da333428c9060e2217.scope - libcontainer container 6beda8c90cea85675d1d6b0be0a2b240ca4ff4e87647a5da333428c9060e2217. 
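Note on the repeated FlexVolume errors above: kubelet's plugin prober periodically execs each driver found under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ with the argument "init" and expects a JSON status object on stdout. Because the nodeagent~uds/uds binary does not exist yet (the flexvol-driver container started just above is what installs it), the exec fails, the captured output is empty, and decoding the empty string fails with "unexpected end of JSON input". A minimal Go sketch of that call path; the type and function names here are illustrative, not kubelet's actual internals:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus approximates the JSON a FlexVolume driver must print for
// "init", e.g. {"status":"Success","capabilities":{"attach":false}}.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

// callDriver reproduces the two failures in the log: exec'ing a missing
// binary logs a "driver call failed" warning (Go's exec error text differs
// slightly depending on how the binary is looked up), and unmarshalling the
// empty captured output yields "unexpected end of JSON input".
func callDriver(path string, args ...string) (*driverStatus, error) {
	out, execErr := exec.Command(path, args...).Output()
	if execErr != nil {
		fmt.Printf("FlexVolume: driver call failed: executable: %s, args: %v, error: %v, output: %q\n",
			path, args, execErr, out)
	}
	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		return nil, fmt.Errorf("failed to unmarshal output for command: %s, output: %q, error: %v",
			args[0], out, err)
	}
	return &st, nil
}

func main() {
	_, err := callDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds", "init")
	fmt.Println(err)
}

Once the flexvol-driver container has copied the uds binary into place, the same probe should start returning a well-formed status object and these messages stop.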
May 8 00:13:31.709498 systemd[1]: cri-containerd-6beda8c90cea85675d1d6b0be0a2b240ca4ff4e87647a5da333428c9060e2217.scope: Deactivated successfully. May 8 00:13:31.729848 containerd[1432]: time="2025-05-08T00:13:31.729796146Z" level=info msg="StartContainer for \"6beda8c90cea85675d1d6b0be0a2b240ca4ff4e87647a5da333428c9060e2217\" returns successfully" May 8 00:13:31.752629 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6beda8c90cea85675d1d6b0be0a2b240ca4ff4e87647a5da333428c9060e2217-rootfs.mount: Deactivated successfully. May 8 00:13:31.763313 containerd[1432]: time="2025-05-08T00:13:31.756980716Z" level=info msg="shim disconnected" id=6beda8c90cea85675d1d6b0be0a2b240ca4ff4e87647a5da333428c9060e2217 namespace=k8s.io May 8 00:13:31.763313 containerd[1432]: time="2025-05-08T00:13:31.762081831Z" level=warning msg="cleaning up after shim disconnected" id=6beda8c90cea85675d1d6b0be0a2b240ca4ff4e87647a5da333428c9060e2217 namespace=k8s.io May 8 00:13:31.763313 containerd[1432]: time="2025-05-08T00:13:31.762096274Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:13:32.456630 kubelet[2457]: E0508 00:13:32.456536 2457 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w679k" podUID="afea4b03-2e4e-494b-bfd2-bbc94939e0ab" May 8 00:13:32.510359 kubelet[2457]: E0508 00:13:32.509222 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:13:32.512472 containerd[1432]: time="2025-05-08T00:13:32.512436145Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 8 00:13:34.456389 kubelet[2457]: E0508 00:13:34.456300 2457 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w679k" podUID="afea4b03-2e4e-494b-bfd2-bbc94939e0ab" May 8 00:13:35.722270 containerd[1432]: time="2025-05-08T00:13:35.722223714Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:35.724041 containerd[1432]: time="2025-05-08T00:13:35.722942414Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" May 8 00:13:35.724041 containerd[1432]: time="2025-05-08T00:13:35.723793340Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:35.726695 containerd[1432]: time="2025-05-08T00:13:35.726663539Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:35.727892 containerd[1432]: time="2025-05-08T00:13:35.727860453Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 3.21520782s" May 8 00:13:35.728016 containerd[1432]: time="2025-05-08T00:13:35.727998880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" May 8 00:13:35.730163 containerd[1432]: time="2025-05-08T00:13:35.730131375Z" level=info msg="CreateContainer within sandbox \"df4b35468aad120b396f0b04f21829540e28cbb9fe35245f53c84672e0a5aa7d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 8 00:13:35.752696 containerd[1432]: time="2025-05-08T00:13:35.752125221Z" level=info msg="CreateContainer within sandbox \"df4b35468aad120b396f0b04f21829540e28cbb9fe35245f53c84672e0a5aa7d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"98e442c1b5149c97c909c7c58030fca5936fd295a85fcbe8cc74c818f8c2ed93\"" May 8 00:13:35.752696 containerd[1432]: time="2025-05-08T00:13:35.752640522Z" level=info msg="StartContainer for \"98e442c1b5149c97c909c7c58030fca5936fd295a85fcbe8cc74c818f8c2ed93\"" May 8 00:13:35.786472 systemd[1]: run-containerd-runc-k8s.io-98e442c1b5149c97c909c7c58030fca5936fd295a85fcbe8cc74c818f8c2ed93-runc.j8RVPq.mount: Deactivated successfully. May 8 00:13:35.799494 systemd[1]: Started cri-containerd-98e442c1b5149c97c909c7c58030fca5936fd295a85fcbe8cc74c818f8c2ed93.scope - libcontainer container 98e442c1b5149c97c909c7c58030fca5936fd295a85fcbe8cc74c818f8c2ed93. May 8 00:13:35.826477 containerd[1432]: time="2025-05-08T00:13:35.826433461Z" level=info msg="StartContainer for \"98e442c1b5149c97c909c7c58030fca5936fd295a85fcbe8cc74c818f8c2ed93\" returns successfully" May 8 00:13:36.425497 containerd[1432]: time="2025-05-08T00:13:36.425433508Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 00:13:36.434235 systemd[1]: cri-containerd-98e442c1b5149c97c909c7c58030fca5936fd295a85fcbe8cc74c818f8c2ed93.scope: Deactivated successfully. 
May 8 00:13:36.456130 kubelet[2457]: E0508 00:13:36.456078 2457 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w679k" podUID="afea4b03-2e4e-494b-bfd2-bbc94939e0ab" May 8 00:13:36.504944 kubelet[2457]: I0508 00:13:36.504588 2457 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 8 00:13:36.515937 containerd[1432]: time="2025-05-08T00:13:36.515429332Z" level=info msg="shim disconnected" id=98e442c1b5149c97c909c7c58030fca5936fd295a85fcbe8cc74c818f8c2ed93 namespace=k8s.io May 8 00:13:36.515937 containerd[1432]: time="2025-05-08T00:13:36.515521349Z" level=warning msg="cleaning up after shim disconnected" id=98e442c1b5149c97c909c7c58030fca5936fd295a85fcbe8cc74c818f8c2ed93 namespace=k8s.io May 8 00:13:36.515937 containerd[1432]: time="2025-05-08T00:13:36.515534791Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:13:36.526997 kubelet[2457]: E0508 00:13:36.526918 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:13:36.563067 systemd[1]: Created slice kubepods-burstable-podba17bc01_d9db_4f11_a32c_d317dc8f04b0.slice - libcontainer container kubepods-burstable-podba17bc01_d9db_4f11_a32c_d317dc8f04b0.slice. May 8 00:13:36.584547 kubelet[2457]: I0508 00:13:36.584505 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk2qt\" (UniqueName: \"kubernetes.io/projected/12b9a996-dd18-4e57-9070-f80361b7270b-kube-api-access-pk2qt\") pod \"calico-apiserver-79c5df75c-scvb7\" (UID: \"12b9a996-dd18-4e57-9070-f80361b7270b\") " pod="calico-apiserver/calico-apiserver-79c5df75c-scvb7" May 8 00:13:36.584547 kubelet[2457]: I0508 00:13:36.584549 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gn7b\" (UniqueName: \"kubernetes.io/projected/cdbc0319-9800-491a-bfc5-f62d3ecc390b-kube-api-access-8gn7b\") pod \"calico-kube-controllers-749dfb98f-2zbqs\" (UID: \"cdbc0319-9800-491a-bfc5-f62d3ecc390b\") " pod="calico-system/calico-kube-controllers-749dfb98f-2zbqs" May 8 00:13:36.584707 kubelet[2457]: I0508 00:13:36.584569 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/276ac77e-604a-4871-b9fd-fa9015b47098-calico-apiserver-certs\") pod \"calico-apiserver-79c5df75c-cmszx\" (UID: \"276ac77e-604a-4871-b9fd-fa9015b47098\") " pod="calico-apiserver/calico-apiserver-79c5df75c-cmszx" May 8 00:13:36.584707 kubelet[2457]: I0508 00:13:36.584587 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtswz\" (UniqueName: \"kubernetes.io/projected/276ac77e-604a-4871-b9fd-fa9015b47098-kube-api-access-rtswz\") pod \"calico-apiserver-79c5df75c-cmszx\" (UID: \"276ac77e-604a-4871-b9fd-fa9015b47098\") " pod="calico-apiserver/calico-apiserver-79c5df75c-cmszx" May 8 00:13:36.584707 kubelet[2457]: I0508 00:13:36.584617 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/12b9a996-dd18-4e57-9070-f80361b7270b-calico-apiserver-certs\") pod 
\"calico-apiserver-79c5df75c-scvb7\" (UID: \"12b9a996-dd18-4e57-9070-f80361b7270b\") " pod="calico-apiserver/calico-apiserver-79c5df75c-scvb7" May 8 00:13:36.584707 kubelet[2457]: I0508 00:13:36.584634 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cdbc0319-9800-491a-bfc5-f62d3ecc390b-tigera-ca-bundle\") pod \"calico-kube-controllers-749dfb98f-2zbqs\" (UID: \"cdbc0319-9800-491a-bfc5-f62d3ecc390b\") " pod="calico-system/calico-kube-controllers-749dfb98f-2zbqs" May 8 00:13:36.584707 kubelet[2457]: I0508 00:13:36.584652 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ba17bc01-d9db-4f11-a32c-d317dc8f04b0-config-volume\") pod \"coredns-6f6b679f8f-dpj96\" (UID: \"ba17bc01-d9db-4f11-a32c-d317dc8f04b0\") " pod="kube-system/coredns-6f6b679f8f-dpj96" May 8 00:13:36.584823 kubelet[2457]: I0508 00:13:36.584668 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdp7j\" (UniqueName: \"kubernetes.io/projected/ba17bc01-d9db-4f11-a32c-d317dc8f04b0-kube-api-access-gdp7j\") pod \"coredns-6f6b679f8f-dpj96\" (UID: \"ba17bc01-d9db-4f11-a32c-d317dc8f04b0\") " pod="kube-system/coredns-6f6b679f8f-dpj96" May 8 00:13:36.584823 kubelet[2457]: I0508 00:13:36.584686 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3ed82ee-0db3-4b0d-9d13-06ab474af0f9-config-volume\") pod \"coredns-6f6b679f8f-6rqx8\" (UID: \"a3ed82ee-0db3-4b0d-9d13-06ab474af0f9\") " pod="kube-system/coredns-6f6b679f8f-6rqx8" May 8 00:13:36.584823 kubelet[2457]: I0508 00:13:36.584721 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g5sc\" (UniqueName: \"kubernetes.io/projected/a3ed82ee-0db3-4b0d-9d13-06ab474af0f9-kube-api-access-6g5sc\") pod \"coredns-6f6b679f8f-6rqx8\" (UID: \"a3ed82ee-0db3-4b0d-9d13-06ab474af0f9\") " pod="kube-system/coredns-6f6b679f8f-6rqx8" May 8 00:13:36.585864 systemd[1]: Created slice kubepods-besteffort-podcdbc0319_9800_491a_bfc5_f62d3ecc390b.slice - libcontainer container kubepods-besteffort-podcdbc0319_9800_491a_bfc5_f62d3ecc390b.slice. May 8 00:13:36.593500 systemd[1]: Created slice kubepods-burstable-poda3ed82ee_0db3_4b0d_9d13_06ab474af0f9.slice - libcontainer container kubepods-burstable-poda3ed82ee_0db3_4b0d_9d13_06ab474af0f9.slice. May 8 00:13:36.599343 systemd[1]: Created slice kubepods-besteffort-pod12b9a996_dd18_4e57_9070_f80361b7270b.slice - libcontainer container kubepods-besteffort-pod12b9a996_dd18_4e57_9070_f80361b7270b.slice. May 8 00:13:36.603233 systemd[1]: Created slice kubepods-besteffort-pod276ac77e_604a_4871_b9fd_fa9015b47098.slice - libcontainer container kubepods-besteffort-pod276ac77e_604a_4871_b9fd_fa9015b47098.slice. May 8 00:13:36.752817 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98e442c1b5149c97c909c7c58030fca5936fd295a85fcbe8cc74c818f8c2ed93-rootfs.mount: Deactivated successfully. 
May 8 00:13:36.868969 kubelet[2457]: E0508 00:13:36.868889 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:13:36.870902 containerd[1432]: time="2025-05-08T00:13:36.870736315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-dpj96,Uid:ba17bc01-d9db-4f11-a32c-d317dc8f04b0,Namespace:kube-system,Attempt:0,}" May 8 00:13:36.891377 containerd[1432]: time="2025-05-08T00:13:36.891330436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-749dfb98f-2zbqs,Uid:cdbc0319-9800-491a-bfc5-f62d3ecc390b,Namespace:calico-system,Attempt:0,}" May 8 00:13:36.897766 kubelet[2457]: E0508 00:13:36.896593 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:13:36.899656 containerd[1432]: time="2025-05-08T00:13:36.899612460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6rqx8,Uid:a3ed82ee-0db3-4b0d-9d13-06ab474af0f9,Namespace:kube-system,Attempt:0,}" May 8 00:13:36.906490 containerd[1432]: time="2025-05-08T00:13:36.906435893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79c5df75c-scvb7,Uid:12b9a996-dd18-4e57-9070-f80361b7270b,Namespace:calico-apiserver,Attempt:0,}" May 8 00:13:36.906692 containerd[1432]: time="2025-05-08T00:13:36.906669096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79c5df75c-cmszx,Uid:276ac77e-604a-4871-b9fd-fa9015b47098,Namespace:calico-apiserver,Attempt:0,}" May 8 00:13:37.290793 containerd[1432]: time="2025-05-08T00:13:37.290641514Z" level=error msg="Failed to destroy network for sandbox \"c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:13:37.291042 containerd[1432]: time="2025-05-08T00:13:37.290995178Z" level=error msg="encountered an error cleaning up failed sandbox \"c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:13:37.291175 containerd[1432]: time="2025-05-08T00:13:37.291042466Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6rqx8,Uid:a3ed82ee-0db3-4b0d-9d13-06ab474af0f9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:13:37.295621 kubelet[2457]: E0508 00:13:37.295143 2457 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:13:37.295621 
kubelet[2457]: E0508 00:13:37.295247 2457 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-6rqx8" May 8 00:13:37.295621 kubelet[2457]: E0508 00:13:37.295271 2457 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-6rqx8" May 8 00:13:37.296488 kubelet[2457]: E0508 00:13:37.295343 2457 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-6rqx8_kube-system(a3ed82ee-0db3-4b0d-9d13-06ab474af0f9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-6rqx8_kube-system(a3ed82ee-0db3-4b0d-9d13-06ab474af0f9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-6rqx8" podUID="a3ed82ee-0db3-4b0d-9d13-06ab474af0f9" May 8 00:13:37.300318 containerd[1432]: time="2025-05-08T00:13:37.300159295Z" level=error msg="Failed to destroy network for sandbox \"4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:13:37.301222 containerd[1432]: time="2025-05-08T00:13:37.301099783Z" level=error msg="encountered an error cleaning up failed sandbox \"4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:13:37.301501 containerd[1432]: time="2025-05-08T00:13:37.301464568Z" level=error msg="Failed to destroy network for sandbox \"89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:13:37.301570 containerd[1432]: time="2025-05-08T00:13:37.301465568Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79c5df75c-scvb7,Uid:12b9a996-dd18-4e57-9070-f80361b7270b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:13:37.301845 
kubelet[2457]: E0508 00:13:37.301748 2457 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:13:37.301845 kubelet[2457]: E0508 00:13:37.301813 2457 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79c5df75c-scvb7" May 8 00:13:37.301845 kubelet[2457]: E0508 00:13:37.301833 2457 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79c5df75c-scvb7" May 8 00:13:37.304040 containerd[1432]: time="2025-05-08T00:13:37.302511875Z" level=error msg="encountered an error cleaning up failed sandbox \"89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:13:37.304040 containerd[1432]: time="2025-05-08T00:13:37.302562004Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-749dfb98f-2zbqs,Uid:cdbc0319-9800-491a-bfc5-f62d3ecc390b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:13:37.304180 kubelet[2457]: E0508 00:13:37.302708 2457 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:13:37.304180 kubelet[2457]: E0508 00:13:37.302748 2457 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-749dfb98f-2zbqs" May 8 00:13:37.304180 kubelet[2457]: E0508 00:13:37.302765 2457 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-749dfb98f-2zbqs" May 8 00:13:37.304342 containerd[1432]: time="2025-05-08T00:13:37.304043669Z" level=error msg="Failed to destroy network for sandbox \"28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:13:37.304371 kubelet[2457]: E0508 00:13:37.302794 2457 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-749dfb98f-2zbqs_calico-system(cdbc0319-9800-491a-bfc5-f62d3ecc390b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-749dfb98f-2zbqs_calico-system(cdbc0319-9800-491a-bfc5-f62d3ecc390b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-749dfb98f-2zbqs" podUID="cdbc0319-9800-491a-bfc5-f62d3ecc390b" May 8 00:13:37.304419 containerd[1432]: time="2025-05-08T00:13:37.304352764Z" level=error msg="encountered an error cleaning up failed sandbox \"28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:13:37.304419 containerd[1432]: time="2025-05-08T00:13:37.304389410Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79c5df75c-cmszx,Uid:276ac77e-604a-4871-b9fd-fa9015b47098,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:13:37.304800 containerd[1432]: time="2025-05-08T00:13:37.304777680Z" level=error msg="Failed to destroy network for sandbox \"251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:13:37.304917 kubelet[2457]: E0508 00:13:37.304533 2457 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:13:37.304917 kubelet[2457]: E0508 00:13:37.304578 2457 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc 
= failed to setup network for sandbox \"28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79c5df75c-cmszx" May 8 00:13:37.304917 kubelet[2457]: E0508 00:13:37.304595 2457 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79c5df75c-cmszx" May 8 00:13:37.305017 kubelet[2457]: E0508 00:13:37.304623 2457 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79c5df75c-cmszx_calico-apiserver(276ac77e-604a-4871-b9fd-fa9015b47098)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79c5df75c-cmszx_calico-apiserver(276ac77e-604a-4871-b9fd-fa9015b47098)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79c5df75c-cmszx" podUID="276ac77e-604a-4871-b9fd-fa9015b47098" May 8 00:13:37.305376 containerd[1432]: time="2025-05-08T00:13:37.305345101Z" level=error msg="encountered an error cleaning up failed sandbox \"251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:13:37.306625 containerd[1432]: time="2025-05-08T00:13:37.305389429Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-dpj96,Uid:ba17bc01-d9db-4f11-a32c-d317dc8f04b0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:13:37.306790 kubelet[2457]: E0508 00:13:37.306757 2457 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:13:37.306840 kubelet[2457]: E0508 00:13:37.306796 2457 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-6f6b679f8f-dpj96" May 8 00:13:37.306840 kubelet[2457]: E0508 00:13:37.306813 2457 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-dpj96" May 8 00:13:37.306894 kubelet[2457]: E0508 00:13:37.306838 2457 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-dpj96_kube-system(ba17bc01-d9db-4f11-a32c-d317dc8f04b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-dpj96_kube-system(ba17bc01-d9db-4f11-a32c-d317dc8f04b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-dpj96" podUID="ba17bc01-d9db-4f11-a32c-d317dc8f04b0" May 8 00:13:37.309543 kubelet[2457]: E0508 00:13:37.301879 2457 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79c5df75c-scvb7_calico-apiserver(12b9a996-dd18-4e57-9070-f80361b7270b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79c5df75c-scvb7_calico-apiserver(12b9a996-dd18-4e57-9070-f80361b7270b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79c5df75c-scvb7" podUID="12b9a996-dd18-4e57-9070-f80361b7270b" May 8 00:13:37.531332 kubelet[2457]: E0508 00:13:37.531300 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:13:37.532978 kubelet[2457]: I0508 00:13:37.532046 2457 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f" May 8 00:13:37.533061 containerd[1432]: time="2025-05-08T00:13:37.532016275Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 8 00:13:37.533425 containerd[1432]: time="2025-05-08T00:13:37.533354954Z" level=info msg="StopPodSandbox for \"28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f\"" May 8 00:13:37.533551 containerd[1432]: time="2025-05-08T00:13:37.533522864Z" level=info msg="Ensure that sandbox 28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f in task-service has been cleanup successfully" May 8 00:13:37.536371 kubelet[2457]: I0508 00:13:37.536267 2457 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8" May 8 00:13:37.536803 containerd[1432]: time="2025-05-08T00:13:37.536761403Z" level=info msg="StopPodSandbox for 
\"4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8\"" May 8 00:13:37.537816 containerd[1432]: time="2025-05-08T00:13:37.536928113Z" level=info msg="Ensure that sandbox 4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8 in task-service has been cleanup successfully" May 8 00:13:37.538055 kubelet[2457]: I0508 00:13:37.537924 2457 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da" May 8 00:13:37.539290 containerd[1432]: time="2025-05-08T00:13:37.539167153Z" level=info msg="StopPodSandbox for \"89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da\"" May 8 00:13:37.540455 containerd[1432]: time="2025-05-08T00:13:37.539794185Z" level=info msg="Ensure that sandbox 89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da in task-service has been cleanup successfully" May 8 00:13:37.550625 kubelet[2457]: I0508 00:13:37.550507 2457 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88" May 8 00:13:37.552083 containerd[1432]: time="2025-05-08T00:13:37.551854939Z" level=info msg="StopPodSandbox for \"c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88\"" May 8 00:13:37.552190 containerd[1432]: time="2025-05-08T00:13:37.552120627Z" level=info msg="Ensure that sandbox c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88 in task-service has been cleanup successfully" May 8 00:13:37.554166 kubelet[2457]: I0508 00:13:37.554110 2457 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251" May 8 00:13:37.554709 containerd[1432]: time="2025-05-08T00:13:37.554667922Z" level=info msg="StopPodSandbox for \"251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251\"" May 8 00:13:37.555328 containerd[1432]: time="2025-05-08T00:13:37.555296834Z" level=info msg="Ensure that sandbox 251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251 in task-service has been cleanup successfully" May 8 00:13:37.591360 containerd[1432]: time="2025-05-08T00:13:37.590419309Z" level=error msg="StopPodSandbox for \"4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8\" failed" error="failed to destroy network for sandbox \"4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:13:37.591634 kubelet[2457]: E0508 00:13:37.591577 2457 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8" May 8 00:13:37.591709 kubelet[2457]: E0508 00:13:37.591654 2457 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8"} May 8 00:13:37.591736 kubelet[2457]: E0508 00:13:37.591719 2457 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to 
\"KillPodSandbox\" for \"12b9a996-dd18-4e57-9070-f80361b7270b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:13:37.591803 kubelet[2457]: E0508 00:13:37.591741 2457 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"12b9a996-dd18-4e57-9070-f80361b7270b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79c5df75c-scvb7" podUID="12b9a996-dd18-4e57-9070-f80361b7270b" May 8 00:13:37.592208 containerd[1432]: time="2025-05-08T00:13:37.592154579Z" level=error msg="StopPodSandbox for \"28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f\" failed" error="failed to destroy network for sandbox \"28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:13:37.592398 kubelet[2457]: E0508 00:13:37.592364 2457 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f" May 8 00:13:37.592455 kubelet[2457]: E0508 00:13:37.592403 2457 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f"} May 8 00:13:37.592455 kubelet[2457]: E0508 00:13:37.592430 2457 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"276ac77e-604a-4871-b9fd-fa9015b47098\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:13:37.592455 kubelet[2457]: E0508 00:13:37.592448 2457 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"276ac77e-604a-4871-b9fd-fa9015b47098\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79c5df75c-cmszx" podUID="276ac77e-604a-4871-b9fd-fa9015b47098" May 8 00:13:37.597323 
containerd[1432]: time="2025-05-08T00:13:37.597086940Z" level=error msg="StopPodSandbox for \"c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88\" failed" error="failed to destroy network for sandbox \"c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:13:37.597452 kubelet[2457]: E0508 00:13:37.597334 2457 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88" May 8 00:13:37.597452 kubelet[2457]: E0508 00:13:37.597375 2457 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88"} May 8 00:13:37.597452 kubelet[2457]: E0508 00:13:37.597402 2457 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a3ed82ee-0db3-4b0d-9d13-06ab474af0f9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:13:37.597452 kubelet[2457]: E0508 00:13:37.597420 2457 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a3ed82ee-0db3-4b0d-9d13-06ab474af0f9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-6rqx8" podUID="a3ed82ee-0db3-4b0d-9d13-06ab474af0f9" May 8 00:13:37.599910 containerd[1432]: time="2025-05-08T00:13:37.599864076Z" level=error msg="StopPodSandbox for \"251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251\" failed" error="failed to destroy network for sandbox \"251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:13:37.600225 kubelet[2457]: E0508 00:13:37.600190 2457 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251" May 8 00:13:37.600347 kubelet[2457]: E0508 00:13:37.600246 2457 kuberuntime_manager.go:1477] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251"} May 8 00:13:37.600451 kubelet[2457]: E0508 00:13:37.600367 2457 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ba17bc01-d9db-4f11-a32c-d317dc8f04b0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:13:37.600451 kubelet[2457]: E0508 00:13:37.600407 2457 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ba17bc01-d9db-4f11-a32c-d317dc8f04b0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-dpj96" podUID="ba17bc01-d9db-4f11-a32c-d317dc8f04b0" May 8 00:13:37.605644 containerd[1432]: time="2025-05-08T00:13:37.605590499Z" level=error msg="StopPodSandbox for \"89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da\" failed" error="failed to destroy network for sandbox \"89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:13:37.605830 kubelet[2457]: E0508 00:13:37.605800 2457 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da" May 8 00:13:37.605881 kubelet[2457]: E0508 00:13:37.605847 2457 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da"} May 8 00:13:37.605881 kubelet[2457]: E0508 00:13:37.605879 2457 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cdbc0319-9800-491a-bfc5-f62d3ecc390b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:13:37.605960 kubelet[2457]: E0508 00:13:37.605901 2457 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cdbc0319-9800-491a-bfc5-f62d3ecc390b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-749dfb98f-2zbqs" podUID="cdbc0319-9800-491a-bfc5-f62d3ecc390b" May 8 00:13:37.745031 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da-shm.mount: Deactivated successfully. May 8 00:13:37.745135 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251-shm.mount: Deactivated successfully. May 8 00:13:38.464640 systemd[1]: Created slice kubepods-besteffort-podafea4b03_2e4e_494b_bfd2_bbc94939e0ab.slice - libcontainer container kubepods-besteffort-podafea4b03_2e4e_494b_bfd2_bbc94939e0ab.slice. May 8 00:13:38.468826 containerd[1432]: time="2025-05-08T00:13:38.468770270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w679k,Uid:afea4b03-2e4e-494b-bfd2-bbc94939e0ab,Namespace:calico-system,Attempt:0,}" May 8 00:13:38.531844 containerd[1432]: time="2025-05-08T00:13:38.531783383Z" level=error msg="Failed to destroy network for sandbox \"2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:13:38.533586 containerd[1432]: time="2025-05-08T00:13:38.533548966Z" level=error msg="encountered an error cleaning up failed sandbox \"2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:13:38.533699 containerd[1432]: time="2025-05-08T00:13:38.533618018Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w679k,Uid:afea4b03-2e4e-494b-bfd2-bbc94939e0ab,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:13:38.533879 kubelet[2457]: E0508 00:13:38.533835 2457 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:13:38.535735 kubelet[2457]: E0508 00:13:38.533896 2457 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w679k" May 8 00:13:38.535735 kubelet[2457]: E0508 00:13:38.533915 2457 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w679k" May 8 00:13:38.535735 kubelet[2457]: E0508 00:13:38.533961 2457 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-w679k_calico-system(afea4b03-2e4e-494b-bfd2-bbc94939e0ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-w679k_calico-system(afea4b03-2e4e-494b-bfd2-bbc94939e0ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w679k" podUID="afea4b03-2e4e-494b-bfd2-bbc94939e0ab" May 8 00:13:38.533710 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40-shm.mount: Deactivated successfully. May 8 00:13:38.560070 kubelet[2457]: I0508 00:13:38.559455 2457 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40" May 8 00:13:38.560626 containerd[1432]: time="2025-05-08T00:13:38.560202171Z" level=info msg="StopPodSandbox for \"2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40\"" May 8 00:13:38.560626 containerd[1432]: time="2025-05-08T00:13:38.560408447Z" level=info msg="Ensure that sandbox 2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40 in task-service has been cleanup successfully" May 8 00:13:38.589320 containerd[1432]: time="2025-05-08T00:13:38.589263749Z" level=error msg="StopPodSandbox for \"2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40\" failed" error="failed to destroy network for sandbox \"2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:13:38.589735 kubelet[2457]: E0508 00:13:38.589695 2457 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40" May 8 00:13:38.589793 kubelet[2457]: E0508 00:13:38.589749 2457 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40"} May 8 00:13:38.589827 kubelet[2457]: E0508 00:13:38.589798 2457 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"afea4b03-2e4e-494b-bfd2-bbc94939e0ab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:13:38.589879 kubelet[2457]: E0508 00:13:38.589822 2457 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"afea4b03-2e4e-494b-bfd2-bbc94939e0ab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w679k" podUID="afea4b03-2e4e-494b-bfd2-bbc94939e0ab" May 8 00:13:41.263706 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3966560281.mount: Deactivated successfully. May 8 00:13:41.438913 containerd[1432]: time="2025-05-08T00:13:41.438846922Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:41.439682 containerd[1432]: time="2025-05-08T00:13:41.439652364Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" May 8 00:13:41.440754 containerd[1432]: time="2025-05-08T00:13:41.440714085Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:41.442490 containerd[1432]: time="2025-05-08T00:13:41.442458110Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:41.443022 containerd[1432]: time="2025-05-08T00:13:41.442989871Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 3.910935669s" May 8 00:13:41.443022 containerd[1432]: time="2025-05-08T00:13:41.443017635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" May 8 00:13:41.466923 containerd[1432]: time="2025-05-08T00:13:41.466881379Z" level=info msg="CreateContainer within sandbox \"df4b35468aad120b396f0b04f21829540e28cbb9fe35245f53c84672e0a5aa7d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 8 00:13:41.485300 containerd[1432]: time="2025-05-08T00:13:41.485210482Z" level=info msg="CreateContainer within sandbox \"df4b35468aad120b396f0b04f21829540e28cbb9fe35245f53c84672e0a5aa7d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d8d32fcf208c1b88d9ff004ebdc5ca1d548866279424b577668570bfbd893a68\"" May 8 00:13:41.485932 containerd[1432]: time="2025-05-08T00:13:41.485899387Z" level=info msg="StartContainer for \"d8d32fcf208c1b88d9ff004ebdc5ca1d548866279424b577668570bfbd893a68\"" May 8 00:13:41.540913 systemd[1]: Started cri-containerd-d8d32fcf208c1b88d9ff004ebdc5ca1d548866279424b577668570bfbd893a68.scope - libcontainer container d8d32fcf208c1b88d9ff004ebdc5ca1d548866279424b577668570bfbd893a68. 
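Note: every sandbox ADD and DEL above fails on the same stat of /var/lib/calico/nodename. That file is written by the calico/node container once it starts, and until it exists the Calico CNI plugin refuses both setup and teardown, so kubelet keeps looping on CreatePodSandbox/KillPodSandbox; the image pull that completes here (bytes read=138981893, roughly 139 MB in about 3.9 s) is what finally lets calico-node start and break the loop. A minimal Go sketch of that readiness gate, reusing the path and error text from the log (an illustration, not Calico's actual source):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // nodenameFile is created by calico/node on startup; the CNI plugin
    // sees it through the hostPath mount of /var/lib/calico/.
    const nodenameFile = "/var/lib/calico/nodename"

    // calicoNodename mirrors the gate behind the errors above: resolve the
    // node name before any ADD or DEL, and fail with the familiar hint if
    // calico/node has not written the file yet.
    func calicoNodename() (string, error) {
        b, err := os.ReadFile(nodenameFile)
        if os.IsNotExist(err) {
            return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
        }
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(b)), nil
    }

    func main() {
        name, err := calicoNodename()
        if err != nil {
            fmt.Println("CNI not ready:", err)
            return
        }
        fmt.Println("node:", name)
    }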
May 8 00:13:41.575553 containerd[1432]: time="2025-05-08T00:13:41.575051085Z" level=info msg="StartContainer for \"d8d32fcf208c1b88d9ff004ebdc5ca1d548866279424b577668570bfbd893a68\" returns successfully" May 8 00:13:41.578805 kubelet[2457]: E0508 00:13:41.578780 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:13:41.595755 kubelet[2457]: I0508 00:13:41.595088 2457 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-h8nmm" podStartSLOduration=0.96969935 podStartE2EDuration="13.595073686s" podCreationTimestamp="2025-05-08 00:13:28 +0000 UTC" firstStartedPulling="2025-05-08 00:13:28.835152518 +0000 UTC m=+12.463981464" lastFinishedPulling="2025-05-08 00:13:41.460526894 +0000 UTC m=+25.089355800" observedRunningTime="2025-05-08 00:13:41.59404809 +0000 UTC m=+25.222877036" watchObservedRunningTime="2025-05-08 00:13:41.595073686 +0000 UTC m=+25.223902632" May 8 00:13:41.736308 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 8 00:13:41.736421 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. May 8 00:13:42.581534 kubelet[2457]: I0508 00:13:42.581505 2457 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:13:42.581890 kubelet[2457]: E0508 00:13:42.581875 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:13:46.604548 systemd[1]: Started sshd@7-10.0.0.14:22-10.0.0.1:38346.service - OpenSSH per-connection server daemon (10.0.0.1:38346). May 8 00:13:46.677191 sshd[3845]: Accepted publickey for core from 10.0.0.1 port 38346 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:13:46.678649 sshd[3845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:13:46.682336 systemd-logind[1416]: New session 8 of user core. May 8 00:13:46.693496 systemd[1]: Started session-8.scope - Session 8 of User core. May 8 00:13:46.838079 sshd[3845]: pam_unix(sshd:session): session closed for user core May 8 00:13:46.841442 systemd[1]: sshd@7-10.0.0.14:22-10.0.0.1:38346.service: Deactivated successfully. May 8 00:13:46.844457 systemd[1]: session-8.scope: Deactivated successfully. May 8 00:13:46.845220 systemd-logind[1416]: Session 8 logged out. Waiting for processes to exit. May 8 00:13:46.846099 systemd-logind[1416]: Removed session 8.
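Note: the pod_startup_latency_tracker entry above carries its own arithmetic. podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (00:13:41.595073686 - 00:13:28 = 13.595073686 s), and podStartSLOduration subtracts the image-pull window measured on the monotonic clock (the m=+ offsets): 13.595073686 - (25.089355800 - 12.463981464) = 0.96969935 s, i.e. image pulling does not count against the startup SLO. A short Go check of those numbers as read from the log:

    package main

    import "fmt"

    func main() {
        // Values copied from the "Observed pod startup duration" entry above.
        const (
            e2e       = 13.595073686 // watchObservedRunningTime - podCreationTimestamp, seconds
            pullStart = 12.463981464 // firstStartedPulling, monotonic m=+ offset
            pullEnd   = 25.089355800 // lastFinishedPulling, monotonic m=+ offset
        )
        slo := e2e - (pullEnd - pullStart) // pull time is excluded from the SLO
        fmt.Printf("podStartSLOduration=%.8f\n", slo) // 0.96969935
    }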
May 8 00:13:48.345322 kubelet[2457]: I0508 00:13:48.345267 2457 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:13:48.345791 kubelet[2457]: E0508 00:13:48.345642 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:13:48.590769 kubelet[2457]: E0508 00:13:48.590725 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:13:49.323086 kernel: bpftool[3957]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 8 00:13:49.457766 containerd[1432]: time="2025-05-08T00:13:49.457721599Z" level=info msg="StopPodSandbox for \"c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88\"" May 8 00:13:49.523924 systemd-networkd[1374]: vxlan.calico: Link UP May 8 00:13:49.523933 systemd-networkd[1374]: vxlan.calico: Gained carrier May 8 00:13:49.665838 containerd[1432]: 2025-05-08 00:13:49.575 [INFO][3997] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88" May 8 00:13:49.665838 containerd[1432]: 2025-05-08 00:13:49.576 [INFO][3997] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88" iface="eth0" netns="/var/run/netns/cni-25591068-2521-899e-2550-be901f8e0c11" May 8 00:13:49.665838 containerd[1432]: 2025-05-08 00:13:49.576 [INFO][3997] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88" iface="eth0" netns="/var/run/netns/cni-25591068-2521-899e-2550-be901f8e0c11" May 8 00:13:49.665838 containerd[1432]: 2025-05-08 00:13:49.579 [INFO][3997] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88" iface="eth0" netns="/var/run/netns/cni-25591068-2521-899e-2550-be901f8e0c11" May 8 00:13:49.665838 containerd[1432]: 2025-05-08 00:13:49.579 [INFO][3997] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88" May 8 00:13:49.665838 containerd[1432]: 2025-05-08 00:13:49.579 [INFO][3997] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88" May 8 00:13:49.665838 containerd[1432]: 2025-05-08 00:13:49.650 [INFO][4035] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88" HandleID="k8s-pod-network.c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88" Workload="localhost-k8s-coredns--6f6b679f8f--6rqx8-eth0" May 8 00:13:49.665838 containerd[1432]: 2025-05-08 00:13:49.650 [INFO][4035] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:13:49.665838 containerd[1432]: 2025-05-08 00:13:49.650 [INFO][4035] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:13:49.665838 containerd[1432]: 2025-05-08 00:13:49.659 [WARNING][4035] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88" HandleID="k8s-pod-network.c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88" Workload="localhost-k8s-coredns--6f6b679f8f--6rqx8-eth0" May 8 00:13:49.665838 containerd[1432]: 2025-05-08 00:13:49.659 [INFO][4035] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88" HandleID="k8s-pod-network.c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88" Workload="localhost-k8s-coredns--6f6b679f8f--6rqx8-eth0" May 8 00:13:49.665838 containerd[1432]: 2025-05-08 00:13:49.661 [INFO][4035] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:13:49.665838 containerd[1432]: 2025-05-08 00:13:49.663 [INFO][3997] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88" May 8 00:13:49.667805 containerd[1432]: time="2025-05-08T00:13:49.667765951Z" level=info msg="TearDown network for sandbox \"c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88\" successfully" May 8 00:13:49.667805 containerd[1432]: time="2025-05-08T00:13:49.667800875Z" level=info msg="StopPodSandbox for \"c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88\" returns successfully" May 8 00:13:49.668216 kubelet[2457]: E0508 00:13:49.668171 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:13:49.669534 containerd[1432]: time="2025-05-08T00:13:49.668823993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6rqx8,Uid:a3ed82ee-0db3-4b0d-9d13-06ab474af0f9,Namespace:kube-system,Attempt:1,}" May 8 00:13:49.672180 systemd[1]: run-netns-cni\x2d25591068\x2d2521\x2d899e\x2d2550\x2dbe901f8e0c11.mount: Deactivated successfully. 
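Note: with calico/node running, the StopPodSandbox that failed at 00:13:37 now completes (the veth is already gone, so only IPAM and the netns need cleaning), and kubelet immediately re-runs the coredns sandbox as Attempt:1. The Workload and HandleID strings in the trace follow a recoverable naming convention; a Go sketch of it, with the dash-escaping rule inferred from the names above (illustrative, not Calico's actual helper):

    package main

    import (
        "fmt"
        "strings"
    )

    // endpointName rebuilds names like "localhost-k8s-coredns--6f6b679f8f--6rqx8-eth0":
    // node, orchestrator, pod, and interface joined by '-', with dashes inside
    // each component doubled so the separators stay unambiguous.
    func endpointName(node, orchestrator, pod, iface string) string {
        esc := func(s string) string { return strings.ReplaceAll(s, "-", "--") }
        return strings.Join([]string{esc(node), orchestrator, esc(pod), iface}, "-")
    }

    // handleID is the IPAM key seen in the trace: network name plus sandbox ID.
    func handleID(network, containerID string) string {
        return network + "." + containerID
    }

    func main() {
        fmt.Println(endpointName("localhost", "k8s", "coredns-6f6b679f8f-6rqx8", "eth0"))
        // -> localhost-k8s-coredns--6f6b679f8f--6rqx8-eth0
        fmt.Println(handleID("k8s-pod-network", "c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88"))
    }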
May 8 00:13:49.951517 systemd-networkd[1374]: cali10ec180fb3e: Link UP May 8 00:13:49.951899 systemd-networkd[1374]: cali10ec180fb3e: Gained carrier May 8 00:13:49.969103 containerd[1432]: 2025-05-08 00:13:49.775 [INFO][4056] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--6rqx8-eth0 coredns-6f6b679f8f- kube-system a3ed82ee-0db3-4b0d-9d13-06ab474af0f9 842 0 2025-05-08 00:13:21 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-6rqx8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali10ec180fb3e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="2a6b98cd102ba4cf70484d2eb8ec2709217778cb417e53e8dbee6f7957d56377" Namespace="kube-system" Pod="coredns-6f6b679f8f-6rqx8" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--6rqx8-" May 8 00:13:49.969103 containerd[1432]: 2025-05-08 00:13:49.775 [INFO][4056] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2a6b98cd102ba4cf70484d2eb8ec2709217778cb417e53e8dbee6f7957d56377" Namespace="kube-system" Pod="coredns-6f6b679f8f-6rqx8" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--6rqx8-eth0" May 8 00:13:49.969103 containerd[1432]: 2025-05-08 00:13:49.803 [INFO][4091] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2a6b98cd102ba4cf70484d2eb8ec2709217778cb417e53e8dbee6f7957d56377" HandleID="k8s-pod-network.2a6b98cd102ba4cf70484d2eb8ec2709217778cb417e53e8dbee6f7957d56377" Workload="localhost-k8s-coredns--6f6b679f8f--6rqx8-eth0" May 8 00:13:49.969103 containerd[1432]: 2025-05-08 00:13:49.917 [INFO][4091] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2a6b98cd102ba4cf70484d2eb8ec2709217778cb417e53e8dbee6f7957d56377" HandleID="k8s-pod-network.2a6b98cd102ba4cf70484d2eb8ec2709217778cb417e53e8dbee6f7957d56377" Workload="localhost-k8s-coredns--6f6b679f8f--6rqx8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000293eb0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-6rqx8", "timestamp":"2025-05-08 00:13:49.803738332 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:13:49.969103 containerd[1432]: 2025-05-08 00:13:49.917 [INFO][4091] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:13:49.969103 containerd[1432]: 2025-05-08 00:13:49.917 [INFO][4091] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 8 00:13:49.969103 containerd[1432]: 2025-05-08 00:13:49.917 [INFO][4091] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:13:49.969103 containerd[1432]: 2025-05-08 00:13:49.919 [INFO][4091] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2a6b98cd102ba4cf70484d2eb8ec2709217778cb417e53e8dbee6f7957d56377" host="localhost" May 8 00:13:49.969103 containerd[1432]: 2025-05-08 00:13:49.930 [INFO][4091] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:13:49.969103 containerd[1432]: 2025-05-08 00:13:49.934 [INFO][4091] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:13:49.969103 containerd[1432]: 2025-05-08 00:13:49.936 [INFO][4091] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:13:49.969103 containerd[1432]: 2025-05-08 00:13:49.938 [INFO][4091] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:13:49.969103 containerd[1432]: 2025-05-08 00:13:49.938 [INFO][4091] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2a6b98cd102ba4cf70484d2eb8ec2709217778cb417e53e8dbee6f7957d56377" host="localhost" May 8 00:13:49.969103 containerd[1432]: 2025-05-08 00:13:49.939 [INFO][4091] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2a6b98cd102ba4cf70484d2eb8ec2709217778cb417e53e8dbee6f7957d56377 May 8 00:13:49.969103 containerd[1432]: 2025-05-08 00:13:49.942 [INFO][4091] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2a6b98cd102ba4cf70484d2eb8ec2709217778cb417e53e8dbee6f7957d56377" host="localhost" May 8 00:13:49.969103 containerd[1432]: 2025-05-08 00:13:49.947 [INFO][4091] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.2a6b98cd102ba4cf70484d2eb8ec2709217778cb417e53e8dbee6f7957d56377" host="localhost" May 8 00:13:49.969103 containerd[1432]: 2025-05-08 00:13:49.947 [INFO][4091] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.2a6b98cd102ba4cf70484d2eb8ec2709217778cb417e53e8dbee6f7957d56377" host="localhost" May 8 00:13:49.969103 containerd[1432]: 2025-05-08 00:13:49.947 [INFO][4091] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 8 00:13:49.969103 containerd[1432]: 2025-05-08 00:13:49.947 [INFO][4091] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="2a6b98cd102ba4cf70484d2eb8ec2709217778cb417e53e8dbee6f7957d56377" HandleID="k8s-pod-network.2a6b98cd102ba4cf70484d2eb8ec2709217778cb417e53e8dbee6f7957d56377" Workload="localhost-k8s-coredns--6f6b679f8f--6rqx8-eth0" May 8 00:13:49.969645 containerd[1432]: 2025-05-08 00:13:49.949 [INFO][4056] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2a6b98cd102ba4cf70484d2eb8ec2709217778cb417e53e8dbee6f7957d56377" Namespace="kube-system" Pod="coredns-6f6b679f8f-6rqx8" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--6rqx8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--6rqx8-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"a3ed82ee-0db3-4b0d-9d13-06ab474af0f9", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 13, 21, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-6rqx8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali10ec180fb3e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:13:49.969645 containerd[1432]: 2025-05-08 00:13:49.949 [INFO][4056] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="2a6b98cd102ba4cf70484d2eb8ec2709217778cb417e53e8dbee6f7957d56377" Namespace="kube-system" Pod="coredns-6f6b679f8f-6rqx8" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--6rqx8-eth0" May 8 00:13:49.969645 containerd[1432]: 2025-05-08 00:13:49.949 [INFO][4056] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali10ec180fb3e ContainerID="2a6b98cd102ba4cf70484d2eb8ec2709217778cb417e53e8dbee6f7957d56377" Namespace="kube-system" Pod="coredns-6f6b679f8f-6rqx8" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--6rqx8-eth0" May 8 00:13:49.969645 containerd[1432]: 2025-05-08 00:13:49.953 [INFO][4056] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2a6b98cd102ba4cf70484d2eb8ec2709217778cb417e53e8dbee6f7957d56377" Namespace="kube-system" Pod="coredns-6f6b679f8f-6rqx8" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--6rqx8-eth0" May 8 00:13:49.969645 containerd[1432]: 2025-05-08 00:13:49.953 [INFO][4056]
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2a6b98cd102ba4cf70484d2eb8ec2709217778cb417e53e8dbee6f7957d56377" Namespace="kube-system" Pod="coredns-6f6b679f8f-6rqx8" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--6rqx8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--6rqx8-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"a3ed82ee-0db3-4b0d-9d13-06ab474af0f9", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 13, 21, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2a6b98cd102ba4cf70484d2eb8ec2709217778cb417e53e8dbee6f7957d56377", Pod:"coredns-6f6b679f8f-6rqx8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali10ec180fb3e", MAC:"2a:d8:41:26:d4:79", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:13:49.969645 containerd[1432]: 2025-05-08 00:13:49.965 [INFO][4056] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2a6b98cd102ba4cf70484d2eb8ec2709217778cb417e53e8dbee6f7957d56377" Namespace="kube-system" Pod="coredns-6f6b679f8f-6rqx8" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--6rqx8-eth0" May 8 00:13:49.988250 containerd[1432]: time="2025-05-08T00:13:49.987864299Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:13:49.988250 containerd[1432]: time="2025-05-08T00:13:49.988221740Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:13:49.988250 containerd[1432]: time="2025-05-08T00:13:49.988232982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:13:49.988866 containerd[1432]: time="2025-05-08T00:13:49.988331073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:13:50.008454 systemd[1]: Started cri-containerd-2a6b98cd102ba4cf70484d2eb8ec2709217778cb417e53e8dbee6f7957d56377.scope - libcontainer container 2a6b98cd102ba4cf70484d2eb8ec2709217778cb417e53e8dbee6f7957d56377.
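Note: the IPAM trace above shows block-affinity assignment: the host "localhost" holds an affinity for the block 192.168.88.128/26, the block's own address is never handed out, and the first claim made under the host-wide lock is 192.168.88.129 (the endpoint is then programmed as a /32 behind the cali10ec180fb3e veth). A toy Go allocator over that block, matching the addresses claimed in this log (Calico's real allocator persists the block in its datastore; this is only a sketch):

    package main

    import (
        "fmt"
        "net/netip"
    )

    // assign walks the affine block, skips the block address itself, and
    // returns the first IP not yet recorded as used.
    func assign(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
        for addr := block.Addr().Next(); block.Contains(addr); addr = addr.Next() {
            if !used[addr] {
                used[addr] = true
                return addr, true
            }
        }
        return netip.Addr{}, false
    }

    func main() {
        block := netip.MustParsePrefix("192.168.88.128/26")
        used := map[netip.Addr]bool{}
        a, _ := assign(block, used)
        b, _ := assign(block, used)
        fmt.Println(a, b) // 192.168.88.129 192.168.88.130, the two claims seen in this log
    }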
May 8 00:13:50.017951 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:13:50.034481 containerd[1432]: time="2025-05-08T00:13:50.034447429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6rqx8,Uid:a3ed82ee-0db3-4b0d-9d13-06ab474af0f9,Namespace:kube-system,Attempt:1,} returns sandbox id \"2a6b98cd102ba4cf70484d2eb8ec2709217778cb417e53e8dbee6f7957d56377\"" May 8 00:13:50.035044 kubelet[2457]: E0508 00:13:50.035014 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:13:50.037105 containerd[1432]: time="2025-05-08T00:13:50.037072162Z" level=info msg="CreateContainer within sandbox \"2a6b98cd102ba4cf70484d2eb8ec2709217778cb417e53e8dbee6f7957d56377\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:13:50.054910 containerd[1432]: time="2025-05-08T00:13:50.054863151Z" level=info msg="CreateContainer within sandbox \"2a6b98cd102ba4cf70484d2eb8ec2709217778cb417e53e8dbee6f7957d56377\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"362e67d3054cce4f0b94a7c5901b54979730c1eab4c88daa7f061def9a4f094a\"" May 8 00:13:50.055309 containerd[1432]: time="2025-05-08T00:13:50.055285718Z" level=info msg="StartContainer for \"362e67d3054cce4f0b94a7c5901b54979730c1eab4c88daa7f061def9a4f094a\"" May 8 00:13:50.084425 systemd[1]: Started cri-containerd-362e67d3054cce4f0b94a7c5901b54979730c1eab4c88daa7f061def9a4f094a.scope - libcontainer container 362e67d3054cce4f0b94a7c5901b54979730c1eab4c88daa7f061def9a4f094a. May 8 00:13:50.113062 containerd[1432]: time="2025-05-08T00:13:50.113015531Z" level=info msg="StartContainer for \"362e67d3054cce4f0b94a7c5901b54979730c1eab4c88daa7f061def9a4f094a\" returns successfully" May 8 00:13:50.596930 kubelet[2457]: E0508 00:13:50.596129 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:13:50.615638 kubelet[2457]: I0508 00:13:50.615192 2457 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-6rqx8" podStartSLOduration=29.615178265 podStartE2EDuration="29.615178265s" podCreationTimestamp="2025-05-08 00:13:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:13:50.614759098 +0000 UTC m=+34.243588004" watchObservedRunningTime="2025-05-08 00:13:50.615178265 +0000 UTC m=+34.244007211" May 8 00:13:50.758436 systemd-networkd[1374]: vxlan.calico: Gained IPv6LL May 8 00:13:51.457375 containerd[1432]: time="2025-05-08T00:13:51.457232860Z" level=info msg="StopPodSandbox for \"28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f\"" May 8 00:13:51.458503 containerd[1432]: time="2025-05-08T00:13:51.457802522Z" level=info msg="StopPodSandbox for \"251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251\"" May 8 00:13:51.546211 containerd[1432]: 2025-05-08 00:13:51.504 [INFO][4232] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f" May 8 00:13:51.546211 containerd[1432]: 2025-05-08 00:13:51.504 [INFO][4232] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f" iface="eth0" netns="/var/run/netns/cni-d595d7a2-9047-0836-d254-d6d1034b1bac" May 8 00:13:51.546211 containerd[1432]: 2025-05-08 00:13:51.505 [INFO][4232] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f" iface="eth0" netns="/var/run/netns/cni-d595d7a2-9047-0836-d254-d6d1034b1bac" May 8 00:13:51.546211 containerd[1432]: 2025-05-08 00:13:51.506 [INFO][4232] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f" iface="eth0" netns="/var/run/netns/cni-d595d7a2-9047-0836-d254-d6d1034b1bac" May 8 00:13:51.546211 containerd[1432]: 2025-05-08 00:13:51.506 [INFO][4232] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f" May 8 00:13:51.546211 containerd[1432]: 2025-05-08 00:13:51.506 [INFO][4232] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f" May 8 00:13:51.546211 containerd[1432]: 2025-05-08 00:13:51.531 [INFO][4247] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f" HandleID="k8s-pod-network.28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f" Workload="localhost-k8s-calico--apiserver--79c5df75c--cmszx-eth0" May 8 00:13:51.546211 containerd[1432]: 2025-05-08 00:13:51.531 [INFO][4247] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:13:51.546211 containerd[1432]: 2025-05-08 00:13:51.531 [INFO][4247] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:13:51.546211 containerd[1432]: 2025-05-08 00:13:51.540 [WARNING][4247] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f" HandleID="k8s-pod-network.28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f" Workload="localhost-k8s-calico--apiserver--79c5df75c--cmszx-eth0" May 8 00:13:51.546211 containerd[1432]: 2025-05-08 00:13:51.540 [INFO][4247] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f" HandleID="k8s-pod-network.28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f" Workload="localhost-k8s-calico--apiserver--79c5df75c--cmszx-eth0" May 8 00:13:51.546211 containerd[1432]: 2025-05-08 00:13:51.542 [INFO][4247] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:13:51.546211 containerd[1432]: 2025-05-08 00:13:51.544 [INFO][4232] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f" May 8 00:13:51.546829 containerd[1432]: time="2025-05-08T00:13:51.546405745Z" level=info msg="TearDown network for sandbox \"28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f\" successfully" May 8 00:13:51.546829 containerd[1432]: time="2025-05-08T00:13:51.546434348Z" level=info msg="StopPodSandbox for \"28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f\" returns successfully" May 8 00:13:51.548853 systemd[1]: run-netns-cni\x2dd595d7a2\x2d9047\x2d0836\x2dd254\x2dd6d1034b1bac.mount: Deactivated successfully. 
May 8 00:13:51.550515 containerd[1432]: time="2025-05-08T00:13:51.549188167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79c5df75c-cmszx,Uid:276ac77e-604a-4871-b9fd-fa9015b47098,Namespace:calico-apiserver,Attempt:1,}" May 8 00:13:51.567976 containerd[1432]: 2025-05-08 00:13:51.520 [INFO][4231] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251" May 8 00:13:51.567976 containerd[1432]: 2025-05-08 00:13:51.520 [INFO][4231] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251" iface="eth0" netns="/var/run/netns/cni-29e166c9-22d9-b1d4-7630-7f42826c8e30" May 8 00:13:51.567976 containerd[1432]: 2025-05-08 00:13:51.520 [INFO][4231] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251" iface="eth0" netns="/var/run/netns/cni-29e166c9-22d9-b1d4-7630-7f42826c8e30" May 8 00:13:51.567976 containerd[1432]: 2025-05-08 00:13:51.522 [INFO][4231] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251" iface="eth0" netns="/var/run/netns/cni-29e166c9-22d9-b1d4-7630-7f42826c8e30" May 8 00:13:51.567976 containerd[1432]: 2025-05-08 00:13:51.522 [INFO][4231] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251" May 8 00:13:51.567976 containerd[1432]: 2025-05-08 00:13:51.522 [INFO][4231] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251" May 8 00:13:51.567976 containerd[1432]: 2025-05-08 00:13:51.550 [INFO][4254] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251" HandleID="k8s-pod-network.251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251" Workload="localhost-k8s-coredns--6f6b679f8f--dpj96-eth0" May 8 00:13:51.567976 containerd[1432]: 2025-05-08 00:13:51.550 [INFO][4254] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:13:51.567976 containerd[1432]: 2025-05-08 00:13:51.550 [INFO][4254] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:13:51.567976 containerd[1432]: 2025-05-08 00:13:51.561 [WARNING][4254] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251" HandleID="k8s-pod-network.251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251" Workload="localhost-k8s-coredns--6f6b679f8f--dpj96-eth0" May 8 00:13:51.567976 containerd[1432]: 2025-05-08 00:13:51.561 [INFO][4254] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251" HandleID="k8s-pod-network.251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251" Workload="localhost-k8s-coredns--6f6b679f8f--dpj96-eth0" May 8 00:13:51.567976 containerd[1432]: 2025-05-08 00:13:51.562 [INFO][4254] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:13:51.567976 containerd[1432]: 2025-05-08 00:13:51.566 [INFO][4231] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251" May 8 00:13:51.569163 containerd[1432]: time="2025-05-08T00:13:51.568176549Z" level=info msg="TearDown network for sandbox \"251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251\" successfully" May 8 00:13:51.569163 containerd[1432]: time="2025-05-08T00:13:51.568200952Z" level=info msg="StopPodSandbox for \"251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251\" returns successfully" May 8 00:13:51.569210 kubelet[2457]: E0508 00:13:51.568534 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:13:51.571252 containerd[1432]: time="2025-05-08T00:13:51.569884535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-dpj96,Uid:ba17bc01-d9db-4f11-a32c-d317dc8f04b0,Namespace:kube-system,Attempt:1,}" May 8 00:13:51.571847 systemd[1]: run-netns-cni\x2d29e166c9\x2d22d9\x2db1d4\x2d7630\x2d7f42826c8e30.mount: Deactivated successfully. May 8 00:13:51.599326 kubelet[2457]: E0508 00:13:51.597657 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:13:51.693804 systemd-networkd[1374]: cali760735a3beb: Link UP May 8 00:13:51.694345 systemd-networkd[1374]: cali760735a3beb: Gained carrier May 8 00:13:51.708232 containerd[1432]: 2025-05-08 00:13:51.617 [INFO][4264] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--79c5df75c--cmszx-eth0 calico-apiserver-79c5df75c- calico-apiserver 276ac77e-604a-4871-b9fd-fa9015b47098 881 0 2025-05-08 00:13:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:79c5df75c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-79c5df75c-cmszx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali760735a3beb [] []}} ContainerID="0c30d7cdad85384cf5d69f9b8eeaad3db2c1b62aac90de3458d47380a0524bcc" Namespace="calico-apiserver" Pod="calico-apiserver-79c5df75c-cmszx" WorkloadEndpoint="localhost-k8s-calico--apiserver--79c5df75c--cmszx-" May 8 00:13:51.708232 containerd[1432]: 2025-05-08 00:13:51.617 [INFO][4264] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0c30d7cdad85384cf5d69f9b8eeaad3db2c1b62aac90de3458d47380a0524bcc" Namespace="calico-apiserver" Pod="calico-apiserver-79c5df75c-cmszx" WorkloadEndpoint="localhost-k8s-calico--apiserver--79c5df75c--cmszx-eth0" May 8 00:13:51.708232 containerd[1432]: 2025-05-08 00:13:51.649 [INFO][4291] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0c30d7cdad85384cf5d69f9b8eeaad3db2c1b62aac90de3458d47380a0524bcc" HandleID="k8s-pod-network.0c30d7cdad85384cf5d69f9b8eeaad3db2c1b62aac90de3458d47380a0524bcc" Workload="localhost-k8s-calico--apiserver--79c5df75c--cmszx-eth0" May 8 00:13:51.708232 containerd[1432]: 2025-05-08 00:13:51.660 [INFO][4291] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0c30d7cdad85384cf5d69f9b8eeaad3db2c1b62aac90de3458d47380a0524bcc" HandleID="k8s-pod-network.0c30d7cdad85384cf5d69f9b8eeaad3db2c1b62aac90de3458d47380a0524bcc" 
Workload="localhost-k8s-calico--apiserver--79c5df75c--cmszx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f38e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-79c5df75c-cmszx", "timestamp":"2025-05-08 00:13:51.649192348 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:13:51.708232 containerd[1432]: 2025-05-08 00:13:51.660 [INFO][4291] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:13:51.708232 containerd[1432]: 2025-05-08 00:13:51.660 [INFO][4291] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:13:51.708232 containerd[1432]: 2025-05-08 00:13:51.660 [INFO][4291] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:13:51.708232 containerd[1432]: 2025-05-08 00:13:51.662 [INFO][4291] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0c30d7cdad85384cf5d69f9b8eeaad3db2c1b62aac90de3458d47380a0524bcc" host="localhost" May 8 00:13:51.708232 containerd[1432]: 2025-05-08 00:13:51.666 [INFO][4291] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:13:51.708232 containerd[1432]: 2025-05-08 00:13:51.673 [INFO][4291] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:13:51.708232 containerd[1432]: 2025-05-08 00:13:51.675 [INFO][4291] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:13:51.708232 containerd[1432]: 2025-05-08 00:13:51.677 [INFO][4291] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:13:51.708232 containerd[1432]: 2025-05-08 00:13:51.677 [INFO][4291] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0c30d7cdad85384cf5d69f9b8eeaad3db2c1b62aac90de3458d47380a0524bcc" host="localhost" May 8 00:13:51.708232 containerd[1432]: 2025-05-08 00:13:51.678 [INFO][4291] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0c30d7cdad85384cf5d69f9b8eeaad3db2c1b62aac90de3458d47380a0524bcc May 8 00:13:51.708232 containerd[1432]: 2025-05-08 00:13:51.684 [INFO][4291] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0c30d7cdad85384cf5d69f9b8eeaad3db2c1b62aac90de3458d47380a0524bcc" host="localhost" May 8 00:13:51.708232 containerd[1432]: 2025-05-08 00:13:51.689 [INFO][4291] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.0c30d7cdad85384cf5d69f9b8eeaad3db2c1b62aac90de3458d47380a0524bcc" host="localhost" May 8 00:13:51.708232 containerd[1432]: 2025-05-08 00:13:51.689 [INFO][4291] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.0c30d7cdad85384cf5d69f9b8eeaad3db2c1b62aac90de3458d47380a0524bcc" host="localhost" May 8 00:13:51.708232 containerd[1432]: 2025-05-08 00:13:51.689 [INFO][4291] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 8 00:13:51.708232 containerd[1432]: 2025-05-08 00:13:51.689 [INFO][4291] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="0c30d7cdad85384cf5d69f9b8eeaad3db2c1b62aac90de3458d47380a0524bcc" HandleID="k8s-pod-network.0c30d7cdad85384cf5d69f9b8eeaad3db2c1b62aac90de3458d47380a0524bcc" Workload="localhost-k8s-calico--apiserver--79c5df75c--cmszx-eth0" May 8 00:13:51.708779 containerd[1432]: 2025-05-08 00:13:51.691 [INFO][4264] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0c30d7cdad85384cf5d69f9b8eeaad3db2c1b62aac90de3458d47380a0524bcc" Namespace="calico-apiserver" Pod="calico-apiserver-79c5df75c-cmszx" WorkloadEndpoint="localhost-k8s-calico--apiserver--79c5df75c--cmszx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79c5df75c--cmszx-eth0", GenerateName:"calico-apiserver-79c5df75c-", Namespace:"calico-apiserver", SelfLink:"", UID:"276ac77e-604a-4871-b9fd-fa9015b47098", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 13, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79c5df75c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-79c5df75c-cmszx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali760735a3beb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:13:51.708779 containerd[1432]: 2025-05-08 00:13:51.692 [INFO][4264] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="0c30d7cdad85384cf5d69f9b8eeaad3db2c1b62aac90de3458d47380a0524bcc" Namespace="calico-apiserver" Pod="calico-apiserver-79c5df75c-cmszx" WorkloadEndpoint="localhost-k8s-calico--apiserver--79c5df75c--cmszx-eth0" May 8 00:13:51.708779 containerd[1432]: 2025-05-08 00:13:51.692 [INFO][4264] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali760735a3beb ContainerID="0c30d7cdad85384cf5d69f9b8eeaad3db2c1b62aac90de3458d47380a0524bcc" Namespace="calico-apiserver" Pod="calico-apiserver-79c5df75c-cmszx" WorkloadEndpoint="localhost-k8s-calico--apiserver--79c5df75c--cmszx-eth0" May 8 00:13:51.708779 containerd[1432]: 2025-05-08 00:13:51.694 [INFO][4264] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0c30d7cdad85384cf5d69f9b8eeaad3db2c1b62aac90de3458d47380a0524bcc" Namespace="calico-apiserver" Pod="calico-apiserver-79c5df75c-cmszx" WorkloadEndpoint="localhost-k8s-calico--apiserver--79c5df75c--cmszx-eth0" May 8 00:13:51.708779 containerd[1432]: 2025-05-08 00:13:51.694 [INFO][4264] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0c30d7cdad85384cf5d69f9b8eeaad3db2c1b62aac90de3458d47380a0524bcc" 
Namespace="calico-apiserver" Pod="calico-apiserver-79c5df75c-cmszx" WorkloadEndpoint="localhost-k8s-calico--apiserver--79c5df75c--cmszx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79c5df75c--cmszx-eth0", GenerateName:"calico-apiserver-79c5df75c-", Namespace:"calico-apiserver", SelfLink:"", UID:"276ac77e-604a-4871-b9fd-fa9015b47098", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 13, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79c5df75c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0c30d7cdad85384cf5d69f9b8eeaad3db2c1b62aac90de3458d47380a0524bcc", Pod:"calico-apiserver-79c5df75c-cmszx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali760735a3beb", MAC:"1a:d1:29:a5:c6:5a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:13:51.708779 containerd[1432]: 2025-05-08 00:13:51.706 [INFO][4264] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0c30d7cdad85384cf5d69f9b8eeaad3db2c1b62aac90de3458d47380a0524bcc" Namespace="calico-apiserver" Pod="calico-apiserver-79c5df75c-cmszx" WorkloadEndpoint="localhost-k8s-calico--apiserver--79c5df75c--cmszx-eth0" May 8 00:13:51.725987 containerd[1432]: time="2025-05-08T00:13:51.725897678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:13:51.725987 containerd[1432]: time="2025-05-08T00:13:51.725947684Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:13:51.725987 containerd[1432]: time="2025-05-08T00:13:51.725958765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:13:51.726158 containerd[1432]: time="2025-05-08T00:13:51.726028972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:13:51.751567 systemd[1]: Started cri-containerd-0c30d7cdad85384cf5d69f9b8eeaad3db2c1b62aac90de3458d47380a0524bcc.scope - libcontainer container 0c30d7cdad85384cf5d69f9b8eeaad3db2c1b62aac90de3458d47380a0524bcc. 
May 8 00:13:51.764526 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:13:51.789345 containerd[1432]: time="2025-05-08T00:13:51.789305244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79c5df75c-cmszx,Uid:276ac77e-604a-4871-b9fd-fa9015b47098,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"0c30d7cdad85384cf5d69f9b8eeaad3db2c1b62aac90de3458d47380a0524bcc\"" May 8 00:13:51.790877 containerd[1432]: time="2025-05-08T00:13:51.790737800Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 8 00:13:51.799830 systemd-networkd[1374]: calieb5ee9e837b: Link UP May 8 00:13:51.800479 systemd-networkd[1374]: calieb5ee9e837b: Gained carrier May 8 00:13:51.811999 containerd[1432]: 2025-05-08 00:13:51.626 [INFO][4274] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--dpj96-eth0 coredns-6f6b679f8f- kube-system ba17bc01-d9db-4f11-a32c-d317dc8f04b0 883 0 2025-05-08 00:13:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-dpj96 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calieb5ee9e837b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="8ba618773db17effb4a7768cd189fd25a0d69cc54344571ffa64e07eb959793c" Namespace="kube-system" Pod="coredns-6f6b679f8f-dpj96" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--dpj96-" May 8 00:13:51.811999 containerd[1432]: 2025-05-08 00:13:51.626 [INFO][4274] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8ba618773db17effb4a7768cd189fd25a0d69cc54344571ffa64e07eb959793c" Namespace="kube-system" Pod="coredns-6f6b679f8f-dpj96" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--dpj96-eth0" May 8 00:13:51.811999 containerd[1432]: 2025-05-08 00:13:51.657 [INFO][4297] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8ba618773db17effb4a7768cd189fd25a0d69cc54344571ffa64e07eb959793c" HandleID="k8s-pod-network.8ba618773db17effb4a7768cd189fd25a0d69cc54344571ffa64e07eb959793c" Workload="localhost-k8s-coredns--6f6b679f8f--dpj96-eth0" May 8 00:13:51.811999 containerd[1432]: 2025-05-08 00:13:51.672 [INFO][4297] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8ba618773db17effb4a7768cd189fd25a0d69cc54344571ffa64e07eb959793c" HandleID="k8s-pod-network.8ba618773db17effb4a7768cd189fd25a0d69cc54344571ffa64e07eb959793c" Workload="localhost-k8s-coredns--6f6b679f8f--dpj96-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003e1c40), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-dpj96", "timestamp":"2025-05-08 00:13:51.656910626 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:13:51.811999 containerd[1432]: 2025-05-08 00:13:51.672 [INFO][4297] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:13:51.811999 containerd[1432]: 2025-05-08 00:13:51.689 [INFO][4297] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:13:51.811999 containerd[1432]: 2025-05-08 00:13:51.689 [INFO][4297] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:13:51.811999 containerd[1432]: 2025-05-08 00:13:51.763 [INFO][4297] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8ba618773db17effb4a7768cd189fd25a0d69cc54344571ffa64e07eb959793c" host="localhost" May 8 00:13:51.811999 containerd[1432]: 2025-05-08 00:13:51.769 [INFO][4297] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:13:51.811999 containerd[1432]: 2025-05-08 00:13:51.777 [INFO][4297] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:13:51.811999 containerd[1432]: 2025-05-08 00:13:51.779 [INFO][4297] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:13:51.811999 containerd[1432]: 2025-05-08 00:13:51.782 [INFO][4297] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:13:51.811999 containerd[1432]: 2025-05-08 00:13:51.782 [INFO][4297] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8ba618773db17effb4a7768cd189fd25a0d69cc54344571ffa64e07eb959793c" host="localhost" May 8 00:13:51.811999 containerd[1432]: 2025-05-08 00:13:51.785 [INFO][4297] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8ba618773db17effb4a7768cd189fd25a0d69cc54344571ffa64e07eb959793c May 8 00:13:51.811999 containerd[1432]: 2025-05-08 00:13:51.788 [INFO][4297] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8ba618773db17effb4a7768cd189fd25a0d69cc54344571ffa64e07eb959793c" host="localhost" May 8 00:13:51.811999 containerd[1432]: 2025-05-08 00:13:51.795 [INFO][4297] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.8ba618773db17effb4a7768cd189fd25a0d69cc54344571ffa64e07eb959793c" host="localhost" May 8 00:13:51.811999 containerd[1432]: 2025-05-08 00:13:51.795 [INFO][4297] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.8ba618773db17effb4a7768cd189fd25a0d69cc54344571ffa64e07eb959793c" host="localhost" May 8 00:13:51.811999 containerd[1432]: 2025-05-08 00:13:51.795 [INFO][4297] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 8 00:13:51.811999 containerd[1432]: 2025-05-08 00:13:51.795 [INFO][4297] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="8ba618773db17effb4a7768cd189fd25a0d69cc54344571ffa64e07eb959793c" HandleID="k8s-pod-network.8ba618773db17effb4a7768cd189fd25a0d69cc54344571ffa64e07eb959793c" Workload="localhost-k8s-coredns--6f6b679f8f--dpj96-eth0" May 8 00:13:51.812592 containerd[1432]: 2025-05-08 00:13:51.797 [INFO][4274] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8ba618773db17effb4a7768cd189fd25a0d69cc54344571ffa64e07eb959793c" Namespace="kube-system" Pod="coredns-6f6b679f8f-dpj96" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--dpj96-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--dpj96-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"ba17bc01-d9db-4f11-a32c-d317dc8f04b0", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 13, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-dpj96", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieb5ee9e837b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:13:51.812592 containerd[1432]: 2025-05-08 00:13:51.798 [INFO][4274] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="8ba618773db17effb4a7768cd189fd25a0d69cc54344571ffa64e07eb959793c" Namespace="kube-system" Pod="coredns-6f6b679f8f-dpj96" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--dpj96-eth0" May 8 00:13:51.812592 containerd[1432]: 2025-05-08 00:13:51.798 [INFO][4274] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieb5ee9e837b ContainerID="8ba618773db17effb4a7768cd189fd25a0d69cc54344571ffa64e07eb959793c" Namespace="kube-system" Pod="coredns-6f6b679f8f-dpj96" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--dpj96-eth0" May 8 00:13:51.812592 containerd[1432]: 2025-05-08 00:13:51.800 [INFO][4274] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8ba618773db17effb4a7768cd189fd25a0d69cc54344571ffa64e07eb959793c" Namespace="kube-system" Pod="coredns-6f6b679f8f-dpj96" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--dpj96-eth0" May 8 00:13:51.812592 containerd[1432]: 2025-05-08 00:13:51.801 [INFO][4274] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8ba618773db17effb4a7768cd189fd25a0d69cc54344571ffa64e07eb959793c" Namespace="kube-system" Pod="coredns-6f6b679f8f-dpj96" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--dpj96-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--dpj96-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"ba17bc01-d9db-4f11-a32c-d317dc8f04b0", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 13, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8ba618773db17effb4a7768cd189fd25a0d69cc54344571ffa64e07eb959793c", Pod:"coredns-6f6b679f8f-dpj96", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieb5ee9e837b", MAC:"f6:51:02:00:aa:87", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:13:51.812592 containerd[1432]: 2025-05-08 00:13:51.809 [INFO][4274] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8ba618773db17effb4a7768cd189fd25a0d69cc54344571ffa64e07eb959793c" Namespace="kube-system" Pod="coredns-6f6b679f8f-dpj96" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--dpj96-eth0" May 8 00:13:51.829311 containerd[1432]: time="2025-05-08T00:13:51.829185736Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:13:51.829311 containerd[1432]: time="2025-05-08T00:13:51.829253343Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:13:51.829311 containerd[1432]: time="2025-05-08T00:13:51.829263744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:13:51.829560 containerd[1432]: time="2025-05-08T00:13:51.829362995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:13:51.846431 systemd-networkd[1374]: cali10ec180fb3e: Gained IPv6LL May 8 00:13:51.854949 systemd[1]: Started cri-containerd-8ba618773db17effb4a7768cd189fd25a0d69cc54344571ffa64e07eb959793c.scope - libcontainer container 8ba618773db17effb4a7768cd189fd25a0d69cc54344571ffa64e07eb959793c. 
May 8 00:13:51.856756 systemd[1]: Started sshd@8-10.0.0.14:22-10.0.0.1:38350.service - OpenSSH per-connection server daemon (10.0.0.1:38350). May 8 00:13:51.870341 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:13:51.888733 containerd[1432]: time="2025-05-08T00:13:51.888690278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-dpj96,Uid:ba17bc01-d9db-4f11-a32c-d317dc8f04b0,Namespace:kube-system,Attempt:1,} returns sandbox id \"8ba618773db17effb4a7768cd189fd25a0d69cc54344571ffa64e07eb959793c\"" May 8 00:13:51.889870 kubelet[2457]: E0508 00:13:51.889729 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:13:51.892813 containerd[1432]: time="2025-05-08T00:13:51.892673471Z" level=info msg="CreateContainer within sandbox \"8ba618773db17effb4a7768cd189fd25a0d69cc54344571ffa64e07eb959793c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:13:51.907100 sshd[4404]: Accepted publickey for core from 10.0.0.1 port 38350 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:13:51.907267 sshd[4404]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:13:51.912041 systemd-logind[1416]: New session 9 of user core. May 8 00:13:51.914839 containerd[1432]: time="2025-05-08T00:13:51.914442675Z" level=info msg="CreateContainer within sandbox \"8ba618773db17effb4a7768cd189fd25a0d69cc54344571ffa64e07eb959793c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"320fa4f29423ba0725418f8e7eefb6422346068c3173c5527f0a2cf3704d9d47\"" May 8 00:13:51.915322 containerd[1432]: time="2025-05-08T00:13:51.915265084Z" level=info msg="StartContainer for \"320fa4f29423ba0725418f8e7eefb6422346068c3173c5527f0a2cf3704d9d47\"" May 8 00:13:51.919427 systemd[1]: Started session-9.scope - Session 9 of User core. May 8 00:13:51.944446 systemd[1]: Started cri-containerd-320fa4f29423ba0725418f8e7eefb6422346068c3173c5527f0a2cf3704d9d47.scope - libcontainer container 320fa4f29423ba0725418f8e7eefb6422346068c3173c5527f0a2cf3704d9d47. May 8 00:13:51.966982 containerd[1432]: time="2025-05-08T00:13:51.966297427Z" level=info msg="StartContainer for \"320fa4f29423ba0725418f8e7eefb6422346068c3173c5527f0a2cf3704d9d47\" returns successfully" May 8 00:13:52.131811 sshd[4404]: pam_unix(sshd:session): session closed for user core May 8 00:13:52.136029 systemd[1]: sshd@8-10.0.0.14:22-10.0.0.1:38350.service: Deactivated successfully. May 8 00:13:52.137713 systemd[1]: session-9.scope: Deactivated successfully. May 8 00:13:52.138339 systemd-logind[1416]: Session 9 logged out. Waiting for processes to exit. May 8 00:13:52.139773 systemd-logind[1416]: Removed session 9. 
May 8 00:13:52.459221 containerd[1432]: time="2025-05-08T00:13:52.458880118Z" level=info msg="StopPodSandbox for \"4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8\"" May 8 00:13:52.459221 containerd[1432]: time="2025-05-08T00:13:52.458878438Z" level=info msg="StopPodSandbox for \"89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da\"" May 8 00:13:52.459221 containerd[1432]: time="2025-05-08T00:13:52.458882879Z" level=info msg="StopPodSandbox for \"2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40\"" May 8 00:13:52.581018 containerd[1432]: 2025-05-08 00:13:52.520 [INFO][4520] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da" May 8 00:13:52.581018 containerd[1432]: 2025-05-08 00:13:52.520 [INFO][4520] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da" iface="eth0" netns="/var/run/netns/cni-b76f5de8-83e0-35d3-746f-360d7bd1c9e7" May 8 00:13:52.581018 containerd[1432]: 2025-05-08 00:13:52.521 [INFO][4520] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da" iface="eth0" netns="/var/run/netns/cni-b76f5de8-83e0-35d3-746f-360d7bd1c9e7" May 8 00:13:52.581018 containerd[1432]: 2025-05-08 00:13:52.524 [INFO][4520] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da" iface="eth0" netns="/var/run/netns/cni-b76f5de8-83e0-35d3-746f-360d7bd1c9e7" May 8 00:13:52.581018 containerd[1432]: 2025-05-08 00:13:52.524 [INFO][4520] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da" May 8 00:13:52.581018 containerd[1432]: 2025-05-08 00:13:52.524 [INFO][4520] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da" May 8 00:13:52.581018 containerd[1432]: 2025-05-08 00:13:52.558 [INFO][4550] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da" HandleID="k8s-pod-network.89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da" Workload="localhost-k8s-calico--kube--controllers--749dfb98f--2zbqs-eth0" May 8 00:13:52.581018 containerd[1432]: 2025-05-08 00:13:52.558 [INFO][4550] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:13:52.581018 containerd[1432]: 2025-05-08 00:13:52.558 [INFO][4550] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:13:52.581018 containerd[1432]: 2025-05-08 00:13:52.570 [WARNING][4550] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da" HandleID="k8s-pod-network.89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da" Workload="localhost-k8s-calico--kube--controllers--749dfb98f--2zbqs-eth0" May 8 00:13:52.581018 containerd[1432]: 2025-05-08 00:13:52.570 [INFO][4550] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da" HandleID="k8s-pod-network.89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da" Workload="localhost-k8s-calico--kube--controllers--749dfb98f--2zbqs-eth0" May 8 00:13:52.581018 containerd[1432]: 2025-05-08 00:13:52.572 [INFO][4550] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:13:52.581018 containerd[1432]: 2025-05-08 00:13:52.575 [INFO][4520] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da" May 8 00:13:52.581742 containerd[1432]: time="2025-05-08T00:13:52.581709092Z" level=info msg="TearDown network for sandbox \"89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da\" successfully" May 8 00:13:52.581802 containerd[1432]: time="2025-05-08T00:13:52.581789420Z" level=info msg="StopPodSandbox for \"89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da\" returns successfully" May 8 00:13:52.582971 containerd[1432]: time="2025-05-08T00:13:52.582944342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-749dfb98f-2zbqs,Uid:cdbc0319-9800-491a-bfc5-f62d3ecc390b,Namespace:calico-system,Attempt:1,}" May 8 00:13:52.594318 containerd[1432]: 2025-05-08 00:13:52.534 [INFO][4522] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40" May 8 00:13:52.594318 containerd[1432]: 2025-05-08 00:13:52.535 [INFO][4522] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40" iface="eth0" netns="/var/run/netns/cni-69597545-77df-e4b7-a268-41c09400e682" May 8 00:13:52.594318 containerd[1432]: 2025-05-08 00:13:52.535 [INFO][4522] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40" iface="eth0" netns="/var/run/netns/cni-69597545-77df-e4b7-a268-41c09400e682" May 8 00:13:52.594318 containerd[1432]: 2025-05-08 00:13:52.535 [INFO][4522] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40" iface="eth0" netns="/var/run/netns/cni-69597545-77df-e4b7-a268-41c09400e682" May 8 00:13:52.594318 containerd[1432]: 2025-05-08 00:13:52.535 [INFO][4522] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40" May 8 00:13:52.594318 containerd[1432]: 2025-05-08 00:13:52.535 [INFO][4522] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40" May 8 00:13:52.594318 containerd[1432]: 2025-05-08 00:13:52.565 [INFO][4558] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40" HandleID="k8s-pod-network.2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40" Workload="localhost-k8s-csi--node--driver--w679k-eth0" May 8 00:13:52.594318 containerd[1432]: 2025-05-08 00:13:52.565 [INFO][4558] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:13:52.594318 containerd[1432]: 2025-05-08 00:13:52.572 [INFO][4558] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:13:52.594318 containerd[1432]: 2025-05-08 00:13:52.584 [WARNING][4558] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40" HandleID="k8s-pod-network.2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40" Workload="localhost-k8s-csi--node--driver--w679k-eth0" May 8 00:13:52.594318 containerd[1432]: 2025-05-08 00:13:52.584 [INFO][4558] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40" HandleID="k8s-pod-network.2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40" Workload="localhost-k8s-csi--node--driver--w679k-eth0" May 8 00:13:52.594318 containerd[1432]: 2025-05-08 00:13:52.586 [INFO][4558] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:13:52.594318 containerd[1432]: 2025-05-08 00:13:52.591 [INFO][4522] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40" May 8 00:13:52.595364 containerd[1432]: time="2025-05-08T00:13:52.594430436Z" level=info msg="TearDown network for sandbox \"2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40\" successfully" May 8 00:13:52.595364 containerd[1432]: time="2025-05-08T00:13:52.594449198Z" level=info msg="StopPodSandbox for \"2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40\" returns successfully" May 8 00:13:52.595639 containerd[1432]: time="2025-05-08T00:13:52.595618121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w679k,Uid:afea4b03-2e4e-494b-bfd2-bbc94939e0ab,Namespace:calico-system,Attempt:1,}" May 8 00:13:52.608528 kubelet[2457]: E0508 00:13:52.608468 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:13:52.608832 kubelet[2457]: E0508 00:13:52.608476 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:13:52.628415 containerd[1432]: 2025-05-08 00:13:52.516 [INFO][4521] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8" May 8 00:13:52.628415 containerd[1432]: 2025-05-08 00:13:52.517 [INFO][4521] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8" iface="eth0" netns="/var/run/netns/cni-b3f3799c-d671-eee3-4564-1f8fce45df7f" May 8 00:13:52.628415 containerd[1432]: 2025-05-08 00:13:52.517 [INFO][4521] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8" iface="eth0" netns="/var/run/netns/cni-b3f3799c-d671-eee3-4564-1f8fce45df7f" May 8 00:13:52.628415 containerd[1432]: 2025-05-08 00:13:52.517 [INFO][4521] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8" iface="eth0" netns="/var/run/netns/cni-b3f3799c-d671-eee3-4564-1f8fce45df7f" May 8 00:13:52.628415 containerd[1432]: 2025-05-08 00:13:52.517 [INFO][4521] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8" May 8 00:13:52.628415 containerd[1432]: 2025-05-08 00:13:52.517 [INFO][4521] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8" May 8 00:13:52.628415 containerd[1432]: 2025-05-08 00:13:52.573 [INFO][4544] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8" HandleID="k8s-pod-network.4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8" Workload="localhost-k8s-calico--apiserver--79c5df75c--scvb7-eth0" May 8 00:13:52.628415 containerd[1432]: 2025-05-08 00:13:52.574 [INFO][4544] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:13:52.628415 containerd[1432]: 2025-05-08 00:13:52.586 [INFO][4544] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:13:52.628415 containerd[1432]: 2025-05-08 00:13:52.614 [WARNING][4544] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8" HandleID="k8s-pod-network.4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8" Workload="localhost-k8s-calico--apiserver--79c5df75c--scvb7-eth0" May 8 00:13:52.628415 containerd[1432]: 2025-05-08 00:13:52.614 [INFO][4544] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8" HandleID="k8s-pod-network.4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8" Workload="localhost-k8s-calico--apiserver--79c5df75c--scvb7-eth0" May 8 00:13:52.628415 containerd[1432]: 2025-05-08 00:13:52.620 [INFO][4544] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:13:52.628415 containerd[1432]: 2025-05-08 00:13:52.625 [INFO][4521] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8" May 8 00:13:52.628788 containerd[1432]: time="2025-05-08T00:13:52.628708656Z" level=info msg="TearDown network for sandbox \"4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8\" successfully" May 8 00:13:52.628788 containerd[1432]: time="2025-05-08T00:13:52.628735579Z" level=info msg="StopPodSandbox for \"4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8\" returns successfully" May 8 00:13:52.629730 containerd[1432]: time="2025-05-08T00:13:52.629615472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79c5df75c-scvb7,Uid:12b9a996-dd18-4e57-9070-f80361b7270b,Namespace:calico-apiserver,Attempt:1,}" May 8 00:13:52.735930 systemd[1]: run-netns-cni\x2d69597545\x2d77df\x2de4b7\x2da268\x2d41c09400e682.mount: Deactivated successfully. May 8 00:13:52.736436 systemd[1]: run-netns-cni\x2db3f3799c\x2dd671\x2deee3\x2d4564\x2d1f8fce45df7f.mount: Deactivated successfully. May 8 00:13:52.736489 systemd[1]: run-netns-cni\x2db76f5de8\x2d83e0\x2d35d3\x2d746f\x2d360d7bd1c9e7.mount: Deactivated successfully. 
May 8 00:13:52.785006 systemd-networkd[1374]: cali9ee13cbed72: Link UP May 8 00:13:52.785844 systemd-networkd[1374]: cali9ee13cbed72: Gained carrier May 8 00:13:52.797113 kubelet[2457]: I0508 00:13:52.796028 2457 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-dpj96" podStartSLOduration=31.796008447 podStartE2EDuration="31.796008447s" podCreationTimestamp="2025-05-08 00:13:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:13:52.622729185 +0000 UTC m=+36.251558131" watchObservedRunningTime="2025-05-08 00:13:52.796008447 +0000 UTC m=+36.424837393" May 8 00:13:52.799077 containerd[1432]: 2025-05-08 00:13:52.682 [INFO][4583] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--w679k-eth0 csi-node-driver- calico-system afea4b03-2e4e-494b-bfd2-bbc94939e0ab 904 0 2025-05-08 00:13:28 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5bcd8f69 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-w679k eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali9ee13cbed72 [] []}} ContainerID="7117e6564594421724f0daf6891d0b6778ba3daf3e5dada5249fddb740e1d760" Namespace="calico-system" Pod="csi-node-driver-w679k" WorkloadEndpoint="localhost-k8s-csi--node--driver--w679k-" May 8 00:13:52.799077 containerd[1432]: 2025-05-08 00:13:52.682 [INFO][4583] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7117e6564594421724f0daf6891d0b6778ba3daf3e5dada5249fddb740e1d760" Namespace="calico-system" Pod="csi-node-driver-w679k" WorkloadEndpoint="localhost-k8s-csi--node--driver--w679k-eth0" May 8 00:13:52.799077 containerd[1432]: 2025-05-08 00:13:52.715 [INFO][4621] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7117e6564594421724f0daf6891d0b6778ba3daf3e5dada5249fddb740e1d760" HandleID="k8s-pod-network.7117e6564594421724f0daf6891d0b6778ba3daf3e5dada5249fddb740e1d760" Workload="localhost-k8s-csi--node--driver--w679k-eth0" May 8 00:13:52.799077 containerd[1432]: 2025-05-08 00:13:52.741 [INFO][4621] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7117e6564594421724f0daf6891d0b6778ba3daf3e5dada5249fddb740e1d760" HandleID="k8s-pod-network.7117e6564594421724f0daf6891d0b6778ba3daf3e5dada5249fddb740e1d760" Workload="localhost-k8s-csi--node--driver--w679k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000373320), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-w679k", "timestamp":"2025-05-08 00:13:52.715606794 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:13:52.799077 containerd[1432]: 2025-05-08 00:13:52.742 [INFO][4621] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:13:52.799077 containerd[1432]: 2025-05-08 00:13:52.742 [INFO][4621] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:13:52.799077 containerd[1432]: 2025-05-08 00:13:52.742 [INFO][4621] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:13:52.799077 containerd[1432]: 2025-05-08 00:13:52.747 [INFO][4621] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7117e6564594421724f0daf6891d0b6778ba3daf3e5dada5249fddb740e1d760" host="localhost" May 8 00:13:52.799077 containerd[1432]: 2025-05-08 00:13:52.752 [INFO][4621] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:13:52.799077 containerd[1432]: 2025-05-08 00:13:52.758 [INFO][4621] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:13:52.799077 containerd[1432]: 2025-05-08 00:13:52.761 [INFO][4621] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:13:52.799077 containerd[1432]: 2025-05-08 00:13:52.765 [INFO][4621] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:13:52.799077 containerd[1432]: 2025-05-08 00:13:52.765 [INFO][4621] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7117e6564594421724f0daf6891d0b6778ba3daf3e5dada5249fddb740e1d760" host="localhost" May 8 00:13:52.799077 containerd[1432]: 2025-05-08 00:13:52.767 [INFO][4621] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7117e6564594421724f0daf6891d0b6778ba3daf3e5dada5249fddb740e1d760 May 8 00:13:52.799077 containerd[1432]: 2025-05-08 00:13:52.771 [INFO][4621] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7117e6564594421724f0daf6891d0b6778ba3daf3e5dada5249fddb740e1d760" host="localhost" May 8 00:13:52.799077 containerd[1432]: 2025-05-08 00:13:52.778 [INFO][4621] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.7117e6564594421724f0daf6891d0b6778ba3daf3e5dada5249fddb740e1d760" host="localhost" May 8 00:13:52.799077 containerd[1432]: 2025-05-08 00:13:52.778 [INFO][4621] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.7117e6564594421724f0daf6891d0b6778ba3daf3e5dada5249fddb740e1d760" host="localhost" May 8 00:13:52.799077 containerd[1432]: 2025-05-08 00:13:52.778 [INFO][4621] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 8 00:13:52.799077 containerd[1432]: 2025-05-08 00:13:52.778 [INFO][4621] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="7117e6564594421724f0daf6891d0b6778ba3daf3e5dada5249fddb740e1d760" HandleID="k8s-pod-network.7117e6564594421724f0daf6891d0b6778ba3daf3e5dada5249fddb740e1d760" Workload="localhost-k8s-csi--node--driver--w679k-eth0" May 8 00:13:52.799606 containerd[1432]: 2025-05-08 00:13:52.781 [INFO][4583] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7117e6564594421724f0daf6891d0b6778ba3daf3e5dada5249fddb740e1d760" Namespace="calico-system" Pod="csi-node-driver-w679k" WorkloadEndpoint="localhost-k8s-csi--node--driver--w679k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--w679k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"afea4b03-2e4e-494b-bfd2-bbc94939e0ab", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 13, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-w679k", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9ee13cbed72", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:13:52.799606 containerd[1432]: 2025-05-08 00:13:52.781 [INFO][4583] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="7117e6564594421724f0daf6891d0b6778ba3daf3e5dada5249fddb740e1d760" Namespace="calico-system" Pod="csi-node-driver-w679k" WorkloadEndpoint="localhost-k8s-csi--node--driver--w679k-eth0" May 8 00:13:52.799606 containerd[1432]: 2025-05-08 00:13:52.781 [INFO][4583] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9ee13cbed72 ContainerID="7117e6564594421724f0daf6891d0b6778ba3daf3e5dada5249fddb740e1d760" Namespace="calico-system" Pod="csi-node-driver-w679k" WorkloadEndpoint="localhost-k8s-csi--node--driver--w679k-eth0" May 8 00:13:52.799606 containerd[1432]: 2025-05-08 00:13:52.786 [INFO][4583] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7117e6564594421724f0daf6891d0b6778ba3daf3e5dada5249fddb740e1d760" Namespace="calico-system" Pod="csi-node-driver-w679k" WorkloadEndpoint="localhost-k8s-csi--node--driver--w679k-eth0" May 8 00:13:52.799606 containerd[1432]: 2025-05-08 00:13:52.786 [INFO][4583] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7117e6564594421724f0daf6891d0b6778ba3daf3e5dada5249fddb740e1d760" Namespace="calico-system" Pod="csi-node-driver-w679k" WorkloadEndpoint="localhost-k8s-csi--node--driver--w679k-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--w679k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"afea4b03-2e4e-494b-bfd2-bbc94939e0ab", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 13, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7117e6564594421724f0daf6891d0b6778ba3daf3e5dada5249fddb740e1d760", Pod:"csi-node-driver-w679k", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9ee13cbed72", MAC:"ea:fd:a5:83:34:ba", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:13:52.799606 containerd[1432]: 2025-05-08 00:13:52.796 [INFO][4583] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7117e6564594421724f0daf6891d0b6778ba3daf3e5dada5249fddb740e1d760" Namespace="calico-system" Pod="csi-node-driver-w679k" WorkloadEndpoint="localhost-k8s-csi--node--driver--w679k-eth0" May 8 00:13:52.825169 containerd[1432]: time="2025-05-08T00:13:52.824951304Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:13:52.825169 containerd[1432]: time="2025-05-08T00:13:52.825005149Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:13:52.825169 containerd[1432]: time="2025-05-08T00:13:52.825015470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:13:52.825169 containerd[1432]: time="2025-05-08T00:13:52.825088438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:13:52.847468 systemd[1]: Started cri-containerd-7117e6564594421724f0daf6891d0b6778ba3daf3e5dada5249fddb740e1d760.scope - libcontainer container 7117e6564594421724f0daf6891d0b6778ba3daf3e5dada5249fddb740e1d760. 
May 8 00:13:52.869882 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:13:52.889850 systemd-networkd[1374]: cali0f744f1cffe: Link UP May 8 00:13:52.891086 systemd-networkd[1374]: cali0f744f1cffe: Gained carrier May 8 00:13:52.898525 containerd[1432]: time="2025-05-08T00:13:52.898490711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w679k,Uid:afea4b03-2e4e-494b-bfd2-bbc94939e0ab,Namespace:calico-system,Attempt:1,} returns sandbox id \"7117e6564594421724f0daf6891d0b6778ba3daf3e5dada5249fddb740e1d760\"" May 8 00:13:52.909158 containerd[1432]: 2025-05-08 00:13:52.678 [INFO][4573] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--749dfb98f--2zbqs-eth0 calico-kube-controllers-749dfb98f- calico-system cdbc0319-9800-491a-bfc5-f62d3ecc390b 903 0 2025-05-08 00:13:28 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:749dfb98f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-749dfb98f-2zbqs eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali0f744f1cffe [] []}} ContainerID="058c74a2d78e5b4fb4b01c2f4c7c8f8941ac83430e01995e2ec488afdd02c51d" Namespace="calico-system" Pod="calico-kube-controllers-749dfb98f-2zbqs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--749dfb98f--2zbqs-" May 8 00:13:52.909158 containerd[1432]: 2025-05-08 00:13:52.678 [INFO][4573] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="058c74a2d78e5b4fb4b01c2f4c7c8f8941ac83430e01995e2ec488afdd02c51d" Namespace="calico-system" Pod="calico-kube-controllers-749dfb98f-2zbqs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--749dfb98f--2zbqs-eth0" May 8 00:13:52.909158 containerd[1432]: 2025-05-08 00:13:52.727 [INFO][4619] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="058c74a2d78e5b4fb4b01c2f4c7c8f8941ac83430e01995e2ec488afdd02c51d" HandleID="k8s-pod-network.058c74a2d78e5b4fb4b01c2f4c7c8f8941ac83430e01995e2ec488afdd02c51d" Workload="localhost-k8s-calico--kube--controllers--749dfb98f--2zbqs-eth0" May 8 00:13:52.909158 containerd[1432]: 2025-05-08 00:13:52.754 [INFO][4619] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="058c74a2d78e5b4fb4b01c2f4c7c8f8941ac83430e01995e2ec488afdd02c51d" HandleID="k8s-pod-network.058c74a2d78e5b4fb4b01c2f4c7c8f8941ac83430e01995e2ec488afdd02c51d" Workload="localhost-k8s-calico--kube--controllers--749dfb98f--2zbqs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000373e00), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-749dfb98f-2zbqs", "timestamp":"2025-05-08 00:13:52.72711617 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:13:52.909158 containerd[1432]: 2025-05-08 00:13:52.754 [INFO][4619] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:13:52.909158 containerd[1432]: 2025-05-08 00:13:52.778 [INFO][4619] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:13:52.909158 containerd[1432]: 2025-05-08 00:13:52.779 [INFO][4619] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:13:52.909158 containerd[1432]: 2025-05-08 00:13:52.845 [INFO][4619] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.058c74a2d78e5b4fb4b01c2f4c7c8f8941ac83430e01995e2ec488afdd02c51d" host="localhost" May 8 00:13:52.909158 containerd[1432]: 2025-05-08 00:13:52.851 [INFO][4619] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:13:52.909158 containerd[1432]: 2025-05-08 00:13:52.861 [INFO][4619] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:13:52.909158 containerd[1432]: 2025-05-08 00:13:52.864 [INFO][4619] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:13:52.909158 containerd[1432]: 2025-05-08 00:13:52.867 [INFO][4619] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:13:52.909158 containerd[1432]: 2025-05-08 00:13:52.867 [INFO][4619] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.058c74a2d78e5b4fb4b01c2f4c7c8f8941ac83430e01995e2ec488afdd02c51d" host="localhost" May 8 00:13:52.909158 containerd[1432]: 2025-05-08 00:13:52.869 [INFO][4619] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.058c74a2d78e5b4fb4b01c2f4c7c8f8941ac83430e01995e2ec488afdd02c51d May 8 00:13:52.909158 containerd[1432]: 2025-05-08 00:13:52.874 [INFO][4619] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.058c74a2d78e5b4fb4b01c2f4c7c8f8941ac83430e01995e2ec488afdd02c51d" host="localhost" May 8 00:13:52.909158 containerd[1432]: 2025-05-08 00:13:52.882 [INFO][4619] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.058c74a2d78e5b4fb4b01c2f4c7c8f8941ac83430e01995e2ec488afdd02c51d" host="localhost" May 8 00:13:52.909158 containerd[1432]: 2025-05-08 00:13:52.882 [INFO][4619] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.058c74a2d78e5b4fb4b01c2f4c7c8f8941ac83430e01995e2ec488afdd02c51d" host="localhost" May 8 00:13:52.909158 containerd[1432]: 2025-05-08 00:13:52.882 [INFO][4619] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 8 00:13:52.909158 containerd[1432]: 2025-05-08 00:13:52.882 [INFO][4619] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="058c74a2d78e5b4fb4b01c2f4c7c8f8941ac83430e01995e2ec488afdd02c51d" HandleID="k8s-pod-network.058c74a2d78e5b4fb4b01c2f4c7c8f8941ac83430e01995e2ec488afdd02c51d" Workload="localhost-k8s-calico--kube--controllers--749dfb98f--2zbqs-eth0" May 8 00:13:52.909688 containerd[1432]: 2025-05-08 00:13:52.885 [INFO][4573] cni-plugin/k8s.go 386: Populated endpoint ContainerID="058c74a2d78e5b4fb4b01c2f4c7c8f8941ac83430e01995e2ec488afdd02c51d" Namespace="calico-system" Pod="calico-kube-controllers-749dfb98f-2zbqs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--749dfb98f--2zbqs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--749dfb98f--2zbqs-eth0", GenerateName:"calico-kube-controllers-749dfb98f-", Namespace:"calico-system", SelfLink:"", UID:"cdbc0319-9800-491a-bfc5-f62d3ecc390b", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 13, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"749dfb98f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-749dfb98f-2zbqs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0f744f1cffe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:13:52.909688 containerd[1432]: 2025-05-08 00:13:52.885 [INFO][4573] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="058c74a2d78e5b4fb4b01c2f4c7c8f8941ac83430e01995e2ec488afdd02c51d" Namespace="calico-system" Pod="calico-kube-controllers-749dfb98f-2zbqs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--749dfb98f--2zbqs-eth0" May 8 00:13:52.909688 containerd[1432]: 2025-05-08 00:13:52.885 [INFO][4573] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0f744f1cffe ContainerID="058c74a2d78e5b4fb4b01c2f4c7c8f8941ac83430e01995e2ec488afdd02c51d" Namespace="calico-system" Pod="calico-kube-controllers-749dfb98f-2zbqs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--749dfb98f--2zbqs-eth0" May 8 00:13:52.909688 containerd[1432]: 2025-05-08 00:13:52.892 [INFO][4573] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="058c74a2d78e5b4fb4b01c2f4c7c8f8941ac83430e01995e2ec488afdd02c51d" Namespace="calico-system" Pod="calico-kube-controllers-749dfb98f-2zbqs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--749dfb98f--2zbqs-eth0" May 8 00:13:52.909688 containerd[1432]: 2025-05-08 00:13:52.892 [INFO][4573] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="058c74a2d78e5b4fb4b01c2f4c7c8f8941ac83430e01995e2ec488afdd02c51d" Namespace="calico-system" Pod="calico-kube-controllers-749dfb98f-2zbqs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--749dfb98f--2zbqs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--749dfb98f--2zbqs-eth0", GenerateName:"calico-kube-controllers-749dfb98f-", Namespace:"calico-system", SelfLink:"", UID:"cdbc0319-9800-491a-bfc5-f62d3ecc390b", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 13, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"749dfb98f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"058c74a2d78e5b4fb4b01c2f4c7c8f8941ac83430e01995e2ec488afdd02c51d", Pod:"calico-kube-controllers-749dfb98f-2zbqs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0f744f1cffe", MAC:"6e:a9:a6:82:25:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:13:52.909688 containerd[1432]: 2025-05-08 00:13:52.905 [INFO][4573] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="058c74a2d78e5b4fb4b01c2f4c7c8f8941ac83430e01995e2ec488afdd02c51d" Namespace="calico-system" Pod="calico-kube-controllers-749dfb98f-2zbqs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--749dfb98f--2zbqs-eth0" May 8 00:13:52.932248 containerd[1432]: time="2025-05-08T00:13:52.932154507Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:13:52.932248 containerd[1432]: time="2025-05-08T00:13:52.932222714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:13:52.932248 containerd[1432]: time="2025-05-08T00:13:52.932239476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:13:52.932500 containerd[1432]: time="2025-05-08T00:13:52.932363449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:13:52.954622 systemd[1]: Started cri-containerd-058c74a2d78e5b4fb4b01c2f4c7c8f8941ac83430e01995e2ec488afdd02c51d.scope - libcontainer container 058c74a2d78e5b4fb4b01c2f4c7c8f8941ac83430e01995e2ec488afdd02c51d. 
May 8 00:13:52.969960 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:13:52.993331 systemd-networkd[1374]: cali1a10e351aaf: Link UP May 8 00:13:52.994406 systemd-networkd[1374]: cali1a10e351aaf: Gained carrier May 8 00:13:52.998429 systemd-networkd[1374]: cali760735a3beb: Gained IPv6LL May 8 00:13:52.999359 systemd-networkd[1374]: calieb5ee9e837b: Gained IPv6LL May 8 00:13:53.007263 containerd[1432]: time="2025-05-08T00:13:53.007221379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-749dfb98f-2zbqs,Uid:cdbc0319-9800-491a-bfc5-f62d3ecc390b,Namespace:calico-system,Attempt:1,} returns sandbox id \"058c74a2d78e5b4fb4b01c2f4c7c8f8941ac83430e01995e2ec488afdd02c51d\"" May 8 00:13:53.020658 containerd[1432]: 2025-05-08 00:13:52.702 [INFO][4601] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--79c5df75c--scvb7-eth0 calico-apiserver-79c5df75c- calico-apiserver 12b9a996-dd18-4e57-9070-f80361b7270b 902 0 2025-05-08 00:13:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:79c5df75c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-79c5df75c-scvb7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1a10e351aaf [] []}} ContainerID="2b3f38ced3fe762aff0f57c874f2153e2e74962f65478f5c1c89f7647d7636e2" Namespace="calico-apiserver" Pod="calico-apiserver-79c5df75c-scvb7" WorkloadEndpoint="localhost-k8s-calico--apiserver--79c5df75c--scvb7-" May 8 00:13:53.020658 containerd[1432]: 2025-05-08 00:13:52.702 [INFO][4601] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2b3f38ced3fe762aff0f57c874f2153e2e74962f65478f5c1c89f7647d7636e2" Namespace="calico-apiserver" Pod="calico-apiserver-79c5df75c-scvb7" WorkloadEndpoint="localhost-k8s-calico--apiserver--79c5df75c--scvb7-eth0" May 8 00:13:53.020658 containerd[1432]: 2025-05-08 00:13:52.748 [INFO][4635] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2b3f38ced3fe762aff0f57c874f2153e2e74962f65478f5c1c89f7647d7636e2" HandleID="k8s-pod-network.2b3f38ced3fe762aff0f57c874f2153e2e74962f65478f5c1c89f7647d7636e2" Workload="localhost-k8s-calico--apiserver--79c5df75c--scvb7-eth0" May 8 00:13:53.020658 containerd[1432]: 2025-05-08 00:13:52.766 [INFO][4635] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2b3f38ced3fe762aff0f57c874f2153e2e74962f65478f5c1c89f7647d7636e2" HandleID="k8s-pod-network.2b3f38ced3fe762aff0f57c874f2153e2e74962f65478f5c1c89f7647d7636e2" Workload="localhost-k8s-calico--apiserver--79c5df75c--scvb7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000362d50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-79c5df75c-scvb7", "timestamp":"2025-05-08 00:13:52.748911192 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:13:53.020658 containerd[1432]: 2025-05-08 00:13:52.766 [INFO][4635] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
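[Annotation] The "Gained IPv6LL" events above are systemd-networkd observing each cali* veth finish duplicate address detection for its link-local address. For the textbook EUI-64 derivation, that address follows directly from the MAC; modern kernels often use stable-privacy addresses instead, so this is the classic mapping, not necessarily what this host did:

```go
// Classic EUI-64 link-local derivation: flip the U/L bit of the first MAC
// octet and splice ff:fe into the middle.
package main

import (
	"fmt"
	"net"
)

func eui64LinkLocal(mac net.HardwareAddr) net.IP {
	return net.IP{0xfe, 0x80, 0, 0, 0, 0, 0, 0,
		mac[0] ^ 0x02, mac[1], mac[2], 0xff, 0xfe, mac[3], mac[4], mac[5]}
}

func main() {
	// cali0f744f1cffe's MAC, from the endpoint dump above.
	mac, _ := net.ParseMAC("6e:a9:a6:82:25:df")
	fmt.Println(eui64LinkLocal(mac)) // fe80::6ca9:a6ff:fe82:25df
}
```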
May 8 00:13:53.020658 containerd[1432]: 2025-05-08 00:13:52.882 [INFO][4635] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:13:53.020658 containerd[1432]: 2025-05-08 00:13:52.883 [INFO][4635] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:13:53.020658 containerd[1432]: 2025-05-08 00:13:52.946 [INFO][4635] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2b3f38ced3fe762aff0f57c874f2153e2e74962f65478f5c1c89f7647d7636e2" host="localhost" May 8 00:13:53.020658 containerd[1432]: 2025-05-08 00:13:52.956 [INFO][4635] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:13:53.020658 containerd[1432]: 2025-05-08 00:13:52.964 [INFO][4635] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:13:53.020658 containerd[1432]: 2025-05-08 00:13:52.967 [INFO][4635] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:13:53.020658 containerd[1432]: 2025-05-08 00:13:52.971 [INFO][4635] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:13:53.020658 containerd[1432]: 2025-05-08 00:13:52.971 [INFO][4635] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2b3f38ced3fe762aff0f57c874f2153e2e74962f65478f5c1c89f7647d7636e2" host="localhost" May 8 00:13:53.020658 containerd[1432]: 2025-05-08 00:13:52.973 [INFO][4635] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2b3f38ced3fe762aff0f57c874f2153e2e74962f65478f5c1c89f7647d7636e2 May 8 00:13:53.020658 containerd[1432]: 2025-05-08 00:13:52.978 [INFO][4635] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2b3f38ced3fe762aff0f57c874f2153e2e74962f65478f5c1c89f7647d7636e2" host="localhost" May 8 00:13:53.020658 containerd[1432]: 2025-05-08 00:13:52.986 [INFO][4635] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.2b3f38ced3fe762aff0f57c874f2153e2e74962f65478f5c1c89f7647d7636e2" host="localhost" May 8 00:13:53.020658 containerd[1432]: 2025-05-08 00:13:52.986 [INFO][4635] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.2b3f38ced3fe762aff0f57c874f2153e2e74962f65478f5c1c89f7647d7636e2" host="localhost" May 8 00:13:53.020658 containerd[1432]: 2025-05-08 00:13:52.986 [INFO][4635] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
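[Annotation] The timestamps show why the second CNI ADD waited: [4635] logged "About to acquire host-wide IPAM lock" at 00:13:52.766 but only acquired it at 00:13:52.882, the instant [4619] released — per-host assignments are serialized behind one lock. A minimal sketch of that serialization, assuming a plain mutex (Calico's real lock spans the whole auto-assign transaction, including the datastore writes):

```go
// Two concurrent CNI ADDs contending for one host-wide IPAM lock, as the
// [4619]/[4635] interleaving above shows.
package main

import (
	"fmt"
	"sync"
	"time"
)

var ipamLock sync.Mutex

func autoAssign(id string, work time.Duration) {
	fmt.Printf("[%s] about to acquire host-wide IPAM lock\n", id)
	ipamLock.Lock()
	fmt.Printf("[%s] acquired host-wide IPAM lock\n", id)
	time.Sleep(work) // look up affinity, load block, claim IP, write block
	ipamLock.Unlock()
	fmt.Printf("[%s] released host-wide IPAM lock\n", id)
}

func main() {
	var wg sync.WaitGroup
	for _, id := range []string{"4619", "4635"} {
		wg.Add(1)
		go func(id string) { defer wg.Done(); autoAssign(id, 100*time.Millisecond) }(id)
	}
	wg.Wait()
}
```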
May 8 00:13:53.020658 containerd[1432]: 2025-05-08 00:13:52.986 [INFO][4635] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="2b3f38ced3fe762aff0f57c874f2153e2e74962f65478f5c1c89f7647d7636e2" HandleID="k8s-pod-network.2b3f38ced3fe762aff0f57c874f2153e2e74962f65478f5c1c89f7647d7636e2" Workload="localhost-k8s-calico--apiserver--79c5df75c--scvb7-eth0" May 8 00:13:53.021192 containerd[1432]: 2025-05-08 00:13:52.990 [INFO][4601] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2b3f38ced3fe762aff0f57c874f2153e2e74962f65478f5c1c89f7647d7636e2" Namespace="calico-apiserver" Pod="calico-apiserver-79c5df75c-scvb7" WorkloadEndpoint="localhost-k8s-calico--apiserver--79c5df75c--scvb7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79c5df75c--scvb7-eth0", GenerateName:"calico-apiserver-79c5df75c-", Namespace:"calico-apiserver", SelfLink:"", UID:"12b9a996-dd18-4e57-9070-f80361b7270b", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 13, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79c5df75c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-79c5df75c-scvb7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1a10e351aaf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:13:53.021192 containerd[1432]: 2025-05-08 00:13:52.991 [INFO][4601] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="2b3f38ced3fe762aff0f57c874f2153e2e74962f65478f5c1c89f7647d7636e2" Namespace="calico-apiserver" Pod="calico-apiserver-79c5df75c-scvb7" WorkloadEndpoint="localhost-k8s-calico--apiserver--79c5df75c--scvb7-eth0" May 8 00:13:53.021192 containerd[1432]: 2025-05-08 00:13:52.991 [INFO][4601] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1a10e351aaf ContainerID="2b3f38ced3fe762aff0f57c874f2153e2e74962f65478f5c1c89f7647d7636e2" Namespace="calico-apiserver" Pod="calico-apiserver-79c5df75c-scvb7" WorkloadEndpoint="localhost-k8s-calico--apiserver--79c5df75c--scvb7-eth0" May 8 00:13:53.021192 containerd[1432]: 2025-05-08 00:13:52.994 [INFO][4601] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2b3f38ced3fe762aff0f57c874f2153e2e74962f65478f5c1c89f7647d7636e2" Namespace="calico-apiserver" Pod="calico-apiserver-79c5df75c-scvb7" WorkloadEndpoint="localhost-k8s-calico--apiserver--79c5df75c--scvb7-eth0" May 8 00:13:53.021192 containerd[1432]: 2025-05-08 00:13:52.996 [INFO][4601] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2b3f38ced3fe762aff0f57c874f2153e2e74962f65478f5c1c89f7647d7636e2" 
Namespace="calico-apiserver" Pod="calico-apiserver-79c5df75c-scvb7" WorkloadEndpoint="localhost-k8s-calico--apiserver--79c5df75c--scvb7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79c5df75c--scvb7-eth0", GenerateName:"calico-apiserver-79c5df75c-", Namespace:"calico-apiserver", SelfLink:"", UID:"12b9a996-dd18-4e57-9070-f80361b7270b", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 13, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79c5df75c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2b3f38ced3fe762aff0f57c874f2153e2e74962f65478f5c1c89f7647d7636e2", Pod:"calico-apiserver-79c5df75c-scvb7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1a10e351aaf", MAC:"ae:26:d3:0e:5a:6f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:13:53.021192 containerd[1432]: 2025-05-08 00:13:53.018 [INFO][4601] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2b3f38ced3fe762aff0f57c874f2153e2e74962f65478f5c1c89f7647d7636e2" Namespace="calico-apiserver" Pod="calico-apiserver-79c5df75c-scvb7" WorkloadEndpoint="localhost-k8s-calico--apiserver--79c5df75c--scvb7-eth0" May 8 00:13:53.045060 containerd[1432]: time="2025-05-08T00:13:53.044827646Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:13:53.045060 containerd[1432]: time="2025-05-08T00:13:53.044973661Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:13:53.045482 containerd[1432]: time="2025-05-08T00:13:53.045032147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:13:53.045482 containerd[1432]: time="2025-05-08T00:13:53.045137278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:13:53.061439 systemd[1]: Started cri-containerd-2b3f38ced3fe762aff0f57c874f2153e2e74962f65478f5c1c89f7647d7636e2.scope - libcontainer container 2b3f38ced3fe762aff0f57c874f2153e2e74962f65478f5c1c89f7647d7636e2. 
May 8 00:13:53.073900 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:13:53.094450 containerd[1432]: time="2025-05-08T00:13:53.094346218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79c5df75c-scvb7,Uid:12b9a996-dd18-4e57-9070-f80361b7270b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"2b3f38ced3fe762aff0f57c874f2153e2e74962f65478f5c1c89f7647d7636e2\"" May 8 00:13:53.437555 containerd[1432]: time="2025-05-08T00:13:53.437425695Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:53.438659 containerd[1432]: time="2025-05-08T00:13:53.438627219Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" May 8 00:13:53.439920 containerd[1432]: time="2025-05-08T00:13:53.439843864Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:53.442018 containerd[1432]: time="2025-05-08T00:13:53.441966962Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:53.442957 containerd[1432]: time="2025-05-08T00:13:53.442922541Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 1.652149937s" May 8 00:13:53.443003 containerd[1432]: time="2025-05-08T00:13:53.442964505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 8 00:13:53.443903 containerd[1432]: time="2025-05-08T00:13:53.443870158Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 8 00:13:53.444898 containerd[1432]: time="2025-05-08T00:13:53.444861700Z" level=info msg="CreateContainer within sandbox \"0c30d7cdad85384cf5d69f9b8eeaad3db2c1b62aac90de3458d47380a0524bcc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 8 00:13:53.454227 containerd[1432]: time="2025-05-08T00:13:53.454083088Z" level=info msg="CreateContainer within sandbox \"0c30d7cdad85384cf5d69f9b8eeaad3db2c1b62aac90de3458d47380a0524bcc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8b11110888af2b28ce1ec6b1e96695702833f819e8c15a34f89e35c5c4d39acb\"" May 8 00:13:53.454611 containerd[1432]: time="2025-05-08T00:13:53.454585540Z" level=info msg="StartContainer for \"8b11110888af2b28ce1ec6b1e96695702833f819e8c15a34f89e35c5c4d39acb\"" May 8 00:13:53.486523 systemd[1]: Started cri-containerd-8b11110888af2b28ce1ec6b1e96695702833f819e8c15a34f89e35c5c4d39acb.scope - libcontainer container 8b11110888af2b28ce1ec6b1e96695702833f819e8c15a34f89e35c5c4d39acb. 
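[Annotation] The pull/create/start sequence above (image pulled in 1.652s, CreateContainer in the sandbox, StartContainer, systemd scope for the runc shim) is driven by the kubelet over CRI. A direct-client sketch of the same order using containerd's Go client makes the steps visible; error handling is minimal and this is not what the kubelet actually runs:

```go
// Pull, create, and start a container with containerd's Go client, in the
// same order the CRI calls above follow.
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	ref := "ghcr.io/flatcar/calico/apiserver:v3.29.3"
	img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	c, err := client.NewContainer(ctx, "calico-apiserver-demo",
		containerd.WithImage(img),
		containerd.WithNewSnapshot("calico-apiserver-demo-snap", img),
		containerd.WithNewSpec(oci.WithImageConfig(img)),
	)
	if err != nil {
		log.Fatal(err)
	}

	task, err := c.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil { // "StartContainer ... returns successfully"
		log.Fatal(err)
	}
}
```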
May 8 00:13:53.524414 containerd[1432]: time="2025-05-08T00:13:53.524371756Z" level=info msg="StartContainer for \"8b11110888af2b28ce1ec6b1e96695702833f819e8c15a34f89e35c5c4d39acb\" returns successfully" May 8 00:13:53.621029 kubelet[2457]: E0508 00:13:53.620857 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:13:53.627534 kubelet[2457]: I0508 00:13:53.627415 2457 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-79c5df75c-cmszx" podStartSLOduration=23.973956118 podStartE2EDuration="25.627231212s" podCreationTimestamp="2025-05-08 00:13:28 +0000 UTC" firstStartedPulling="2025-05-08 00:13:51.790490293 +0000 UTC m=+35.419319199" lastFinishedPulling="2025-05-08 00:13:53.443765387 +0000 UTC m=+37.072594293" observedRunningTime="2025-05-08 00:13:53.62555364 +0000 UTC m=+37.254382626" watchObservedRunningTime="2025-05-08 00:13:53.627231212 +0000 UTC m=+37.256060118" May 8 00:13:53.830403 systemd-networkd[1374]: cali9ee13cbed72: Gained IPv6LL May 8 00:13:54.345923 systemd-networkd[1374]: cali1a10e351aaf: Gained IPv6LL May 8 00:13:54.551055 containerd[1432]: time="2025-05-08T00:13:54.551001679Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:54.551863 containerd[1432]: time="2025-05-08T00:13:54.551825041Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" May 8 00:13:54.552452 containerd[1432]: time="2025-05-08T00:13:54.552416861Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:54.554728 containerd[1432]: time="2025-05-08T00:13:54.554689409Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:54.555371 containerd[1432]: time="2025-05-08T00:13:54.555338154Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 1.111426551s" May 8 00:13:54.555408 containerd[1432]: time="2025-05-08T00:13:54.555372557Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" May 8 00:13:54.558292 containerd[1432]: time="2025-05-08T00:13:54.557270147Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 8 00:13:54.559508 containerd[1432]: time="2025-05-08T00:13:54.559374438Z" level=info msg="CreateContainer within sandbox \"7117e6564594421724f0daf6891d0b6778ba3daf3e5dada5249fddb740e1d760\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 8 00:13:54.582386 containerd[1432]: time="2025-05-08T00:13:54.582319777Z" level=info msg="CreateContainer within sandbox \"7117e6564594421724f0daf6891d0b6778ba3daf3e5dada5249fddb740e1d760\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id 
\"da4eaa65deae7f64838def2a6234f7070766fd009431f80f6bac5fa4e36c2aa5\"" May 8 00:13:54.582889 containerd[1432]: time="2025-05-08T00:13:54.582858231Z" level=info msg="StartContainer for \"da4eaa65deae7f64838def2a6234f7070766fd009431f80f6bac5fa4e36c2aa5\"" May 8 00:13:54.618414 systemd[1]: Started cri-containerd-da4eaa65deae7f64838def2a6234f7070766fd009431f80f6bac5fa4e36c2aa5.scope - libcontainer container da4eaa65deae7f64838def2a6234f7070766fd009431f80f6bac5fa4e36c2aa5. May 8 00:13:54.626084 kubelet[2457]: I0508 00:13:54.626060 2457 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:13:54.626412 kubelet[2457]: E0508 00:13:54.626389 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:13:54.648414 containerd[1432]: time="2025-05-08T00:13:54.648042083Z" level=info msg="StartContainer for \"da4eaa65deae7f64838def2a6234f7070766fd009431f80f6bac5fa4e36c2aa5\" returns successfully" May 8 00:13:54.854448 systemd-networkd[1374]: cali0f744f1cffe: Gained IPv6LL May 8 00:13:55.631076 kubelet[2457]: E0508 00:13:55.631015 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:13:56.339979 containerd[1432]: time="2025-05-08T00:13:56.339922554Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:56.340814 containerd[1432]: time="2025-05-08T00:13:56.340771595Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116" May 8 00:13:56.345067 containerd[1432]: time="2025-05-08T00:13:56.344458427Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 1.787117273s" May 8 00:13:56.345067 containerd[1432]: time="2025-05-08T00:13:56.344509032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" May 8 00:13:56.345067 containerd[1432]: time="2025-05-08T00:13:56.344864506Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:56.345519 containerd[1432]: time="2025-05-08T00:13:56.345477124Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:56.347321 containerd[1432]: time="2025-05-08T00:13:56.347297378Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 8 00:13:56.355576 containerd[1432]: time="2025-05-08T00:13:56.355532524Z" level=info msg="CreateContainer within sandbox \"058c74a2d78e5b4fb4b01c2f4c7c8f8941ac83430e01995e2ec488afdd02c51d\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 8 00:13:56.367475 containerd[1432]: 
time="2025-05-08T00:13:56.367424859Z" level=info msg="CreateContainer within sandbox \"058c74a2d78e5b4fb4b01c2f4c7c8f8941ac83430e01995e2ec488afdd02c51d\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"9ef6e9009eaaa7ed7204b56060084681a4456f8b7e5d7b5d492cd6e79fcdc8d7\"" May 8 00:13:56.367967 containerd[1432]: time="2025-05-08T00:13:56.367932468Z" level=info msg="StartContainer for \"9ef6e9009eaaa7ed7204b56060084681a4456f8b7e5d7b5d492cd6e79fcdc8d7\"" May 8 00:13:56.400471 systemd[1]: Started cri-containerd-9ef6e9009eaaa7ed7204b56060084681a4456f8b7e5d7b5d492cd6e79fcdc8d7.scope - libcontainer container 9ef6e9009eaaa7ed7204b56060084681a4456f8b7e5d7b5d492cd6e79fcdc8d7. May 8 00:13:56.437244 containerd[1432]: time="2025-05-08T00:13:56.437196999Z" level=info msg="StartContainer for \"9ef6e9009eaaa7ed7204b56060084681a4456f8b7e5d7b5d492cd6e79fcdc8d7\" returns successfully" May 8 00:13:56.656162 kubelet[2457]: I0508 00:13:56.655224 2457 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-749dfb98f-2zbqs" podStartSLOduration=25.316975232 podStartE2EDuration="28.655206047s" podCreationTimestamp="2025-05-08 00:13:28 +0000 UTC" firstStartedPulling="2025-05-08 00:13:53.008343614 +0000 UTC m=+36.637172560" lastFinishedPulling="2025-05-08 00:13:56.346574429 +0000 UTC m=+39.975403375" observedRunningTime="2025-05-08 00:13:56.654317442 +0000 UTC m=+40.283146388" watchObservedRunningTime="2025-05-08 00:13:56.655206047 +0000 UTC m=+40.284034993" May 8 00:13:56.695405 containerd[1432]: time="2025-05-08T00:13:56.694689455Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:56.696636 containerd[1432]: time="2025-05-08T00:13:56.696600878Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 8 00:13:56.698308 containerd[1432]: time="2025-05-08T00:13:56.698254996Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 350.927855ms" May 8 00:13:56.698427 containerd[1432]: time="2025-05-08T00:13:56.698410250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 8 00:13:56.699969 containerd[1432]: time="2025-05-08T00:13:56.699946397Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 8 00:13:56.700594 containerd[1432]: time="2025-05-08T00:13:56.700561136Z" level=info msg="CreateContainer within sandbox \"2b3f38ced3fe762aff0f57c874f2153e2e74962f65478f5c1c89f7647d7636e2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 8 00:13:56.722167 containerd[1432]: time="2025-05-08T00:13:56.722113073Z" level=info msg="CreateContainer within sandbox \"2b3f38ced3fe762aff0f57c874f2153e2e74962f65478f5c1c89f7647d7636e2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a9faa844ea95e3b6558fbef0ce62d2fb308ae8b58cd23563ddc6b8a5cf51eb64\"" May 8 00:13:56.722915 containerd[1432]: time="2025-05-08T00:13:56.722805819Z" level=info msg="StartContainer for 
\"a9faa844ea95e3b6558fbef0ce62d2fb308ae8b58cd23563ddc6b8a5cf51eb64\"" May 8 00:13:56.743883 systemd[1]: Started cri-containerd-a9faa844ea95e3b6558fbef0ce62d2fb308ae8b58cd23563ddc6b8a5cf51eb64.scope - libcontainer container a9faa844ea95e3b6558fbef0ce62d2fb308ae8b58cd23563ddc6b8a5cf51eb64. May 8 00:13:56.787142 containerd[1432]: time="2025-05-08T00:13:56.787089674Z" level=info msg="StartContainer for \"a9faa844ea95e3b6558fbef0ce62d2fb308ae8b58cd23563ddc6b8a5cf51eb64\" returns successfully" May 8 00:13:57.146809 systemd[1]: Started sshd@9-10.0.0.14:22-10.0.0.1:57942.service - OpenSSH per-connection server daemon (10.0.0.1:57942). May 8 00:13:57.203154 sshd[5012]: Accepted publickey for core from 10.0.0.1 port 57942 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:13:57.204236 sshd[5012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:13:57.208644 systemd-logind[1416]: New session 10 of user core. May 8 00:13:57.218485 systemd[1]: Started session-10.scope - Session 10 of User core. May 8 00:13:57.431194 sshd[5012]: pam_unix(sshd:session): session closed for user core May 8 00:13:57.438028 systemd[1]: sshd@9-10.0.0.14:22-10.0.0.1:57942.service: Deactivated successfully. May 8 00:13:57.441734 systemd[1]: session-10.scope: Deactivated successfully. May 8 00:13:57.445205 systemd-logind[1416]: Session 10 logged out. Waiting for processes to exit. May 8 00:13:57.450546 systemd[1]: Started sshd@10-10.0.0.14:22-10.0.0.1:57956.service - OpenSSH per-connection server daemon (10.0.0.1:57956). May 8 00:13:57.451867 systemd-logind[1416]: Removed session 10. May 8 00:13:57.506019 sshd[5029]: Accepted publickey for core from 10.0.0.1 port 57956 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:13:57.507620 sshd[5029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:13:57.512222 systemd-logind[1416]: New session 11 of user core. May 8 00:13:57.521471 systemd[1]: Started session-11.scope - Session 11 of User core. 
May 8 00:13:57.977505 containerd[1432]: time="2025-05-08T00:13:57.977457382Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:57.979042 containerd[1432]: time="2025-05-08T00:13:57.978997086Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" May 8 00:13:57.979159 sshd[5029]: pam_unix(sshd:session): session closed for user core May 8 00:13:57.982152 containerd[1432]: time="2025-05-08T00:13:57.981879715Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:57.988369 containerd[1432]: time="2025-05-08T00:13:57.988234988Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:57.992728 containerd[1432]: time="2025-05-08T00:13:57.991258950Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 1.291189301s" May 8 00:13:57.992728 containerd[1432]: time="2025-05-08T00:13:57.991320236Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" May 8 00:13:57.994879 kubelet[2457]: I0508 00:13:57.993960 2457 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:13:57.997036 kubelet[2457]: E0508 00:13:57.995227 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:13:58.001760 containerd[1432]: time="2025-05-08T00:13:57.999779985Z" level=info msg="CreateContainer within sandbox \"7117e6564594421724f0daf6891d0b6778ba3daf3e5dada5249fddb740e1d760\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 8 00:13:58.011607 systemd[1]: Started sshd@11-10.0.0.14:22-10.0.0.1:57962.service - OpenSSH per-connection server daemon (10.0.0.1:57962). May 8 00:13:58.012073 systemd[1]: sshd@10-10.0.0.14:22-10.0.0.1:57956.service: Deactivated successfully. May 8 00:13:58.013576 systemd[1]: session-11.scope: Deactivated successfully. May 8 00:13:58.020256 systemd-logind[1416]: Session 11 logged out. Waiting for processes to exit. May 8 00:13:58.021408 systemd-logind[1416]: Removed session 11. 
May 8 00:13:58.047019 containerd[1432]: time="2025-05-08T00:13:58.046828002Z" level=info msg="CreateContainer within sandbox \"7117e6564594421724f0daf6891d0b6778ba3daf3e5dada5249fddb740e1d760\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ac96544e5662103b5f4a66a54173e1f94d45475b81717aafb158180bf72e39b3\"" May 8 00:13:58.048133 containerd[1432]: time="2025-05-08T00:13:58.048102558Z" level=info msg="StartContainer for \"ac96544e5662103b5f4a66a54173e1f94d45475b81717aafb158180bf72e39b3\"" May 8 00:13:58.061174 sshd[5046]: Accepted publickey for core from 10.0.0.1 port 57962 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:13:58.063241 sshd[5046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:13:58.071401 systemd-logind[1416]: New session 12 of user core. May 8 00:13:58.080522 systemd[1]: Started session-12.scope - Session 12 of User core. May 8 00:13:58.084685 systemd[1]: Started cri-containerd-ac96544e5662103b5f4a66a54173e1f94d45475b81717aafb158180bf72e39b3.scope - libcontainer container ac96544e5662103b5f4a66a54173e1f94d45475b81717aafb158180bf72e39b3. May 8 00:13:58.127554 containerd[1432]: time="2025-05-08T00:13:58.127510245Z" level=info msg="StartContainer for \"ac96544e5662103b5f4a66a54173e1f94d45475b81717aafb158180bf72e39b3\" returns successfully" May 8 00:13:58.171998 kubelet[2457]: I0508 00:13:58.171716 2457 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-79c5df75c-scvb7" podStartSLOduration=26.568294078 podStartE2EDuration="30.171696597s" podCreationTimestamp="2025-05-08 00:13:28 +0000 UTC" firstStartedPulling="2025-05-08 00:13:53.095787446 +0000 UTC m=+36.724616392" lastFinishedPulling="2025-05-08 00:13:56.699189965 +0000 UTC m=+40.328018911" observedRunningTime="2025-05-08 00:13:57.656712141 +0000 UTC m=+41.285541087" watchObservedRunningTime="2025-05-08 00:13:58.171696597 +0000 UTC m=+41.800525543" May 8 00:13:58.300837 sshd[5046]: pam_unix(sshd:session): session closed for user core May 8 00:13:58.307007 systemd[1]: sshd@11-10.0.0.14:22-10.0.0.1:57962.service: Deactivated successfully. May 8 00:13:58.315416 systemd[1]: session-12.scope: Deactivated successfully. May 8 00:13:58.317625 systemd-logind[1416]: Session 12 logged out. Waiting for processes to exit. May 8 00:13:58.320385 systemd-logind[1416]: Removed session 12. 
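[Annotation] The pod_startup_latency_tracker lines are pure arithmetic over the timestamps they print: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling). Reproducing the calico-apiserver-79c5df75c-scvb7 numbers from the entry above:

```go
// 30.171696597s E2E minus the 3.603402519s pull window gives the logged
// podStartSLOduration of 26.568294078s.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-05-08 00:13:28 +0000 UTC")
	firstPull := mustParse("2025-05-08 00:13:53.095787446 +0000 UTC")
	lastPull := mustParse("2025-05-08 00:13:56.699189965 +0000 UTC")
	running := mustParse("2025-05-08 00:13:58.171696597 +0000 UTC")

	e2e := running.Sub(created)
	slo := e2e - lastPull.Sub(firstPull)
	fmt.Println(e2e, slo) // 30.171696597s 26.568294078s, matching the log
}
```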
May 8 00:13:58.559335 kubelet[2457]: I0508 00:13:58.556936 2457 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 8 00:13:58.559335 kubelet[2457]: I0508 00:13:58.558245 2457 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 8 00:13:58.649583 kubelet[2457]: I0508 00:13:58.648998 2457 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:13:58.650387 kubelet[2457]: E0508 00:13:58.650324 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:13:58.660513 kubelet[2457]: I0508 00:13:58.660291 2457 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-w679k" podStartSLOduration=25.56664616 podStartE2EDuration="30.660259545s" podCreationTimestamp="2025-05-08 00:13:28 +0000 UTC" firstStartedPulling="2025-05-08 00:13:52.899480056 +0000 UTC m=+36.528309002" lastFinishedPulling="2025-05-08 00:13:57.993093441 +0000 UTC m=+41.621922387" observedRunningTime="2025-05-08 00:13:58.659889831 +0000 UTC m=+42.288718737" watchObservedRunningTime="2025-05-08 00:13:58.660259545 +0000 UTC m=+42.289088491" May 8 00:14:01.780455 kubelet[2457]: I0508 00:14:01.780316 2457 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:14:03.315019 systemd[1]: Started sshd@12-10.0.0.14:22-10.0.0.1:55142.service - OpenSSH per-connection server daemon (10.0.0.1:55142). May 8 00:14:03.352906 sshd[5163]: Accepted publickey for core from 10.0.0.1 port 55142 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:14:03.354119 sshd[5163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:14:03.358986 systemd-logind[1416]: New session 13 of user core. May 8 00:14:03.368453 systemd[1]: Started session-13.scope - Session 13 of User core. May 8 00:14:03.526741 sshd[5163]: pam_unix(sshd:session): session closed for user core May 8 00:14:03.536762 systemd[1]: sshd@12-10.0.0.14:22-10.0.0.1:55142.service: Deactivated successfully. May 8 00:14:03.538266 systemd[1]: session-13.scope: Deactivated successfully. May 8 00:14:03.540721 systemd-logind[1416]: Session 13 logged out. Waiting for processes to exit. May 8 00:14:03.546526 systemd[1]: Started sshd@13-10.0.0.14:22-10.0.0.1:55144.service - OpenSSH per-connection server daemon (10.0.0.1:55144). May 8 00:14:03.548453 systemd-logind[1416]: Removed session 13. May 8 00:14:03.577687 sshd[5177]: Accepted publickey for core from 10.0.0.1 port 55144 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:14:03.578972 sshd[5177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:14:03.582992 systemd-logind[1416]: New session 14 of user core. May 8 00:14:03.591421 systemd[1]: Started session-14.scope - Session 14 of User core. May 8 00:14:03.966719 sshd[5177]: pam_unix(sshd:session): session closed for user core May 8 00:14:03.978761 systemd[1]: sshd@13-10.0.0.14:22-10.0.0.1:55144.service: Deactivated successfully. May 8 00:14:03.980252 systemd[1]: session-14.scope: Deactivated successfully. May 8 00:14:03.982314 systemd-logind[1416]: Session 14 logged out. Waiting for processes to exit. 
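[Annotation] The csi_plugin.go lines above are the payoff of the node-driver-registrar container started just before: the sidecar serves the kubelet's plugin-registration gRPC API on a socket the kubelet watches, answering GetInfo with the driver name (csi.tigera.io), its CSI socket path, and the supported versions ("1.0.0", as logged). A minimal sketch of that answer — the proto import path and message shapes are recalled from k8s.io/kubelet and should be treated as assumptions:

```go
// Minimal plugin-registration responder, shaped like what node-driver-registrar
// does for csi.tigera.io above.
package main

import (
	"context"
	"log"
	"net"

	"google.golang.org/grpc"
	registerapi "k8s.io/kubelet/pkg/apis/pluginregistration/v1"
)

type registrar struct{}

func (registrar) GetInfo(ctx context.Context, _ *registerapi.InfoRequest) (*registerapi.PluginInfo, error) {
	return &registerapi.PluginInfo{
		Type:              registerapi.CSIPlugin,
		Name:              "csi.tigera.io",
		Endpoint:          "/var/lib/kubelet/plugins/csi.tigera.io/csi.sock",
		SupportedVersions: []string{"1.0.0"}, // kubelet logs "versions: 1.0.0"
	}, nil
}

func (registrar) NotifyRegistrationStatus(ctx context.Context, s *registerapi.RegistrationStatus) (*registerapi.RegistrationStatusResponse, error) {
	log.Printf("kubelet registration result: %+v", s)
	return &registerapi.RegistrationStatusResponse{}, nil
}

func main() {
	// The kubelet watches this directory for registration sockets.
	l, err := net.Listen("unix", "/var/lib/kubelet/plugins_registry/csi.tigera.io-reg.sock")
	if err != nil {
		log.Fatal(err)
	}
	srv := grpc.NewServer()
	registerapi.RegisterRegistrationServer(srv, registrar{})
	log.Fatal(srv.Serve(l))
}
```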
May 8 00:14:03.994536 systemd[1]: Started sshd@14-10.0.0.14:22-10.0.0.1:55148.service - OpenSSH per-connection server daemon (10.0.0.1:55148). May 8 00:14:03.995803 systemd-logind[1416]: Removed session 14. May 8 00:14:04.028968 sshd[5190]: Accepted publickey for core from 10.0.0.1 port 55148 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:14:04.030120 sshd[5190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:14:04.034046 systemd-logind[1416]: New session 15 of user core. May 8 00:14:04.041449 systemd[1]: Started session-15.scope - Session 15 of User core. May 8 00:14:05.449186 sshd[5190]: pam_unix(sshd:session): session closed for user core May 8 00:14:05.460319 systemd[1]: sshd@14-10.0.0.14:22-10.0.0.1:55148.service: Deactivated successfully. May 8 00:14:05.463679 systemd[1]: session-15.scope: Deactivated successfully. May 8 00:14:05.465022 systemd-logind[1416]: Session 15 logged out. Waiting for processes to exit. May 8 00:14:05.472924 systemd[1]: Started sshd@15-10.0.0.14:22-10.0.0.1:55156.service - OpenSSH per-connection server daemon (10.0.0.1:55156). May 8 00:14:05.476075 systemd-logind[1416]: Removed session 15. May 8 00:14:05.506071 sshd[5209]: Accepted publickey for core from 10.0.0.1 port 55156 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:14:05.506938 sshd[5209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:14:05.510330 systemd-logind[1416]: New session 16 of user core. May 8 00:14:05.524508 systemd[1]: Started session-16.scope - Session 16 of User core. May 8 00:14:05.865075 sshd[5209]: pam_unix(sshd:session): session closed for user core May 8 00:14:05.874264 systemd[1]: sshd@15-10.0.0.14:22-10.0.0.1:55156.service: Deactivated successfully. May 8 00:14:05.876575 systemd[1]: session-16.scope: Deactivated successfully. May 8 00:14:05.878924 systemd-logind[1416]: Session 16 logged out. Waiting for processes to exit. May 8 00:14:05.890570 systemd[1]: Started sshd@16-10.0.0.14:22-10.0.0.1:55160.service - OpenSSH per-connection server daemon (10.0.0.1:55160). May 8 00:14:05.891633 systemd-logind[1416]: Removed session 16. May 8 00:14:05.921482 sshd[5223]: Accepted publickey for core from 10.0.0.1 port 55160 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:14:05.922736 sshd[5223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:14:05.926375 systemd-logind[1416]: New session 17 of user core. May 8 00:14:05.933439 systemd[1]: Started session-17.scope - Session 17 of User core. May 8 00:14:06.055388 sshd[5223]: pam_unix(sshd:session): session closed for user core May 8 00:14:06.058853 systemd[1]: sshd@16-10.0.0.14:22-10.0.0.1:55160.service: Deactivated successfully. May 8 00:14:06.061870 systemd[1]: session-17.scope: Deactivated successfully. May 8 00:14:06.062472 systemd-logind[1416]: Session 17 logged out. Waiting for processes to exit. May 8 00:14:06.063217 systemd-logind[1416]: Removed session 17. May 8 00:14:11.065855 systemd[1]: Started sshd@17-10.0.0.14:22-10.0.0.1:55164.service - OpenSSH per-connection server daemon (10.0.0.1:55164). May 8 00:14:11.104570 sshd[5243]: Accepted publickey for core from 10.0.0.1 port 55164 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:14:11.105718 sshd[5243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:14:11.109022 systemd-logind[1416]: New session 18 of user core. 
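[Annotation] Each sshd@... unit above is a socket-activated, per-connection service; the instance name packs a connection counter plus the local and remote endpoints, e.g. sshd@14-10.0.0.14:22-10.0.0.1:55148.service. A small parser for that naming scheme, useful when correlating these units with the session open/close lines:

```go
package main

import (
	"fmt"
	"regexp"
)

// unitRe matches systemd's per-connection sshd instance names as seen above.
var unitRe = regexp.MustCompile(`^sshd@(\d+)-([\d.]+:\d+)-([\d.]+:\d+)\.service$`)

func main() {
	m := unitRe.FindStringSubmatch("sshd@14-10.0.0.14:22-10.0.0.1:55148.service")
	fmt.Printf("instance=%s local=%s peer=%s\n", m[1], m[2], m[3])
}
```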
May 8 00:14:11.120466 systemd[1]: Started session-18.scope - Session 18 of User core. May 8 00:14:11.290897 sshd[5243]: pam_unix(sshd:session): session closed for user core May 8 00:14:11.294343 systemd[1]: sshd@17-10.0.0.14:22-10.0.0.1:55164.service: Deactivated successfully. May 8 00:14:11.296762 systemd[1]: session-18.scope: Deactivated successfully. May 8 00:14:11.297343 systemd-logind[1416]: Session 18 logged out. Waiting for processes to exit. May 8 00:14:11.298145 systemd-logind[1416]: Removed session 18. May 8 00:14:16.302086 systemd[1]: Started sshd@18-10.0.0.14:22-10.0.0.1:52438.service - OpenSSH per-connection server daemon (10.0.0.1:52438). May 8 00:14:16.344310 sshd[5262]: Accepted publickey for core from 10.0.0.1 port 52438 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:14:16.345080 sshd[5262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:14:16.348962 systemd-logind[1416]: New session 19 of user core. May 8 00:14:16.358490 systemd[1]: Started session-19.scope - Session 19 of User core. May 8 00:14:16.453937 containerd[1432]: time="2025-05-08T00:14:16.453825515Z" level=info msg="StopPodSandbox for \"2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40\"" May 8 00:14:16.585437 containerd[1432]: 2025-05-08 00:14:16.527 [WARNING][5290] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--w679k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"afea4b03-2e4e-494b-bfd2-bbc94939e0ab", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 13, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7117e6564594421724f0daf6891d0b6778ba3daf3e5dada5249fddb740e1d760", Pod:"csi-node-driver-w679k", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9ee13cbed72", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:14:16.585437 containerd[1432]: 2025-05-08 00:14:16.529 [INFO][5290] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40" May 8 00:14:16.585437 containerd[1432]: 2025-05-08 00:14:16.529 [INFO][5290] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40" iface="eth0" netns="" May 8 00:14:16.585437 containerd[1432]: 2025-05-08 00:14:16.529 [INFO][5290] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40" May 8 00:14:16.585437 containerd[1432]: 2025-05-08 00:14:16.529 [INFO][5290] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40" May 8 00:14:16.585437 containerd[1432]: 2025-05-08 00:14:16.566 [INFO][5299] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40" HandleID="k8s-pod-network.2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40" Workload="localhost-k8s-csi--node--driver--w679k-eth0" May 8 00:14:16.585437 containerd[1432]: 2025-05-08 00:14:16.566 [INFO][5299] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:14:16.585437 containerd[1432]: 2025-05-08 00:14:16.566 [INFO][5299] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:14:16.585437 containerd[1432]: 2025-05-08 00:14:16.577 [WARNING][5299] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40" HandleID="k8s-pod-network.2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40" Workload="localhost-k8s-csi--node--driver--w679k-eth0" May 8 00:14:16.585437 containerd[1432]: 2025-05-08 00:14:16.577 [INFO][5299] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40" HandleID="k8s-pod-network.2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40" Workload="localhost-k8s-csi--node--driver--w679k-eth0" May 8 00:14:16.585437 containerd[1432]: 2025-05-08 00:14:16.579 [INFO][5299] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:14:16.585437 containerd[1432]: 2025-05-08 00:14:16.582 [INFO][5290] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40" May 8 00:14:16.585437 containerd[1432]: time="2025-05-08T00:14:16.585402818Z" level=info msg="TearDown network for sandbox \"2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40\" successfully" May 8 00:14:16.585437 containerd[1432]: time="2025-05-08T00:14:16.585437020Z" level=info msg="StopPodSandbox for \"2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40\" returns successfully" May 8 00:14:16.586167 containerd[1432]: time="2025-05-08T00:14:16.586127789Z" level=info msg="RemovePodSandbox for \"2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40\"" May 8 00:14:16.592332 containerd[1432]: time="2025-05-08T00:14:16.592286142Z" level=info msg="Forcibly stopping sandbox \"2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40\"" May 8 00:14:16.643803 sshd[5262]: pam_unix(sshd:session): session closed for user core May 8 00:14:16.648449 systemd-logind[1416]: Session 19 logged out. Waiting for processes to exit. May 8 00:14:16.649103 systemd[1]: sshd@18-10.0.0.14:22-10.0.0.1:52438.service: Deactivated successfully. May 8 00:14:16.656374 systemd[1]: session-19.scope: Deactivated successfully. May 8 00:14:16.659101 systemd-logind[1416]: Removed session 19. 
May 8 00:14:16.694012 containerd[1432]: 2025-05-08 00:14:16.656 [WARNING][5321] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--w679k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"afea4b03-2e4e-494b-bfd2-bbc94939e0ab", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 13, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7117e6564594421724f0daf6891d0b6778ba3daf3e5dada5249fddb740e1d760", Pod:"csi-node-driver-w679k", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9ee13cbed72", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:14:16.694012 containerd[1432]: 2025-05-08 00:14:16.657 [INFO][5321] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40" May 8 00:14:16.694012 containerd[1432]: 2025-05-08 00:14:16.657 [INFO][5321] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40" iface="eth0" netns="" May 8 00:14:16.694012 containerd[1432]: 2025-05-08 00:14:16.657 [INFO][5321] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40" May 8 00:14:16.694012 containerd[1432]: 2025-05-08 00:14:16.657 [INFO][5321] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40" May 8 00:14:16.694012 containerd[1432]: 2025-05-08 00:14:16.681 [INFO][5332] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40" HandleID="k8s-pod-network.2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40" Workload="localhost-k8s-csi--node--driver--w679k-eth0" May 8 00:14:16.694012 containerd[1432]: 2025-05-08 00:14:16.681 [INFO][5332] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:14:16.694012 containerd[1432]: 2025-05-08 00:14:16.681 [INFO][5332] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:14:16.694012 containerd[1432]: 2025-05-08 00:14:16.689 [WARNING][5332] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40" HandleID="k8s-pod-network.2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40" Workload="localhost-k8s-csi--node--driver--w679k-eth0" May 8 00:14:16.694012 containerd[1432]: 2025-05-08 00:14:16.689 [INFO][5332] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40" HandleID="k8s-pod-network.2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40" Workload="localhost-k8s-csi--node--driver--w679k-eth0" May 8 00:14:16.694012 containerd[1432]: 2025-05-08 00:14:16.691 [INFO][5332] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:14:16.694012 containerd[1432]: 2025-05-08 00:14:16.692 [INFO][5321] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40" May 8 00:14:16.694448 containerd[1432]: time="2025-05-08T00:14:16.694044106Z" level=info msg="TearDown network for sandbox \"2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40\" successfully" May 8 00:14:16.705181 containerd[1432]: time="2025-05-08T00:14:16.705109845Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:14:16.705286 containerd[1432]: time="2025-05-08T00:14:16.705208092Z" level=info msg="RemovePodSandbox \"2fd5d68782f7f92a4e66e52e6565ccc79a2388e74e0a5c390818ea35370a8b40\" returns successfully" May 8 00:14:16.705932 containerd[1432]: time="2025-05-08T00:14:16.705890700Z" level=info msg="StopPodSandbox for \"89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da\"" May 8 00:14:16.786735 containerd[1432]: 2025-05-08 00:14:16.746 [WARNING][5354] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--749dfb98f--2zbqs-eth0", GenerateName:"calico-kube-controllers-749dfb98f-", Namespace:"calico-system", SelfLink:"", UID:"cdbc0319-9800-491a-bfc5-f62d3ecc390b", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 13, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"749dfb98f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"058c74a2d78e5b4fb4b01c2f4c7c8f8941ac83430e01995e2ec488afdd02c51d", Pod:"calico-kube-controllers-749dfb98f-2zbqs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0f744f1cffe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:14:16.786735 containerd[1432]: 2025-05-08 00:14:16.746 [INFO][5354] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da" May 8 00:14:16.786735 containerd[1432]: 2025-05-08 00:14:16.747 [INFO][5354] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da" iface="eth0" netns="" May 8 00:14:16.786735 containerd[1432]: 2025-05-08 00:14:16.747 [INFO][5354] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da" May 8 00:14:16.786735 containerd[1432]: 2025-05-08 00:14:16.747 [INFO][5354] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da" May 8 00:14:16.786735 containerd[1432]: 2025-05-08 00:14:16.774 [INFO][5362] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da" HandleID="k8s-pod-network.89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da" Workload="localhost-k8s-calico--kube--controllers--749dfb98f--2zbqs-eth0" May 8 00:14:16.786735 containerd[1432]: 2025-05-08 00:14:16.774 [INFO][5362] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:14:16.786735 containerd[1432]: 2025-05-08 00:14:16.774 [INFO][5362] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:14:16.786735 containerd[1432]: 2025-05-08 00:14:16.782 [WARNING][5362] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da" HandleID="k8s-pod-network.89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da" Workload="localhost-k8s-calico--kube--controllers--749dfb98f--2zbqs-eth0" May 8 00:14:16.786735 containerd[1432]: 2025-05-08 00:14:16.782 [INFO][5362] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da" HandleID="k8s-pod-network.89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da" Workload="localhost-k8s-calico--kube--controllers--749dfb98f--2zbqs-eth0" May 8 00:14:16.786735 containerd[1432]: 2025-05-08 00:14:16.783 [INFO][5362] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:14:16.786735 containerd[1432]: 2025-05-08 00:14:16.785 [INFO][5354] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da" May 8 00:14:16.786735 containerd[1432]: time="2025-05-08T00:14:16.786614943Z" level=info msg="TearDown network for sandbox \"89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da\" successfully" May 8 00:14:16.786735 containerd[1432]: time="2025-05-08T00:14:16.786642265Z" level=info msg="StopPodSandbox for \"89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da\" returns successfully" May 8 00:14:16.787192 containerd[1432]: time="2025-05-08T00:14:16.787065735Z" level=info msg="RemovePodSandbox for \"89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da\"" May 8 00:14:16.787192 containerd[1432]: time="2025-05-08T00:14:16.787100017Z" level=info msg="Forcibly stopping sandbox \"89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da\"" May 8 00:14:16.859444 containerd[1432]: 2025-05-08 00:14:16.824 [WARNING][5384] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--749dfb98f--2zbqs-eth0", GenerateName:"calico-kube-controllers-749dfb98f-", Namespace:"calico-system", SelfLink:"", UID:"cdbc0319-9800-491a-bfc5-f62d3ecc390b", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 13, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"749dfb98f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"058c74a2d78e5b4fb4b01c2f4c7c8f8941ac83430e01995e2ec488afdd02c51d", Pod:"calico-kube-controllers-749dfb98f-2zbqs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0f744f1cffe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:14:16.859444 containerd[1432]: 2025-05-08 00:14:16.825 [INFO][5384] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da" May 8 00:14:16.859444 containerd[1432]: 2025-05-08 00:14:16.825 [INFO][5384] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da" iface="eth0" netns="" May 8 00:14:16.859444 containerd[1432]: 2025-05-08 00:14:16.825 [INFO][5384] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da" May 8 00:14:16.859444 containerd[1432]: 2025-05-08 00:14:16.825 [INFO][5384] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da" May 8 00:14:16.859444 containerd[1432]: 2025-05-08 00:14:16.844 [INFO][5393] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da" HandleID="k8s-pod-network.89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da" Workload="localhost-k8s-calico--kube--controllers--749dfb98f--2zbqs-eth0" May 8 00:14:16.859444 containerd[1432]: 2025-05-08 00:14:16.844 [INFO][5393] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:14:16.859444 containerd[1432]: 2025-05-08 00:14:16.844 [INFO][5393] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:14:16.859444 containerd[1432]: 2025-05-08 00:14:16.853 [WARNING][5393] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da" HandleID="k8s-pod-network.89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da" Workload="localhost-k8s-calico--kube--controllers--749dfb98f--2zbqs-eth0" May 8 00:14:16.859444 containerd[1432]: 2025-05-08 00:14:16.853 [INFO][5393] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da" HandleID="k8s-pod-network.89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da" Workload="localhost-k8s-calico--kube--controllers--749dfb98f--2zbqs-eth0" May 8 00:14:16.859444 containerd[1432]: 2025-05-08 00:14:16.855 [INFO][5393] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:14:16.859444 containerd[1432]: 2025-05-08 00:14:16.857 [INFO][5384] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da" May 8 00:14:16.859444 containerd[1432]: time="2025-05-08T00:14:16.859379385Z" level=info msg="TearDown network for sandbox \"89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da\" successfully" May 8 00:14:16.862862 containerd[1432]: time="2025-05-08T00:14:16.862818868Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:14:16.862930 containerd[1432]: time="2025-05-08T00:14:16.862881912Z" level=info msg="RemovePodSandbox \"89130d1e9d3bd8954fb24112f4de41c0b467ed5ea7a2317ec668e1fdbc9d40da\" returns successfully" May 8 00:14:16.863345 containerd[1432]: time="2025-05-08T00:14:16.863322743Z" level=info msg="StopPodSandbox for \"4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8\"" May 8 00:14:16.933531 containerd[1432]: 2025-05-08 00:14:16.897 [WARNING][5416] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79c5df75c--scvb7-eth0", GenerateName:"calico-apiserver-79c5df75c-", Namespace:"calico-apiserver", SelfLink:"", UID:"12b9a996-dd18-4e57-9070-f80361b7270b", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 13, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79c5df75c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2b3f38ced3fe762aff0f57c874f2153e2e74962f65478f5c1c89f7647d7636e2", Pod:"calico-apiserver-79c5df75c-scvb7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1a10e351aaf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:14:16.933531 containerd[1432]: 2025-05-08 00:14:16.897 [INFO][5416] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8" May 8 00:14:16.933531 containerd[1432]: 2025-05-08 00:14:16.897 [INFO][5416] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8" iface="eth0" netns="" May 8 00:14:16.933531 containerd[1432]: 2025-05-08 00:14:16.897 [INFO][5416] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8" May 8 00:14:16.933531 containerd[1432]: 2025-05-08 00:14:16.897 [INFO][5416] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8" May 8 00:14:16.933531 containerd[1432]: 2025-05-08 00:14:16.921 [INFO][5424] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8" HandleID="k8s-pod-network.4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8" Workload="localhost-k8s-calico--apiserver--79c5df75c--scvb7-eth0" May 8 00:14:16.933531 containerd[1432]: 2025-05-08 00:14:16.921 [INFO][5424] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:14:16.933531 containerd[1432]: 2025-05-08 00:14:16.921 [INFO][5424] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:14:16.933531 containerd[1432]: 2025-05-08 00:14:16.929 [WARNING][5424] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8" HandleID="k8s-pod-network.4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8" Workload="localhost-k8s-calico--apiserver--79c5df75c--scvb7-eth0" May 8 00:14:16.933531 containerd[1432]: 2025-05-08 00:14:16.929 [INFO][5424] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8" HandleID="k8s-pod-network.4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8" Workload="localhost-k8s-calico--apiserver--79c5df75c--scvb7-eth0" May 8 00:14:16.933531 containerd[1432]: 2025-05-08 00:14:16.930 [INFO][5424] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:14:16.933531 containerd[1432]: 2025-05-08 00:14:16.932 [INFO][5416] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8" May 8 00:14:16.934169 containerd[1432]: time="2025-05-08T00:14:16.933564728Z" level=info msg="TearDown network for sandbox \"4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8\" successfully" May 8 00:14:16.934169 containerd[1432]: time="2025-05-08T00:14:16.933589370Z" level=info msg="StopPodSandbox for \"4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8\" returns successfully" May 8 00:14:16.934169 containerd[1432]: time="2025-05-08T00:14:16.934062563Z" level=info msg="RemovePodSandbox for \"4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8\"" May 8 00:14:16.934169 containerd[1432]: time="2025-05-08T00:14:16.934094645Z" level=info msg="Forcibly stopping sandbox \"4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8\"" May 8 00:14:17.006427 containerd[1432]: 2025-05-08 00:14:16.973 [WARNING][5447] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79c5df75c--scvb7-eth0", GenerateName:"calico-apiserver-79c5df75c-", Namespace:"calico-apiserver", SelfLink:"", UID:"12b9a996-dd18-4e57-9070-f80361b7270b", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 13, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79c5df75c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2b3f38ced3fe762aff0f57c874f2153e2e74962f65478f5c1c89f7647d7636e2", Pod:"calico-apiserver-79c5df75c-scvb7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1a10e351aaf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:14:17.006427 containerd[1432]: 2025-05-08 00:14:16.973 [INFO][5447] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8" May 8 00:14:17.006427 containerd[1432]: 2025-05-08 00:14:16.973 [INFO][5447] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8" iface="eth0" netns="" May 8 00:14:17.006427 containerd[1432]: 2025-05-08 00:14:16.973 [INFO][5447] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8" May 8 00:14:17.006427 containerd[1432]: 2025-05-08 00:14:16.973 [INFO][5447] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8" May 8 00:14:17.006427 containerd[1432]: 2025-05-08 00:14:16.992 [INFO][5455] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8" HandleID="k8s-pod-network.4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8" Workload="localhost-k8s-calico--apiserver--79c5df75c--scvb7-eth0" May 8 00:14:17.006427 containerd[1432]: 2025-05-08 00:14:16.992 [INFO][5455] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:14:17.006427 containerd[1432]: 2025-05-08 00:14:16.992 [INFO][5455] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:14:17.006427 containerd[1432]: 2025-05-08 00:14:17.001 [WARNING][5455] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8" HandleID="k8s-pod-network.4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8" Workload="localhost-k8s-calico--apiserver--79c5df75c--scvb7-eth0" May 8 00:14:17.006427 containerd[1432]: 2025-05-08 00:14:17.001 [INFO][5455] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8" HandleID="k8s-pod-network.4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8" Workload="localhost-k8s-calico--apiserver--79c5df75c--scvb7-eth0" May 8 00:14:17.006427 containerd[1432]: 2025-05-08 00:14:17.002 [INFO][5455] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:14:17.006427 containerd[1432]: 2025-05-08 00:14:17.004 [INFO][5447] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8" May 8 00:14:17.006427 containerd[1432]: time="2025-05-08T00:14:17.006007465Z" level=info msg="TearDown network for sandbox \"4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8\" successfully" May 8 00:14:17.016238 containerd[1432]: time="2025-05-08T00:14:17.016080608Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:14:17.016238 containerd[1432]: time="2025-05-08T00:14:17.016149013Z" level=info msg="RemovePodSandbox \"4b923322d426dcbd857eb392b953cf1336883f5adb069d36537b0f176a58dce8\" returns successfully" May 8 00:14:17.016644 containerd[1432]: time="2025-05-08T00:14:17.016624046Z" level=info msg="StopPodSandbox for \"c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88\"" May 8 00:14:17.107946 containerd[1432]: 2025-05-08 00:14:17.073 [WARNING][5478] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--6rqx8-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"a3ed82ee-0db3-4b0d-9d13-06ab474af0f9", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 13, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2a6b98cd102ba4cf70484d2eb8ec2709217778cb417e53e8dbee6f7957d56377", Pod:"coredns-6f6b679f8f-6rqx8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali10ec180fb3e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:14:17.107946 containerd[1432]: 2025-05-08 00:14:17.073 [INFO][5478] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88" May 8 00:14:17.107946 containerd[1432]: 2025-05-08 00:14:17.073 [INFO][5478] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88" iface="eth0" netns="" May 8 00:14:17.107946 containerd[1432]: 2025-05-08 00:14:17.073 [INFO][5478] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88" May 8 00:14:17.107946 containerd[1432]: 2025-05-08 00:14:17.073 [INFO][5478] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88" May 8 00:14:17.107946 containerd[1432]: 2025-05-08 00:14:17.094 [INFO][5486] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88" HandleID="k8s-pod-network.c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88" Workload="localhost-k8s-coredns--6f6b679f8f--6rqx8-eth0" May 8 00:14:17.107946 containerd[1432]: 2025-05-08 00:14:17.094 [INFO][5486] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:14:17.107946 containerd[1432]: 2025-05-08 00:14:17.094 [INFO][5486] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:14:17.107946 containerd[1432]: 2025-05-08 00:14:17.102 [WARNING][5486] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88" HandleID="k8s-pod-network.c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88" Workload="localhost-k8s-coredns--6f6b679f8f--6rqx8-eth0" May 8 00:14:17.107946 containerd[1432]: 2025-05-08 00:14:17.102 [INFO][5486] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88" HandleID="k8s-pod-network.c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88" Workload="localhost-k8s-coredns--6f6b679f8f--6rqx8-eth0" May 8 00:14:17.107946 containerd[1432]: 2025-05-08 00:14:17.104 [INFO][5486] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:14:17.107946 containerd[1432]: 2025-05-08 00:14:17.106 [INFO][5478] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88" May 8 00:14:17.108453 containerd[1432]: time="2025-05-08T00:14:17.108009505Z" level=info msg="TearDown network for sandbox \"c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88\" successfully" May 8 00:14:17.108453 containerd[1432]: time="2025-05-08T00:14:17.108034267Z" level=info msg="StopPodSandbox for \"c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88\" returns successfully" May 8 00:14:17.108506 containerd[1432]: time="2025-05-08T00:14:17.108475538Z" level=info msg="RemovePodSandbox for \"c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88\"" May 8 00:14:17.108529 containerd[1432]: time="2025-05-08T00:14:17.108512140Z" level=info msg="Forcibly stopping sandbox \"c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88\"" May 8 00:14:17.149465 systemd[1]: run-containerd-runc-k8s.io-9ef6e9009eaaa7ed7204b56060084681a4456f8b7e5d7b5d492cd6e79fcdc8d7-runc.AGujdA.mount: Deactivated successfully. May 8 00:14:17.183584 containerd[1432]: 2025-05-08 00:14:17.145 [WARNING][5508] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--6rqx8-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"a3ed82ee-0db3-4b0d-9d13-06ab474af0f9", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 13, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2a6b98cd102ba4cf70484d2eb8ec2709217778cb417e53e8dbee6f7957d56377", Pod:"coredns-6f6b679f8f-6rqx8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali10ec180fb3e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:14:17.183584 containerd[1432]: 2025-05-08 00:14:17.145 [INFO][5508] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88" May 8 00:14:17.183584 containerd[1432]: 2025-05-08 00:14:17.145 [INFO][5508] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88" iface="eth0" netns="" May 8 00:14:17.183584 containerd[1432]: 2025-05-08 00:14:17.145 [INFO][5508] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88" May 8 00:14:17.183584 containerd[1432]: 2025-05-08 00:14:17.145 [INFO][5508] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88" May 8 00:14:17.183584 containerd[1432]: 2025-05-08 00:14:17.170 [INFO][5531] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88" HandleID="k8s-pod-network.c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88" Workload="localhost-k8s-coredns--6f6b679f8f--6rqx8-eth0" May 8 00:14:17.183584 containerd[1432]: 2025-05-08 00:14:17.170 [INFO][5531] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:14:17.183584 containerd[1432]: 2025-05-08 00:14:17.170 [INFO][5531] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:14:17.183584 containerd[1432]: 2025-05-08 00:14:17.179 [WARNING][5531] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88" HandleID="k8s-pod-network.c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88" Workload="localhost-k8s-coredns--6f6b679f8f--6rqx8-eth0" May 8 00:14:17.183584 containerd[1432]: 2025-05-08 00:14:17.179 [INFO][5531] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88" HandleID="k8s-pod-network.c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88" Workload="localhost-k8s-coredns--6f6b679f8f--6rqx8-eth0" May 8 00:14:17.183584 containerd[1432]: 2025-05-08 00:14:17.180 [INFO][5531] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:14:17.183584 containerd[1432]: 2025-05-08 00:14:17.182 [INFO][5508] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88" May 8 00:14:17.184075 containerd[1432]: time="2025-05-08T00:14:17.183619783Z" level=info msg="TearDown network for sandbox \"c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88\" successfully" May 8 00:14:17.186208 containerd[1432]: time="2025-05-08T00:14:17.186179242Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:14:17.186344 containerd[1432]: time="2025-05-08T00:14:17.186234446Z" level=info msg="RemovePodSandbox \"c618a1e3e0607a7a696082b4d849a2278a2b8fd1a59304abf6886e84b76dbf88\" returns successfully" May 8 00:14:17.186764 containerd[1432]: time="2025-05-08T00:14:17.186743601Z" level=info msg="StopPodSandbox for \"28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f\"" May 8 00:14:17.252036 containerd[1432]: 2025-05-08 00:14:17.219 [WARNING][5558] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79c5df75c--cmszx-eth0", GenerateName:"calico-apiserver-79c5df75c-", Namespace:"calico-apiserver", SelfLink:"", UID:"276ac77e-604a-4871-b9fd-fa9015b47098", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 13, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79c5df75c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0c30d7cdad85384cf5d69f9b8eeaad3db2c1b62aac90de3458d47380a0524bcc", Pod:"calico-apiserver-79c5df75c-cmszx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali760735a3beb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:14:17.252036 containerd[1432]: 2025-05-08 00:14:17.219 [INFO][5558] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f" May 8 00:14:17.252036 containerd[1432]: 2025-05-08 00:14:17.219 [INFO][5558] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f" iface="eth0" netns="" May 8 00:14:17.252036 containerd[1432]: 2025-05-08 00:14:17.219 [INFO][5558] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f" May 8 00:14:17.252036 containerd[1432]: 2025-05-08 00:14:17.219 [INFO][5558] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f" May 8 00:14:17.252036 containerd[1432]: 2025-05-08 00:14:17.239 [INFO][5567] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f" HandleID="k8s-pod-network.28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f" Workload="localhost-k8s-calico--apiserver--79c5df75c--cmszx-eth0" May 8 00:14:17.252036 containerd[1432]: 2025-05-08 00:14:17.239 [INFO][5567] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:14:17.252036 containerd[1432]: 2025-05-08 00:14:17.239 [INFO][5567] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:14:17.252036 containerd[1432]: 2025-05-08 00:14:17.247 [WARNING][5567] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f" HandleID="k8s-pod-network.28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f" Workload="localhost-k8s-calico--apiserver--79c5df75c--cmszx-eth0" May 8 00:14:17.252036 containerd[1432]: 2025-05-08 00:14:17.247 [INFO][5567] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f" HandleID="k8s-pod-network.28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f" Workload="localhost-k8s-calico--apiserver--79c5df75c--cmszx-eth0" May 8 00:14:17.252036 containerd[1432]: 2025-05-08 00:14:17.249 [INFO][5567] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:14:17.252036 containerd[1432]: 2025-05-08 00:14:17.250 [INFO][5558] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f" May 8 00:14:17.252788 containerd[1432]: time="2025-05-08T00:14:17.252228573Z" level=info msg="TearDown network for sandbox \"28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f\" successfully" May 8 00:14:17.252788 containerd[1432]: time="2025-05-08T00:14:17.252412025Z" level=info msg="StopPodSandbox for \"28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f\" returns successfully" May 8 00:14:17.253085 containerd[1432]: time="2025-05-08T00:14:17.253059351Z" level=info msg="RemovePodSandbox for \"28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f\"" May 8 00:14:17.253134 containerd[1432]: time="2025-05-08T00:14:17.253095873Z" level=info msg="Forcibly stopping sandbox \"28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f\"" May 8 00:14:17.319324 containerd[1432]: 2025-05-08 00:14:17.286 [WARNING][5590] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79c5df75c--cmszx-eth0", GenerateName:"calico-apiserver-79c5df75c-", Namespace:"calico-apiserver", SelfLink:"", UID:"276ac77e-604a-4871-b9fd-fa9015b47098", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 13, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79c5df75c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0c30d7cdad85384cf5d69f9b8eeaad3db2c1b62aac90de3458d47380a0524bcc", Pod:"calico-apiserver-79c5df75c-cmszx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali760735a3beb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:14:17.319324 containerd[1432]: 2025-05-08 00:14:17.286 [INFO][5590] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f" May 8 00:14:17.319324 containerd[1432]: 2025-05-08 00:14:17.286 [INFO][5590] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f" iface="eth0" netns="" May 8 00:14:17.319324 containerd[1432]: 2025-05-08 00:14:17.286 [INFO][5590] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f" May 8 00:14:17.319324 containerd[1432]: 2025-05-08 00:14:17.286 [INFO][5590] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f" May 8 00:14:17.319324 containerd[1432]: 2025-05-08 00:14:17.305 [INFO][5598] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f" HandleID="k8s-pod-network.28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f" Workload="localhost-k8s-calico--apiserver--79c5df75c--cmszx-eth0" May 8 00:14:17.319324 containerd[1432]: 2025-05-08 00:14:17.305 [INFO][5598] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:14:17.319324 containerd[1432]: 2025-05-08 00:14:17.305 [INFO][5598] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:14:17.319324 containerd[1432]: 2025-05-08 00:14:17.314 [WARNING][5598] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f" HandleID="k8s-pod-network.28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f" Workload="localhost-k8s-calico--apiserver--79c5df75c--cmszx-eth0" May 8 00:14:17.319324 containerd[1432]: 2025-05-08 00:14:17.314 [INFO][5598] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f" HandleID="k8s-pod-network.28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f" Workload="localhost-k8s-calico--apiserver--79c5df75c--cmszx-eth0" May 8 00:14:17.319324 containerd[1432]: 2025-05-08 00:14:17.315 [INFO][5598] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:14:17.319324 containerd[1432]: 2025-05-08 00:14:17.317 [INFO][5590] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f" May 8 00:14:17.319324 containerd[1432]: time="2025-05-08T00:14:17.318502839Z" level=info msg="TearDown network for sandbox \"28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f\" successfully" May 8 00:14:17.321837 containerd[1432]: time="2025-05-08T00:14:17.321798989Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:14:17.321992 containerd[1432]: time="2025-05-08T00:14:17.321966041Z" level=info msg="RemovePodSandbox \"28bf52f96d946d33b21eedc9390fd97b447894401d912b67009979379502744f\" returns successfully" May 8 00:14:17.322566 containerd[1432]: time="2025-05-08T00:14:17.322547841Z" level=info msg="StopPodSandbox for \"251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251\"" May 8 00:14:17.399547 containerd[1432]: 2025-05-08 00:14:17.365 [WARNING][5621] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--dpj96-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"ba17bc01-d9db-4f11-a32c-d317dc8f04b0", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 13, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8ba618773db17effb4a7768cd189fd25a0d69cc54344571ffa64e07eb959793c", Pod:"coredns-6f6b679f8f-dpj96", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieb5ee9e837b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:14:17.399547 containerd[1432]: 2025-05-08 00:14:17.365 [INFO][5621] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251" May 8 00:14:17.399547 containerd[1432]: 2025-05-08 00:14:17.365 [INFO][5621] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251" iface="eth0" netns="" May 8 00:14:17.399547 containerd[1432]: 2025-05-08 00:14:17.365 [INFO][5621] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251" May 8 00:14:17.399547 containerd[1432]: 2025-05-08 00:14:17.365 [INFO][5621] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251" May 8 00:14:17.399547 containerd[1432]: 2025-05-08 00:14:17.386 [INFO][5629] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251" HandleID="k8s-pod-network.251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251" Workload="localhost-k8s-coredns--6f6b679f8f--dpj96-eth0" May 8 00:14:17.399547 containerd[1432]: 2025-05-08 00:14:17.386 [INFO][5629] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:14:17.399547 containerd[1432]: 2025-05-08 00:14:17.386 [INFO][5629] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:14:17.399547 containerd[1432]: 2025-05-08 00:14:17.395 [WARNING][5629] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251" HandleID="k8s-pod-network.251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251" Workload="localhost-k8s-coredns--6f6b679f8f--dpj96-eth0" May 8 00:14:17.399547 containerd[1432]: 2025-05-08 00:14:17.395 [INFO][5629] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251" HandleID="k8s-pod-network.251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251" Workload="localhost-k8s-coredns--6f6b679f8f--dpj96-eth0" May 8 00:14:17.399547 containerd[1432]: 2025-05-08 00:14:17.396 [INFO][5629] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:14:17.399547 containerd[1432]: 2025-05-08 00:14:17.398 [INFO][5621] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251" May 8 00:14:17.399977 containerd[1432]: time="2025-05-08T00:14:17.399579618Z" level=info msg="TearDown network for sandbox \"251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251\" successfully" May 8 00:14:17.399977 containerd[1432]: time="2025-05-08T00:14:17.399631462Z" level=info msg="StopPodSandbox for \"251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251\" returns successfully" May 8 00:14:17.400627 containerd[1432]: time="2025-05-08T00:14:17.400198182Z" level=info msg="RemovePodSandbox for \"251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251\"" May 8 00:14:17.400627 containerd[1432]: time="2025-05-08T00:14:17.400230704Z" level=info msg="Forcibly stopping sandbox \"251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251\"" May 8 00:14:17.473503 containerd[1432]: 2025-05-08 00:14:17.434 [WARNING][5652] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--dpj96-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"ba17bc01-d9db-4f11-a32c-d317dc8f04b0", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 13, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8ba618773db17effb4a7768cd189fd25a0d69cc54344571ffa64e07eb959793c", Pod:"coredns-6f6b679f8f-dpj96", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieb5ee9e837b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:14:17.473503 containerd[1432]: 2025-05-08 00:14:17.435 [INFO][5652] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251" May 8 00:14:17.473503 containerd[1432]: 2025-05-08 00:14:17.435 [INFO][5652] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251" iface="eth0" netns="" May 8 00:14:17.473503 containerd[1432]: 2025-05-08 00:14:17.435 [INFO][5652] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251" May 8 00:14:17.473503 containerd[1432]: 2025-05-08 00:14:17.435 [INFO][5652] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251" May 8 00:14:17.473503 containerd[1432]: 2025-05-08 00:14:17.455 [INFO][5660] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251" HandleID="k8s-pod-network.251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251" Workload="localhost-k8s-coredns--6f6b679f8f--dpj96-eth0" May 8 00:14:17.473503 containerd[1432]: 2025-05-08 00:14:17.455 [INFO][5660] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:14:17.473503 containerd[1432]: 2025-05-08 00:14:17.455 [INFO][5660] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:14:17.473503 containerd[1432]: 2025-05-08 00:14:17.465 [WARNING][5660] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251" HandleID="k8s-pod-network.251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251" Workload="localhost-k8s-coredns--6f6b679f8f--dpj96-eth0" May 8 00:14:17.473503 containerd[1432]: 2025-05-08 00:14:17.465 [INFO][5660] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251" HandleID="k8s-pod-network.251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251" Workload="localhost-k8s-coredns--6f6b679f8f--dpj96-eth0" May 8 00:14:17.473503 containerd[1432]: 2025-05-08 00:14:17.467 [INFO][5660] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:14:17.473503 containerd[1432]: 2025-05-08 00:14:17.472 [INFO][5652] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251" May 8 00:14:17.474105 containerd[1432]: time="2025-05-08T00:14:17.473538421Z" level=info msg="TearDown network for sandbox \"251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251\" successfully" May 8 00:14:17.483417 containerd[1432]: time="2025-05-08T00:14:17.483371028Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:14:17.483501 containerd[1432]: time="2025-05-08T00:14:17.483442473Z" level=info msg="RemovePodSandbox \"251564e244f979a80b7b7d2a24f16256bb60ac0c3c12161d9c9287c38e59f251\" returns successfully" May 8 00:14:21.659150 systemd[1]: Started sshd@19-10.0.0.14:22-10.0.0.1:52448.service - OpenSSH per-connection server daemon (10.0.0.1:52448). May 8 00:14:21.706172 sshd[5689]: Accepted publickey for core from 10.0.0.1 port 52448 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:14:21.707531 sshd[5689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:14:21.711182 systemd-logind[1416]: New session 20 of user core. May 8 00:14:21.722475 systemd[1]: Started session-20.scope - Session 20 of User core. May 8 00:14:21.867429 sshd[5689]: pam_unix(sshd:session): session closed for user core May 8 00:14:21.871067 systemd[1]: sshd@19-10.0.0.14:22-10.0.0.1:52448.service: Deactivated successfully. May 8 00:14:21.873063 systemd[1]: session-20.scope: Deactivated successfully. May 8 00:14:21.873675 systemd-logind[1416]: Session 20 logged out. Waiting for processes to exit. May 8 00:14:21.874560 systemd-logind[1416]: Removed session 20.