May 9 00:03:28.883012 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 9 00:03:28.883034 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu May 8 22:43:24 -00 2025
May 9 00:03:28.883044 kernel: KASLR enabled
May 9 00:03:28.883050 kernel: efi: EFI v2.7 by EDK II
May 9 00:03:28.883055 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
May 9 00:03:28.883061 kernel: random: crng init done
May 9 00:03:28.883068 kernel: ACPI: Early table checksum verification disabled
May 9 00:03:28.883074 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
May 9 00:03:28.883080 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
May 9 00:03:28.883087 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:03:28.883094 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:03:28.883099 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:03:28.883106 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:03:28.883112 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:03:28.883119 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:03:28.883127 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:03:28.883134 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:03:28.883140 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:03:28.883146 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 9 00:03:28.883153 kernel: NUMA: Failed to initialise from firmware
May 9 00:03:28.883159 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 9 00:03:28.883165 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
May 9 00:03:28.883171 kernel: Zone ranges:
May 9 00:03:28.883178 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 9 00:03:28.883184 kernel: DMA32 empty
May 9 00:03:28.883191 kernel: Normal empty
May 9 00:03:28.883197 kernel: Movable zone start for each node
May 9 00:03:28.883203 kernel: Early memory node ranges
May 9 00:03:28.883210 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
May 9 00:03:28.883216 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
May 9 00:03:28.883222 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
May 9 00:03:28.883229 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 9 00:03:28.883235 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 9 00:03:28.883241 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 9 00:03:28.883247 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 9 00:03:28.883254 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 9 00:03:28.883260 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 9 00:03:28.883268 kernel: psci: probing for conduit method from ACPI.
May 9 00:03:28.883274 kernel: psci: PSCIv1.1 detected in firmware.
May 9 00:03:28.883280 kernel: psci: Using standard PSCI v0.2 function IDs
May 9 00:03:28.883289 kernel: psci: Trusted OS migration not required
May 9 00:03:28.883296 kernel: psci: SMC Calling Convention v1.1
May 9 00:03:28.883303 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 9 00:03:28.883310 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 9 00:03:28.883317 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 9 00:03:28.883324 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 9 00:03:28.883331 kernel: Detected PIPT I-cache on CPU0
May 9 00:03:28.883337 kernel: CPU features: detected: GIC system register CPU interface
May 9 00:03:28.883344 kernel: CPU features: detected: Hardware dirty bit management
May 9 00:03:28.883351 kernel: CPU features: detected: Spectre-v4
May 9 00:03:28.883357 kernel: CPU features: detected: Spectre-BHB
May 9 00:03:28.883364 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 9 00:03:28.883371 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 9 00:03:28.883378 kernel: CPU features: detected: ARM erratum 1418040
May 9 00:03:28.883385 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 9 00:03:28.883391 kernel: alternatives: applying boot alternatives
May 9 00:03:28.883399 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=8e29bd932c31237847976018676f554a4d09fa105e08b3bc01bcbb09708aefd3
May 9 00:03:28.883406 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 9 00:03:28.883413 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 9 00:03:28.883419 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 9 00:03:28.883426 kernel: Fallback order for Node 0: 0
May 9 00:03:28.883433 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 9 00:03:28.883440 kernel: Policy zone: DMA
May 9 00:03:28.883446 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 9 00:03:28.883454 kernel: software IO TLB: area num 4.
May 9 00:03:28.883461 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
May 9 00:03:28.883468 kernel: Memory: 2386404K/2572288K available (10304K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 185884K reserved, 0K cma-reserved)
May 9 00:03:28.883475 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 9 00:03:28.883481 kernel: rcu: Preemptible hierarchical RCU implementation.
May 9 00:03:28.883489 kernel: rcu: RCU event tracing is enabled.
May 9 00:03:28.883496 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 9 00:03:28.883503 kernel: Trampoline variant of Tasks RCU enabled.
May 9 00:03:28.883509 kernel: Tracing variant of Tasks RCU enabled.
May 9 00:03:28.883516 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 9 00:03:28.883523 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 9 00:03:28.883530 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 9 00:03:28.883537 kernel: GICv3: 256 SPIs implemented
May 9 00:03:28.883544 kernel: GICv3: 0 Extended SPIs implemented
May 9 00:03:28.883551 kernel: Root IRQ handler: gic_handle_irq
May 9 00:03:28.883557 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 9 00:03:28.883564 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 9 00:03:28.883570 kernel: ITS [mem 0x08080000-0x0809ffff]
May 9 00:03:28.883578 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
May 9 00:03:28.883584 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
May 9 00:03:28.883591 kernel: GICv3: using LPI property table @0x00000000400f0000
May 9 00:03:28.883598 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
May 9 00:03:28.883605 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 9 00:03:28.883612 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 9 00:03:28.883619 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 9 00:03:28.883626 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 9 00:03:28.883633 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 9 00:03:28.883640 kernel: arm-pv: using stolen time PV
May 9 00:03:28.883646 kernel: Console: colour dummy device 80x25
May 9 00:03:28.883653 kernel: ACPI: Core revision 20230628
May 9 00:03:28.883661 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 9 00:03:28.883667 kernel: pid_max: default: 32768 minimum: 301
May 9 00:03:28.883674 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 9 00:03:28.883682 kernel: landlock: Up and running.
May 9 00:03:28.883689 kernel: SELinux: Initializing.
May 9 00:03:28.883696 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 9 00:03:28.883703 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 9 00:03:28.883710 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 9 00:03:28.883717 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 9 00:03:28.883724 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 9 00:03:28.883731 kernel: rcu: Hierarchical SRCU implementation.
May 9 00:03:28.883738 kernel: rcu: Max phase no-delay instances is 400.
May 9 00:03:28.883746 kernel: Platform MSI: ITS@0x8080000 domain created
May 9 00:03:28.883753 kernel: PCI/MSI: ITS@0x8080000 domain created
May 9 00:03:28.883760 kernel: Remapping and enabling EFI services.
May 9 00:03:28.883767 kernel: smp: Bringing up secondary CPUs ...
May 9 00:03:28.883774 kernel: Detected PIPT I-cache on CPU1
May 9 00:03:28.883781 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 9 00:03:28.883788 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
May 9 00:03:28.883795 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 9 00:03:28.883802 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 9 00:03:28.883809 kernel: Detected PIPT I-cache on CPU2
May 9 00:03:28.883816 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 9 00:03:28.883824 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
May 9 00:03:28.883835 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 9 00:03:28.883850 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 9 00:03:28.883858 kernel: Detected PIPT I-cache on CPU3
May 9 00:03:28.883865 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 9 00:03:28.883872 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
May 9 00:03:28.883879 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 9 00:03:28.883886 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 9 00:03:28.883894 kernel: smp: Brought up 1 node, 4 CPUs
May 9 00:03:28.883903 kernel: SMP: Total of 4 processors activated.
May 9 00:03:28.883910 kernel: CPU features: detected: 32-bit EL0 Support
May 9 00:03:28.883917 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 9 00:03:28.883925 kernel: CPU features: detected: Common not Private translations
May 9 00:03:28.883932 kernel: CPU features: detected: CRC32 instructions
May 9 00:03:28.883939 kernel: CPU features: detected: Enhanced Virtualization Traps
May 9 00:03:28.883947 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 9 00:03:28.883954 kernel: CPU features: detected: LSE atomic instructions
May 9 00:03:28.883961 kernel: CPU features: detected: Privileged Access Never
May 9 00:03:28.883969 kernel: CPU features: detected: RAS Extension Support
May 9 00:03:28.883976 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 9 00:03:28.883983 kernel: CPU: All CPU(s) started at EL1
May 9 00:03:28.883995 kernel: alternatives: applying system-wide alternatives
May 9 00:03:28.884003 kernel: devtmpfs: initialized
May 9 00:03:28.884010 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 9 00:03:28.884019 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 9 00:03:28.884026 kernel: pinctrl core: initialized pinctrl subsystem
May 9 00:03:28.884033 kernel: SMBIOS 3.0.0 present.
May 9 00:03:28.884040 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
May 9 00:03:28.884047 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 9 00:03:28.884055 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 9 00:03:28.884062 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 9 00:03:28.884069 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 9 00:03:28.884076 kernel: audit: initializing netlink subsys (disabled)
May 9 00:03:28.884085 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
May 9 00:03:28.884092 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 9 00:03:28.884099 kernel: cpuidle: using governor menu
May 9 00:03:28.884106 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 9 00:03:28.884113 kernel: ASID allocator initialised with 32768 entries
May 9 00:03:28.884120 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 9 00:03:28.884128 kernel: Serial: AMBA PL011 UART driver
May 9 00:03:28.884135 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 9 00:03:28.884142 kernel: Modules: 0 pages in range for non-PLT usage
May 9 00:03:28.884150 kernel: Modules: 509008 pages in range for PLT usage
May 9 00:03:28.884157 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 9 00:03:28.884165 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 9 00:03:28.884172 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 9 00:03:28.884179 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 9 00:03:28.884186 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 9 00:03:28.884193 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 9 00:03:28.884201 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 9 00:03:28.884208 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 9 00:03:28.884215 kernel: ACPI: Added _OSI(Module Device)
May 9 00:03:28.884223 kernel: ACPI: Added _OSI(Processor Device)
May 9 00:03:28.884230 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 9 00:03:28.884237 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 9 00:03:28.884245 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 9 00:03:28.884252 kernel: ACPI: Interpreter enabled
May 9 00:03:28.884259 kernel: ACPI: Using GIC for interrupt routing
May 9 00:03:28.884266 kernel: ACPI: MCFG table detected, 1 entries
May 9 00:03:28.884273 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 9 00:03:28.884280 kernel: printk: console [ttyAMA0] enabled
May 9 00:03:28.884289 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 9 00:03:28.884423 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 9 00:03:28.884496 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 9 00:03:28.884562 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 9 00:03:28.884626 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 9 00:03:28.884689 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 9 00:03:28.884698 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 9 00:03:28.884708 kernel: PCI host bridge to bus 0000:00
May 9 00:03:28.884780 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 9 00:03:28.884839 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 9 00:03:28.884913 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 9 00:03:28.884983 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 9 00:03:28.885078 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 9 00:03:28.885155 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 9 00:03:28.885227 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 9 00:03:28.885293 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 9 00:03:28.885358 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 9 00:03:28.885423 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 9 00:03:28.885487 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 9 00:03:28.885552 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 9 00:03:28.885613 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 9 00:03:28.885670 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 9 00:03:28.885732 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 9 00:03:28.885742 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 9 00:03:28.885749 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 9 00:03:28.885757 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 9 00:03:28.885764 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 9 00:03:28.885771 kernel: iommu: Default domain type: Translated
May 9 00:03:28.885779 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 9 00:03:28.885788 kernel: efivars: Registered efivars operations
May 9 00:03:28.885795 kernel: vgaarb: loaded
May 9 00:03:28.885802 kernel: clocksource: Switched to clocksource arch_sys_counter
May 9 00:03:28.885810 kernel: VFS: Disk quotas dquot_6.6.0
May 9 00:03:28.885817 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 9 00:03:28.885825 kernel: pnp: PnP ACPI init
May 9 00:03:28.885909 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 9 00:03:28.885921 kernel: pnp: PnP ACPI: found 1 devices
May 9 00:03:28.885930 kernel: NET: Registered PF_INET protocol family
May 9 00:03:28.885937 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 9 00:03:28.885945 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 9 00:03:28.885952 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 9 00:03:28.885960 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 9 00:03:28.885967 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 9 00:03:28.885974 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 9 00:03:28.885982 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 9 00:03:28.886058 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 9 00:03:28.886068 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 9 00:03:28.886075 kernel: PCI: CLS 0 bytes, default 64
May 9 00:03:28.886082 kernel: kvm [1]: HYP mode not available
May 9 00:03:28.886090 kernel: Initialise system trusted keyrings
May 9 00:03:28.886097 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 9 00:03:28.886105 kernel: Key type asymmetric registered
May 9 00:03:28.886112 kernel: Asymmetric key parser 'x509' registered
May 9 00:03:28.886119 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 9 00:03:28.886126 kernel: io scheduler mq-deadline registered
May 9 00:03:28.886135 kernel: io scheduler kyber registered
May 9 00:03:28.886142 kernel: io scheduler bfq registered
May 9 00:03:28.886149 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 9 00:03:28.886156 kernel: ACPI: button: Power Button [PWRB]
May 9 00:03:28.886164 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 9 00:03:28.886248 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 9 00:03:28.886264 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 9 00:03:28.886271 kernel: thunder_xcv, ver 1.0
May 9 00:03:28.886278 kernel: thunder_bgx, ver 1.0
May 9 00:03:28.886287 kernel: nicpf, ver 1.0
May 9 00:03:28.886294 kernel: nicvf, ver 1.0
May 9 00:03:28.886367 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 9 00:03:28.886430 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-09T00:03:28 UTC (1746749008)
May 9 00:03:28.886440 kernel: hid: raw HID events driver (C) Jiri Kosina
May 9 00:03:28.886447 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 9 00:03:28.886454 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 9 00:03:28.886462 kernel: watchdog: Hard watchdog permanently disabled
May 9 00:03:28.886471 kernel: NET: Registered PF_INET6 protocol family
May 9 00:03:28.886479 kernel: Segment Routing with IPv6
May 9 00:03:28.886486 kernel: In-situ OAM (IOAM) with IPv6
May 9 00:03:28.886493 kernel: NET: Registered PF_PACKET protocol family
May 9 00:03:28.886500 kernel: Key type dns_resolver registered
May 9 00:03:28.886508 kernel: registered taskstats version 1
May 9 00:03:28.886515 kernel: Loading compiled-in X.509 certificates
May 9 00:03:28.886523 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 7944e0e0bec5e8cad487856da19569eba337cea0'
May 9 00:03:28.886530 kernel: Key type .fscrypt registered
May 9 00:03:28.886538 kernel: Key type fscrypt-provisioning registered
May 9 00:03:28.886546 kernel: ima: No TPM chip found, activating TPM-bypass!
May 9 00:03:28.886553 kernel: ima: Allocated hash algorithm: sha1
May 9 00:03:28.886560 kernel: ima: No architecture policies found
May 9 00:03:28.886567 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 9 00:03:28.886575 kernel: clk: Disabling unused clocks
May 9 00:03:28.886582 kernel: Freeing unused kernel memory: 39424K
May 9 00:03:28.886589 kernel: Run /init as init process
May 9 00:03:28.886596 kernel: with arguments:
May 9 00:03:28.886605 kernel: /init
May 9 00:03:28.886612 kernel: with environment:
May 9 00:03:28.886619 kernel: HOME=/
May 9 00:03:28.886626 kernel: TERM=linux
May 9 00:03:28.886632 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 9 00:03:28.886641 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 9 00:03:28.886650 systemd[1]: Detected virtualization kvm.
May 9 00:03:28.886659 systemd[1]: Detected architecture arm64.
May 9 00:03:28.886667 systemd[1]: Running in initrd.
May 9 00:03:28.886675 systemd[1]: No hostname configured, using default hostname.
May 9 00:03:28.886682 systemd[1]: Hostname set to .
May 9 00:03:28.886690 systemd[1]: Initializing machine ID from VM UUID.
May 9 00:03:28.886698 systemd[1]: Queued start job for default target initrd.target.
May 9 00:03:28.886706 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 00:03:28.886713 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 00:03:28.886721 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 9 00:03:28.886731 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 9 00:03:28.886739 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 9 00:03:28.886747 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 9 00:03:28.886756 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 9 00:03:28.886764 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 9 00:03:28.886772 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 00:03:28.886782 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 9 00:03:28.886790 systemd[1]: Reached target paths.target - Path Units.
May 9 00:03:28.886797 systemd[1]: Reached target slices.target - Slice Units.
May 9 00:03:28.886805 systemd[1]: Reached target swap.target - Swaps.
May 9 00:03:28.886813 systemd[1]: Reached target timers.target - Timer Units.
May 9 00:03:28.886820 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 9 00:03:28.886828 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 9 00:03:28.886836 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 9 00:03:28.886852 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 9 00:03:28.886862 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 9 00:03:28.886870 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 9 00:03:28.886877 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 00:03:28.886885 systemd[1]: Reached target sockets.target - Socket Units.
May 9 00:03:28.886893 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 9 00:03:28.886901 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 9 00:03:28.886909 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 9 00:03:28.886916 systemd[1]: Starting systemd-fsck-usr.service...
May 9 00:03:28.886926 systemd[1]: Starting systemd-journald.service - Journal Service...
May 9 00:03:28.886933 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 9 00:03:28.886941 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 00:03:28.886949 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 9 00:03:28.886957 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 00:03:28.886964 systemd[1]: Finished systemd-fsck-usr.service.
May 9 00:03:28.886974 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 9 00:03:28.887007 systemd-journald[236]: Collecting audit messages is disabled.
May 9 00:03:28.887026 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 00:03:28.887037 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 00:03:28.887045 systemd-journald[236]: Journal started
May 9 00:03:28.887065 systemd-journald[236]: Runtime Journal (/run/log/journal/78d78cdc59854138b560d04b653a12b4) is 5.9M, max 47.3M, 41.4M free.
May 9 00:03:28.878559 systemd-modules-load[238]: Inserted module 'overlay'
May 9 00:03:28.888834 systemd[1]: Started systemd-journald.service - Journal Service.
May 9 00:03:28.889929 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 9 00:03:28.894011 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 9 00:03:28.894348 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 9 00:03:28.897359 kernel: Bridge firewalling registered
May 9 00:03:28.896449 systemd-modules-load[238]: Inserted module 'br_netfilter'
May 9 00:03:28.897140 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 9 00:03:28.899008 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 9 00:03:28.904359 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 9 00:03:28.906832 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 00:03:28.910176 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 00:03:28.912437 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 00:03:28.922141 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 9 00:03:28.923220 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 9 00:03:28.926252 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 9 00:03:28.936327 dracut-cmdline[276]: dracut-dracut-053
May 9 00:03:28.938716 dracut-cmdline[276]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=8e29bd932c31237847976018676f554a4d09fa105e08b3bc01bcbb09708aefd3
May 9 00:03:28.956800 systemd-resolved[280]: Positive Trust Anchors:
May 9 00:03:28.956819 systemd-resolved[280]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 9 00:03:28.956858 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 9 00:03:28.962751 systemd-resolved[280]: Defaulting to hostname 'linux'.
May 9 00:03:28.968483 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 9 00:03:28.969636 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 9 00:03:29.009030 kernel: SCSI subsystem initialized
May 9 00:03:29.014010 kernel: Loading iSCSI transport class v2.0-870.
May 9 00:03:29.024006 kernel: iscsi: registered transport (tcp)
May 9 00:03:29.034255 kernel: iscsi: registered transport (qla4xxx)
May 9 00:03:29.034293 kernel: QLogic iSCSI HBA Driver
May 9 00:03:29.075361 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 9 00:03:29.089169 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 9 00:03:29.107017 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 9 00:03:29.107088 kernel: device-mapper: uevent: version 1.0.3
May 9 00:03:29.107105 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 9 00:03:29.155029 kernel: raid6: neonx8 gen() 15752 MB/s
May 9 00:03:29.172007 kernel: raid6: neonx4 gen() 15619 MB/s
May 9 00:03:29.189005 kernel: raid6: neonx2 gen() 13204 MB/s
May 9 00:03:29.206010 kernel: raid6: neonx1 gen() 10495 MB/s
May 9 00:03:29.223011 kernel: raid6: int64x8 gen() 6931 MB/s
May 9 00:03:29.240011 kernel: raid6: int64x4 gen() 7344 MB/s
May 9 00:03:29.257012 kernel: raid6: int64x2 gen() 6127 MB/s
May 9 00:03:29.274008 kernel: raid6: int64x1 gen() 5031 MB/s
May 9 00:03:29.274037 kernel: raid6: using algorithm neonx8 gen() 15752 MB/s
May 9 00:03:29.291002 kernel: raid6: .... xor() 11931 MB/s, rmw enabled
May 9 00:03:29.291024 kernel: raid6: using neon recovery algorithm
May 9 00:03:29.296014 kernel: xor: measuring software checksum speed
May 9 00:03:29.296045 kernel: 8regs : 19754 MB/sec
May 9 00:03:29.297382 kernel: 32regs : 18747 MB/sec
May 9 00:03:29.297395 kernel: arm64_neon : 27096 MB/sec
May 9 00:03:29.297405 kernel: xor: using function: arm64_neon (27096 MB/sec)
May 9 00:03:29.347015 kernel: Btrfs loaded, zoned=no, fsverity=no
May 9 00:03:29.356860 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 9 00:03:29.368160 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 00:03:29.379202 systemd-udevd[463]: Using default interface naming scheme 'v255'.
May 9 00:03:29.382285 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 00:03:29.395250 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 9 00:03:29.407419 dracut-pre-trigger[471]: rd.md=0: removing MD RAID activation
May 9 00:03:29.433096 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 9 00:03:29.445129 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 9 00:03:29.490340 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 00:03:29.501578 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 9 00:03:29.513566 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 9 00:03:29.517162 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 9 00:03:29.518053 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 00:03:29.519824 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 9 00:03:29.532196 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 9 00:03:29.538668 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 9 00:03:29.538826 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 9 00:03:29.542696 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 9 00:03:29.542807 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 00:03:29.547758 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 00:03:29.551401 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 9 00:03:29.551423 kernel: GPT:9289727 != 19775487
May 9 00:03:29.551433 kernel: GPT:Alternate GPT header not at the end of the disk.
May 9 00:03:29.551443 kernel: GPT:9289727 != 19775487
May 9 00:03:29.551459 kernel: GPT: Use GNU Parted to correct GPT errors.
May 9 00:03:29.551469 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 9 00:03:29.550207 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 9 00:03:29.550347 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 9 00:03:29.552393 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 9 00:03:29.566611 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 00:03:29.569035 kernel: BTRFS: device fsid 9a510efc-c158-4845-bfb8-279f8b20070f devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (527)
May 9 00:03:29.568273 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 9 00:03:29.574186 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (514)
May 9 00:03:29.579737 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 00:03:29.584755 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 9 00:03:29.590446 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 9 00:03:29.599258 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 9 00:03:29.600183 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 9 00:03:29.606228 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 9 00:03:29.618203 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 9 00:03:29.619910 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 00:03:29.625675 disk-uuid[554]: Primary Header is updated.
May 9 00:03:29.625675 disk-uuid[554]: Secondary Entries is updated.
May 9 00:03:29.625675 disk-uuid[554]: Secondary Header is updated.
May 9 00:03:29.631027 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 9 00:03:29.637288 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 00:03:30.640037 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 9 00:03:30.640671 disk-uuid[556]: The operation has completed successfully.
May 9 00:03:30.657650 systemd[1]: disk-uuid.service: Deactivated successfully.
May 9 00:03:30.657741 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 9 00:03:30.690158 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 9 00:03:30.694059 sh[577]: Success
May 9 00:03:30.707189 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 9 00:03:30.746480 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 9 00:03:30.748135 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 9 00:03:30.749041 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 9 00:03:30.760015 kernel: BTRFS info (device dm-0): first mount of filesystem 9a510efc-c158-4845-bfb8-279f8b20070f
May 9 00:03:30.760051 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 9 00:03:30.760063 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 9 00:03:30.761360 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 9 00:03:30.761375 kernel: BTRFS info (device dm-0): using free space tree
May 9 00:03:30.765464 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 9 00:03:30.766901 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 9 00:03:30.776167 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 9 00:03:30.778501 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 9 00:03:30.785206 kernel: BTRFS info (device vda6): first mount of filesystem 9e7e8c5a-aee3-4b23-ab26-fabdbd68734c
May 9 00:03:30.785254 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 9 00:03:30.785272 kernel: BTRFS info (device vda6): using free space tree
May 9 00:03:30.788020 kernel: BTRFS info (device vda6): auto enabling async discard
May 9 00:03:30.796123 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 9 00:03:30.797468 kernel: BTRFS info (device vda6): last unmount of filesystem 9e7e8c5a-aee3-4b23-ab26-fabdbd68734c
May 9 00:03:30.804860 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 9 00:03:30.810185 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 9 00:03:30.887046 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 9 00:03:30.897176 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 9 00:03:30.934802 systemd-networkd[766]: lo: Link UP
May 9 00:03:30.934822 systemd-networkd[766]: lo: Gained carrier
May 9 00:03:30.935526 systemd-networkd[766]: Enumeration completed
May 9 00:03:30.935647 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 9 00:03:30.936141 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 00:03:30.939826 ignition[668]: Ignition 2.19.0
May 9 00:03:30.936145 systemd-networkd[766]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 9 00:03:30.939832 ignition[668]: Stage: fetch-offline
May 9 00:03:30.937123 systemd[1]: Reached target network.target - Network.
May 9 00:03:30.939876 ignition[668]: no configs at "/usr/lib/ignition/base.d"
May 9 00:03:30.937141 systemd-networkd[766]: eth0: Link UP
May 9 00:03:30.939885 ignition[668]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 00:03:30.937146 systemd-networkd[766]: eth0: Gained carrier
May 9 00:03:30.940051 ignition[668]: parsed url from cmdline: ""
May 9 00:03:30.937153 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 00:03:30.940054 ignition[668]: no config URL provided
May 9 00:03:30.940059 ignition[668]: reading system config file "/usr/lib/ignition/user.ign"
May 9 00:03:30.940066 ignition[668]: no config at "/usr/lib/ignition/user.ign"
May 9 00:03:30.940087 ignition[668]: op(1): [started] loading QEMU firmware config module
May 9 00:03:30.940092 ignition[668]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 9 00:03:30.951888 ignition[668]: op(1): [finished] loading QEMU firmware config module
May 9 00:03:30.962038 systemd-networkd[766]: eth0: DHCPv4 address 10.0.0.43/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 9 00:03:30.993914 ignition[668]: parsing config with SHA512: ae49edc5ff85fd0e85b419097826c2a0338ce0a982f43ae4a487e65dddac89c47972e84f096f7af09996f23eeff7d038830a95696cb6e68f0f375d13c249274b
May 9 00:03:31.000291 unknown[668]: fetched base config from "system"
May 9 00:03:31.000301 unknown[668]: fetched user config from "qemu"
May 9 00:03:31.000717 ignition[668]: fetch-offline: fetch-offline passed
May 9 00:03:31.002723 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 9 00:03:31.000777 ignition[668]: Ignition finished successfully
May 9 00:03:31.004257 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 9 00:03:31.014156 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 9 00:03:31.025201 ignition[773]: Ignition 2.19.0
May 9 00:03:31.025210 ignition[773]: Stage: kargs
May 9 00:03:31.025376 ignition[773]: no configs at "/usr/lib/ignition/base.d"
May 9 00:03:31.025385 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 00:03:31.026251 ignition[773]: kargs: kargs passed
May 9 00:03:31.026295 ignition[773]: Ignition finished successfully
May 9 00:03:31.031064 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 9 00:03:31.041144 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 9 00:03:31.051459 ignition[782]: Ignition 2.19.0
May 9 00:03:31.051470 ignition[782]: Stage: disks
May 9 00:03:31.051633 ignition[782]: no configs at "/usr/lib/ignition/base.d"
May 9 00:03:31.051642 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 00:03:31.052518 ignition[782]: disks: disks passed
May 9 00:03:31.052572 ignition[782]: Ignition finished successfully
May 9 00:03:31.055047 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 9 00:03:31.056603 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 9 00:03:31.057784 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 9 00:03:31.059464 systemd[1]: Reached target local-fs.target - Local File Systems.
May 9 00:03:31.061027 systemd[1]: Reached target sysinit.target - System Initialization.
May 9 00:03:31.062714 systemd[1]: Reached target basic.target - Basic System.
May 9 00:03:31.074152 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 9 00:03:31.085240 systemd-fsck[791]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 9 00:03:31.088777 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 9 00:03:31.105092 systemd[1]: Mounting sysroot.mount - /sysroot...
May 9 00:03:31.152004 kernel: EXT4-fs (vda9): mounted filesystem 1a8c7c5d-87ec-4bc4-aa01-1ebc1d3c20e7 r/w with ordered data mode. Quota mode: none.
May 9 00:03:31.152791 systemd[1]: Mounted sysroot.mount - /sysroot.
May 9 00:03:31.154116 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 9 00:03:31.169089 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 9 00:03:31.170806 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 9 00:03:31.171981 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 9 00:03:31.172080 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 9 00:03:31.172108 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 9 00:03:31.178351 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (799)
May 9 00:03:31.178002 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 9 00:03:31.181657 kernel: BTRFS info (device vda6): first mount of filesystem 9e7e8c5a-aee3-4b23-ab26-fabdbd68734c
May 9 00:03:31.181673 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 9 00:03:31.181683 kernel: BTRFS info (device vda6): using free space tree
May 9 00:03:31.180652 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 9 00:03:31.184067 kernel: BTRFS info (device vda6): auto enabling async discard
May 9 00:03:31.185685 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 9 00:03:31.223829 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory
May 9 00:03:31.228247 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory
May 9 00:03:31.231881 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory
May 9 00:03:31.235538 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory
May 9 00:03:31.301217 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 9 00:03:31.320191 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 9 00:03:31.322753 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 9 00:03:31.327036 kernel: BTRFS info (device vda6): last unmount of filesystem 9e7e8c5a-aee3-4b23-ab26-fabdbd68734c
May 9 00:03:31.345094 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 9 00:03:31.351029 ignition[914]: INFO : Ignition 2.19.0
May 9 00:03:31.351029 ignition[914]: INFO : Stage: mount
May 9 00:03:31.352473 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 00:03:31.352473 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 00:03:31.352473 ignition[914]: INFO : mount: mount passed
May 9 00:03:31.352473 ignition[914]: INFO : Ignition finished successfully
May 9 00:03:31.353787 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 9 00:03:31.363150 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 9 00:03:31.759440 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 9 00:03:31.769200 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 9 00:03:31.775001 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (927)
May 9 00:03:31.777251 kernel: BTRFS info (device vda6): first mount of filesystem 9e7e8c5a-aee3-4b23-ab26-fabdbd68734c
May 9 00:03:31.777268 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 9 00:03:31.777278 kernel: BTRFS info (device vda6): using free space tree
May 9 00:03:31.780012 kernel: BTRFS info (device vda6): auto enabling async discard
May 9 00:03:31.780557 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 9 00:03:31.804647 ignition[945]: INFO : Ignition 2.19.0
May 9 00:03:31.804647 ignition[945]: INFO : Stage: files
May 9 00:03:31.806279 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 00:03:31.806279 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 00:03:31.806279 ignition[945]: DEBUG : files: compiled without relabeling support, skipping
May 9 00:03:31.809735 ignition[945]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 9 00:03:31.809735 ignition[945]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 9 00:03:31.809735 ignition[945]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 9 00:03:31.809735 ignition[945]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 9 00:03:31.809735 ignition[945]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 9 00:03:31.809520 unknown[945]: wrote ssh authorized keys file for user: core
May 9 00:03:31.817145 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 9 00:03:31.817145 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 9 00:03:31.888818 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 9 00:03:32.151529 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 9 00:03:32.151529 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 9 00:03:32.155492 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 9 00:03:32.155492 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 9 00:03:32.155492 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 9 00:03:32.155492 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 9 00:03:32.155492 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 9 00:03:32.155492 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 9 00:03:32.155492 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 9 00:03:32.155492 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 9 00:03:32.155492 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 9 00:03:32.155492 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 9 00:03:32.155492 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 9 00:03:32.155492 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 9 00:03:32.155492 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
May 9 00:03:32.501220 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 9 00:03:32.817200 systemd-networkd[766]: eth0: Gained IPv6LL
May 9 00:03:32.858003 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 9 00:03:32.858003 ignition[945]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 9 00:03:32.861733 ignition[945]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 9 00:03:32.861733 ignition[945]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 9 00:03:32.861733 ignition[945]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 9 00:03:32.861733 ignition[945]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
May 9 00:03:32.861733 ignition[945]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 9 00:03:32.861733 ignition[945]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 9 00:03:32.861733 ignition[945]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
May 9 00:03:32.861733 ignition[945]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
May 9 00:03:32.883527 ignition[945]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 9 00:03:32.887447 ignition[945]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 9 00:03:32.889722 ignition[945]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
May 9 00:03:32.889722 ignition[945]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
May 9 00:03:32.889722 ignition[945]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
May 9 00:03:32.889722 ignition[945]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
May 9 00:03:32.889722 ignition[945]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 9 00:03:32.889722 ignition[945]: INFO : files: files passed
May 9 00:03:32.889722 ignition[945]: INFO : Ignition finished successfully
May 9 00:03:32.890451 systemd[1]: Finished ignition-files.service - Ignition (files).
May 9 00:03:32.901149 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 9 00:03:32.902978 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 9 00:03:32.904750 systemd[1]: ignition-quench.service: Deactivated successfully.
May 9 00:03:32.904843 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 9 00:03:32.910678 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory
May 9 00:03:32.912875 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 9 00:03:32.912875 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 9 00:03:32.916099 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 9 00:03:32.914699 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 9 00:03:32.917694 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 9 00:03:32.925201 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 9 00:03:32.945254 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 9 00:03:32.945371 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 9 00:03:32.947621 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 9 00:03:32.949533 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 9 00:03:32.951428 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 9 00:03:32.960136 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 9 00:03:32.972571 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 9 00:03:32.974967 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 9 00:03:32.986537 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 9 00:03:32.987780 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 00:03:32.989802 systemd[1]: Stopped target timers.target - Timer Units.
May 9 00:03:32.991581 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 9 00:03:32.991706 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 9 00:03:32.994318 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 9 00:03:32.996384 systemd[1]: Stopped target basic.target - Basic System. May 9 00:03:32.998073 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 9 00:03:32.999883 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 9 00:03:33.001890 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 9 00:03:33.003922 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 9 00:03:33.005866 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 9 00:03:33.007808 systemd[1]: Stopped target sysinit.target - System Initialization. May 9 00:03:33.009711 systemd[1]: Stopped target local-fs.target - Local File Systems. May 9 00:03:33.011376 systemd[1]: Stopped target swap.target - Swaps. May 9 00:03:33.012831 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 9 00:03:33.012960 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 9 00:03:33.015203 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 9 00:03:33.017120 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 9 00:03:33.019152 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 9 00:03:33.019258 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 9 00:03:33.021166 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 9 00:03:33.021280 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 9 00:03:33.024159 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 9 00:03:33.024274 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 9 00:03:33.026377 systemd[1]: Stopped target paths.target - Path Units. May 9 00:03:33.027953 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 9 00:03:33.029096 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 9 00:03:33.031245 systemd[1]: Stopped target slices.target - Slice Units. May 9 00:03:33.032845 systemd[1]: Stopped target sockets.target - Socket Units. May 9 00:03:33.034638 systemd[1]: iscsid.socket: Deactivated successfully. May 9 00:03:33.034732 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 9 00:03:33.036558 systemd[1]: iscsiuio.socket: Deactivated successfully. May 9 00:03:33.036639 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 9 00:03:33.037935 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 9 00:03:33.038058 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 9 00:03:33.039576 systemd[1]: ignition-files.service: Deactivated successfully. May 9 00:03:33.039675 systemd[1]: Stopped ignition-files.service - Ignition (files). May 9 00:03:33.051199 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 9 00:03:33.052759 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 9 00:03:33.053694 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 9 00:03:33.053841 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
May 9 00:03:33.055793 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 9 00:03:33.055912 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 9 00:03:33.061878 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 9 00:03:33.062962 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 9 00:03:33.065270 ignition[998]: INFO : Ignition 2.19.0 May 9 00:03:33.065270 ignition[998]: INFO : Stage: umount May 9 00:03:33.067667 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" May 9 00:03:33.067667 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 00:03:33.067667 ignition[998]: INFO : umount: umount passed May 9 00:03:33.067667 ignition[998]: INFO : Ignition finished successfully May 9 00:03:33.068736 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 9 00:03:33.069272 systemd[1]: ignition-mount.service: Deactivated successfully. May 9 00:03:33.069369 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 9 00:03:33.070820 systemd[1]: Stopped target network.target - Network. May 9 00:03:33.072285 systemd[1]: ignition-disks.service: Deactivated successfully. May 9 00:03:33.072347 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 9 00:03:33.073964 systemd[1]: ignition-kargs.service: Deactivated successfully. May 9 00:03:33.074021 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 9 00:03:33.075539 systemd[1]: ignition-setup.service: Deactivated successfully. May 9 00:03:33.075580 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 9 00:03:33.078253 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 9 00:03:33.078298 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 9 00:03:33.080168 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 9 00:03:33.081969 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 9 00:03:33.090037 systemd-networkd[766]: eth0: DHCPv6 lease lost May 9 00:03:33.091468 systemd[1]: systemd-networkd.service: Deactivated successfully. May 9 00:03:33.091601 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 9 00:03:33.095384 systemd[1]: systemd-resolved.service: Deactivated successfully. May 9 00:03:33.095466 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 9 00:03:33.098215 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 9 00:03:33.098252 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 9 00:03:33.110093 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 9 00:03:33.111050 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 9 00:03:33.111117 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 9 00:03:33.113210 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 9 00:03:33.113256 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 9 00:03:33.114910 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 9 00:03:33.114956 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 9 00:03:33.116859 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 9 00:03:33.116902 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
May 9 00:03:33.118781 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 9 00:03:33.125689 systemd[1]: sysroot-boot.service: Deactivated successfully. May 9 00:03:33.125780 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 9 00:03:33.127439 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 9 00:03:33.127550 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 9 00:03:33.129235 systemd[1]: network-cleanup.service: Deactivated successfully. May 9 00:03:33.129334 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 9 00:03:33.132706 systemd[1]: systemd-udevd.service: Deactivated successfully. May 9 00:03:33.132864 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 9 00:03:33.135195 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 9 00:03:33.135233 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 9 00:03:33.136506 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 9 00:03:33.136542 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 9 00:03:33.138387 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 9 00:03:33.138437 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 9 00:03:33.140748 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 9 00:03:33.140790 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 9 00:03:33.143115 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 9 00:03:33.143159 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 9 00:03:33.158185 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 9 00:03:33.159281 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 9 00:03:33.159348 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 9 00:03:33.161303 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 9 00:03:33.161346 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 9 00:03:33.163309 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 9 00:03:33.163360 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 9 00:03:33.165357 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 9 00:03:33.165403 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 9 00:03:33.167420 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 9 00:03:33.169018 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 9 00:03:33.171265 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 9 00:03:33.173228 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 9 00:03:33.183319 systemd[1]: Switching root. May 9 00:03:33.216040 systemd-journald[236]: Journal stopped May 9 00:03:33.898892 systemd-journald[236]: Received SIGTERM from PID 1 (systemd). 
May 9 00:03:33.898953 kernel: SELinux: policy capability network_peer_controls=1 May 9 00:03:33.898966 kernel: SELinux: policy capability open_perms=1 May 9 00:03:33.898976 kernel: SELinux: policy capability extended_socket_class=1 May 9 00:03:33.898986 kernel: SELinux: policy capability always_check_network=0 May 9 00:03:33.899053 kernel: SELinux: policy capability cgroup_seclabel=1 May 9 00:03:33.899065 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 9 00:03:33.899074 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 9 00:03:33.899087 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 9 00:03:33.899101 kernel: audit: type=1403 audit(1746749013.354:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 9 00:03:33.899111 systemd[1]: Successfully loaded SELinux policy in 31.109ms. May 9 00:03:33.899128 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.633ms. May 9 00:03:33.899140 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 9 00:03:33.899151 systemd[1]: Detected virtualization kvm. May 9 00:03:33.899163 systemd[1]: Detected architecture arm64. May 9 00:03:33.899174 systemd[1]: Detected first boot. May 9 00:03:33.899185 systemd[1]: Initializing machine ID from VM UUID. May 9 00:03:33.899197 zram_generator::config[1043]: No configuration found. May 9 00:03:33.899209 systemd[1]: Populated /etc with preset unit settings. May 9 00:03:33.899219 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 9 00:03:33.899229 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 9 00:03:33.899240 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 9 00:03:33.899250 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 9 00:03:33.899261 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 9 00:03:33.899271 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 9 00:03:33.899284 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 9 00:03:33.899295 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 9 00:03:33.899306 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 9 00:03:33.899316 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 9 00:03:33.899327 systemd[1]: Created slice user.slice - User and Session Slice. May 9 00:03:33.899337 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 9 00:03:33.899348 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 9 00:03:33.899358 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 9 00:03:33.899369 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 9 00:03:33.899382 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 9 00:03:33.899398 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
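"Initializing machine ID from VM UUID" refers to systemd seeding /etc/machine-id on first boot from the DMI product UUID the hypervisor exposes. A rough sketch of the normalization involved — the sysfs path and behavior are assumptions, not systemd's actual code:

```python
# Rough sketch, not systemd's implementation: derive a machine-id candidate
# from the DMI product UUID that QEMU/KVM exposes. Reading this sysfs file
# usually requires root, and systemd's first-boot logic is more involved.
import pathlib
import uuid

raw = pathlib.Path("/sys/class/dmi/id/product_uuid").read_text().strip()
machine_id = uuid.UUID(raw).hex  # 32 lowercase hex digits, no dashes
print(machine_id)
```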
May 9 00:03:33.899408 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 9 00:03:33.899419 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 9 00:03:33.899429 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 9 00:03:33.899439 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 9 00:03:33.899450 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 9 00:03:33.899462 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 9 00:03:33.899473 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 9 00:03:33.899483 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 9 00:03:33.899494 systemd[1]: Reached target slices.target - Slice Units. May 9 00:03:33.899504 systemd[1]: Reached target swap.target - Swaps. May 9 00:03:33.899515 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 9 00:03:33.899526 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 9 00:03:33.899536 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 9 00:03:33.899546 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 9 00:03:33.899557 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 9 00:03:33.899569 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 9 00:03:33.899580 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 9 00:03:33.899590 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 9 00:03:33.899600 systemd[1]: Mounting media.mount - External Media Directory... May 9 00:03:33.899611 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 9 00:03:33.899622 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 9 00:03:33.899632 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 9 00:03:33.899642 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 9 00:03:33.899655 systemd[1]: Reached target machines.target - Containers. May 9 00:03:33.899665 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 9 00:03:33.899676 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 9 00:03:33.899686 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 9 00:03:33.899697 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 9 00:03:33.899707 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 9 00:03:33.899718 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 9 00:03:33.899728 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 9 00:03:33.899738 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 9 00:03:33.899751 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 9 00:03:33.899763 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
May 9 00:03:33.899773 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 9 00:03:33.899783 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 9 00:03:33.899794 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 9 00:03:33.899804 systemd[1]: Stopped systemd-fsck-usr.service. May 9 00:03:33.899814 kernel: ACPI: bus type drm_connector registered May 9 00:03:33.899830 kernel: fuse: init (API version 7.39) May 9 00:03:33.899844 kernel: loop: module loaded May 9 00:03:33.899857 systemd[1]: Starting systemd-journald.service - Journal Service... May 9 00:03:33.899867 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 9 00:03:33.899878 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 9 00:03:33.899888 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 9 00:03:33.899899 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 9 00:03:33.899927 systemd-journald[1110]: Collecting audit messages is disabled. May 9 00:03:33.899961 systemd[1]: verity-setup.service: Deactivated successfully. May 9 00:03:33.899973 systemd[1]: Stopped verity-setup.service. May 9 00:03:33.899984 systemd-journald[1110]: Journal started May 9 00:03:33.900014 systemd-journald[1110]: Runtime Journal (/run/log/journal/78d78cdc59854138b560d04b653a12b4) is 5.9M, max 47.3M, 41.4M free. May 9 00:03:33.717579 systemd[1]: Queued start job for default target multi-user.target. May 9 00:03:33.740811 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 9 00:03:33.741227 systemd[1]: systemd-journald.service: Deactivated successfully. May 9 00:03:33.903007 systemd[1]: Started systemd-journald.service - Journal Service. May 9 00:03:33.903452 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 9 00:03:33.904425 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 9 00:03:33.905320 systemd[1]: Mounted media.mount - External Media Directory. May 9 00:03:33.906246 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 9 00:03:33.907131 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 9 00:03:33.908018 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 9 00:03:33.909024 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 9 00:03:33.910123 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 9 00:03:33.911244 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 9 00:03:33.911381 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 9 00:03:33.912478 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 9 00:03:33.912621 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 9 00:03:33.913685 systemd[1]: modprobe@drm.service: Deactivated successfully. May 9 00:03:33.913813 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 9 00:03:33.915042 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 9 00:03:33.915176 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 9 00:03:33.916277 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 9 00:03:33.916416 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
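The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop services above are instances of a single systemd template unit. A hypothetical minimal version of such a template, for reference (the unit systemd actually ships differs in details):

```python
# Hypothetical minimal template unit; "%i" expands to the instance name, so
# modprobe@fuse.service loads the fuse module, modprobe@loop.service loads
# loop, and so on.
TEMPLATE_UNIT = """\
[Unit]
Description=Load Kernel Module %i
DefaultDependencies=no

[Service]
Type=oneshot
ExecStart=/usr/sbin/modprobe -abq %i
"""
print(TEMPLATE_UNIT)
```

The "fuse: init", "loop: module loaded", and "ACPI: bus type drm_connector registered" kernel lines above are the corresponding modules coming up.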
May 9 00:03:33.917592 systemd[1]: modprobe@loop.service: Deactivated successfully. May 9 00:03:33.917725 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 9 00:03:33.918896 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 9 00:03:33.920061 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 9 00:03:33.921196 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 9 00:03:33.933132 systemd[1]: Reached target network-pre.target - Preparation for Network. May 9 00:03:33.946121 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 9 00:03:33.948061 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 9 00:03:33.948915 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 9 00:03:33.948953 systemd[1]: Reached target local-fs.target - Local File Systems. May 9 00:03:33.950729 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 9 00:03:33.952684 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 9 00:03:33.954532 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 9 00:03:33.955435 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 9 00:03:33.956799 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 9 00:03:33.960250 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 9 00:03:33.961539 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 9 00:03:33.962503 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 9 00:03:33.963552 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 9 00:03:33.966575 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 9 00:03:33.969685 systemd-journald[1110]: Time spent on flushing to /var/log/journal/78d78cdc59854138b560d04b653a12b4 is 27.044ms for 855 entries. May 9 00:03:33.969685 systemd-journald[1110]: System Journal (/var/log/journal/78d78cdc59854138b560d04b653a12b4) is 8.0M, max 195.6M, 187.6M free. May 9 00:03:34.004733 systemd-journald[1110]: Received client request to flush runtime journal. May 9 00:03:34.004785 kernel: loop0: detected capacity change from 0 to 114328 May 9 00:03:33.970226 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 9 00:03:33.973955 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 9 00:03:33.977042 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 9 00:03:33.978482 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 9 00:03:33.981486 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 9 00:03:33.989718 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 9 00:03:33.991492 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
May 9 00:03:33.994118 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 9 00:03:34.003209 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 9 00:03:34.006145 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 9 00:03:34.007792 systemd-tmpfiles[1156]: ACLs are not supported, ignoring. May 9 00:03:34.007804 systemd-tmpfiles[1156]: ACLs are not supported, ignoring. May 9 00:03:34.012072 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 9 00:03:34.010519 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 9 00:03:34.012019 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 9 00:03:34.013159 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 9 00:03:34.018700 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 9 00:03:34.028117 udevadm[1167]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 9 00:03:34.042917 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 9 00:03:34.046021 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 9 00:03:34.054162 kernel: loop1: detected capacity change from 0 to 114432 May 9 00:03:34.054914 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 9 00:03:34.069224 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 9 00:03:34.080490 systemd-tmpfiles[1176]: ACLs are not supported, ignoring. May 9 00:03:34.080507 systemd-tmpfiles[1176]: ACLs are not supported, ignoring. May 9 00:03:34.085106 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 9 00:03:34.087058 kernel: loop2: detected capacity change from 0 to 189592 May 9 00:03:34.130313 kernel: loop3: detected capacity change from 0 to 114328 May 9 00:03:34.135952 kernel: loop4: detected capacity change from 0 to 114432 May 9 00:03:34.139156 kernel: loop5: detected capacity change from 0 to 189592 May 9 00:03:34.143479 (sd-merge)[1184]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 9 00:03:34.143895 (sd-merge)[1184]: Merged extensions into '/usr'. May 9 00:03:34.150303 systemd[1]: Reloading requested from client PID 1154 ('systemd-sysext') (unit systemd-sysext.service)... May 9 00:03:34.150322 systemd[1]: Reloading... May 9 00:03:34.197020 zram_generator::config[1208]: No configuration found. May 9 00:03:34.246771 ldconfig[1149]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 9 00:03:34.300500 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 00:03:34.338596 systemd[1]: Reloading finished in 187 ms. May 9 00:03:34.372030 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 9 00:03:34.373128 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 9 00:03:34.385219 systemd[1]: Starting ensure-sysext.service... May 9 00:03:34.387077 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
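The (sd-merge) entries show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes extension images onto /usr. A simplified sketch of the discovery step — the search paths are the standard ones, but image matching and the extension-release validation systemd-sysext performs are omitted:

```python
# Simplified sketch of sysext image discovery; systemd-sysext also validates
# an extension-release file inside each image before merging, omitted here.
import pathlib

SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

for d in map(pathlib.Path, SEARCH_DIRS):
    if not d.is_dir():
        continue
    for img in sorted(d.iterdir()):
        if img.suffix == ".raw" or img.is_dir():
            print(f"would merge {img} into /usr as a read-only overlay layer")
```

This is why the Ignition stage earlier only had to write a symlink under /etc/extensions to make the kubernetes image take effect.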
May 9 00:03:34.395841 systemd[1]: Reloading requested from client PID 1245 ('systemctl') (unit ensure-sysext.service)... May 9 00:03:34.395856 systemd[1]: Reloading... May 9 00:03:34.404575 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 9 00:03:34.405212 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 9 00:03:34.405946 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 9 00:03:34.406280 systemd-tmpfiles[1246]: ACLs are not supported, ignoring. May 9 00:03:34.406411 systemd-tmpfiles[1246]: ACLs are not supported, ignoring. May 9 00:03:34.408732 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot. May 9 00:03:34.408842 systemd-tmpfiles[1246]: Skipping /boot May 9 00:03:34.415725 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot. May 9 00:03:34.415867 systemd-tmpfiles[1246]: Skipping /boot May 9 00:03:34.446063 zram_generator::config[1279]: No configuration found. May 9 00:03:34.530484 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 00:03:34.568072 systemd[1]: Reloading finished in 171 ms. May 9 00:03:34.584927 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 9 00:03:34.603514 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 9 00:03:34.609931 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 9 00:03:34.612462 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 9 00:03:34.614673 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 9 00:03:34.617506 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 9 00:03:34.624218 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 9 00:03:34.629158 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 9 00:03:34.637308 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 9 00:03:34.638534 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 9 00:03:34.641798 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 9 00:03:34.643387 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 9 00:03:34.651271 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 9 00:03:34.655044 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 9 00:03:34.655942 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 9 00:03:34.657199 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 9 00:03:34.661018 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 9 00:03:34.662697 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 9 00:03:34.662816 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
May 9 00:03:34.671586 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 9 00:03:34.674260 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 9 00:03:34.674945 systemd-udevd[1315]: Using default interface naming scheme 'v255'. May 9 00:03:34.675176 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 9 00:03:34.675301 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 9 00:03:34.676078 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 9 00:03:34.677555 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 9 00:03:34.679031 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 9 00:03:34.680596 systemd[1]: modprobe@loop.service: Deactivated successfully. May 9 00:03:34.680727 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 9 00:03:34.682351 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 9 00:03:34.694499 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 9 00:03:34.697681 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 9 00:03:34.699026 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 9 00:03:34.701777 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 9 00:03:34.708268 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 9 00:03:34.712698 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 9 00:03:34.717383 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 9 00:03:34.719464 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 9 00:03:34.719632 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 9 00:03:34.721465 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 9 00:03:34.722901 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 9 00:03:34.724766 systemd[1]: modprobe@drm.service: Deactivated successfully. May 9 00:03:34.724961 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 9 00:03:34.728469 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 9 00:03:34.729074 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 9 00:03:34.730754 systemd[1]: modprobe@loop.service: Deactivated successfully. May 9 00:03:34.730922 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 9 00:03:34.734956 systemd[1]: Finished ensure-sysext.service. May 9 00:03:34.749241 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
May 9 00:03:34.750098 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 9 00:03:34.750176 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 9 00:03:34.754466 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 9 00:03:34.757423 augenrules[1377]: No rules May 9 00:03:34.758731 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 9 00:03:34.779884 systemd-resolved[1314]: Positive Trust Anchors: May 9 00:03:34.781687 systemd-resolved[1314]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 9 00:03:34.781723 systemd-resolved[1314]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 9 00:03:34.793129 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 9 00:03:34.796674 systemd-resolved[1314]: Defaulting to hostname 'linux'. May 9 00:03:34.808416 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 9 00:03:34.809796 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 9 00:03:34.810859 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 9 00:03:34.811858 systemd[1]: Reached target time-set.target - System Time Set. May 9 00:03:34.823094 systemd-networkd[1372]: lo: Link UP May 9 00:03:34.823102 systemd-networkd[1372]: lo: Gained carrier May 9 00:03:34.823873 systemd-networkd[1372]: Enumeration completed May 9 00:03:34.824039 systemd[1]: Started systemd-networkd.service - Network Configuration. May 9 00:03:34.824547 systemd-networkd[1372]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 00:03:34.824551 systemd-networkd[1372]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 9 00:03:34.825015 systemd[1]: Reached target network.target - Network. May 9 00:03:34.826983 systemd-networkd[1372]: eth0: Link UP May 9 00:03:34.827204 systemd-networkd[1372]: eth0: Gained carrier May 9 00:03:34.827220 systemd-networkd[1372]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 00:03:34.832182 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 9 00:03:34.834006 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1369) May 9 00:03:34.846136 systemd-networkd[1372]: eth0: DHCPv4 address 10.0.0.43/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 9 00:03:34.846844 systemd-timesyncd[1376]: Network configuration changed, trying to establish connection. May 9 00:03:34.847926 systemd-timesyncd[1376]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 9 00:03:34.847983 systemd-timesyncd[1376]: Initial clock synchronization to Fri 2025-05-09 00:03:35.217519 UTC. 
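eth0 here is matched by /usr/lib/systemd/network/zz-default.network and configured via DHCP, yielding the "DHCPv4 address 10.0.0.43/16" lease above. A hypothetical minimal .network unit of the same shape (the real Flatcar unit is more elaborate):

```python
# Hypothetical minimal systemd-networkd .network unit: match an interface by
# name and configure it with DHCP, as the zz-default.network match did above.
NETWORK_UNIT = """\
[Match]
Name=eth0

[Network]
DHCP=yes
"""
print(NETWORK_UNIT)
```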
May 9 00:03:34.863226 systemd-networkd[1372]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 00:03:34.865302 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 9 00:03:34.874282 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 9 00:03:34.897108 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 9 00:03:34.913205 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 9 00:03:34.924131 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 9 00:03:34.926858 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 9 00:03:34.941023 lvm[1402]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 9 00:03:34.954188 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 9 00:03:34.971404 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 9 00:03:34.972629 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 9 00:03:34.975128 systemd[1]: Reached target sysinit.target - System Initialization. May 9 00:03:34.976016 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 9 00:03:34.977180 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 9 00:03:34.978318 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 9 00:03:34.979489 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 9 00:03:34.980741 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 9 00:03:34.981966 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 9 00:03:34.982014 systemd[1]: Reached target paths.target - Path Units. May 9 00:03:34.982871 systemd[1]: Reached target timers.target - Timer Units. May 9 00:03:34.984547 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 9 00:03:34.987124 systemd[1]: Starting docker.socket - Docker Socket for the API... May 9 00:03:34.997057 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 9 00:03:34.999469 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 9 00:03:35.001161 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 9 00:03:35.002093 systemd[1]: Reached target sockets.target - Socket Units. May 9 00:03:35.002801 systemd[1]: Reached target basic.target - Basic System. May 9 00:03:35.003582 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 9 00:03:35.003618 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 9 00:03:35.004651 systemd[1]: Starting containerd.service - containerd container runtime... May 9 00:03:35.006484 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 9 00:03:35.009198 lvm[1409]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
May 9 00:03:35.011283 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 9 00:03:35.014268 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 9 00:03:35.015489 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 9 00:03:35.016901 jq[1412]: false May 9 00:03:35.018429 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 9 00:03:35.023447 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 9 00:03:35.026202 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 9 00:03:35.028776 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 9 00:03:35.033502 systemd[1]: Starting systemd-logind.service - User Login Management... May 9 00:03:35.039403 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 9 00:03:35.039962 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 9 00:03:35.040760 systemd[1]: Starting update-engine.service - Update Engine... May 9 00:03:35.045976 extend-filesystems[1413]: Found loop3 May 9 00:03:35.045976 extend-filesystems[1413]: Found loop4 May 9 00:03:35.045976 extend-filesystems[1413]: Found loop5 May 9 00:03:35.045976 extend-filesystems[1413]: Found vda May 9 00:03:35.045976 extend-filesystems[1413]: Found vda1 May 9 00:03:35.045976 extend-filesystems[1413]: Found vda2 May 9 00:03:35.045976 extend-filesystems[1413]: Found vda3 May 9 00:03:35.045976 extend-filesystems[1413]: Found usr May 9 00:03:35.045976 extend-filesystems[1413]: Found vda4 May 9 00:03:35.045976 extend-filesystems[1413]: Found vda6 May 9 00:03:35.045976 extend-filesystems[1413]: Found vda7 May 9 00:03:35.045976 extend-filesystems[1413]: Found vda9 May 9 00:03:35.045976 extend-filesystems[1413]: Checking size of /dev/vda9 May 9 00:03:35.042451 dbus-daemon[1411]: [system] SELinux support is enabled May 9 00:03:35.047176 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 9 00:03:35.067967 jq[1425]: true May 9 00:03:35.048727 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 9 00:03:35.056258 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 9 00:03:35.059370 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 9 00:03:35.059559 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 9 00:03:35.064238 systemd[1]: motdgen.service: Deactivated successfully. May 9 00:03:35.064402 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 9 00:03:35.069096 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 9 00:03:35.069265 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 9 00:03:35.090644 tar[1432]: linux-arm64/helm May 9 00:03:35.088572 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 9 00:03:35.088601 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
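extend-filesystems is sizing /dev/vda9 here, and a few entries below resize2fs grows it online from 553472 to 1864699 blocks. Worked out at the 4 KiB block size the kernel reports:

```python
# Worked numbers for the online resize logged nearby: ext4 block counts at
# the 4 KiB block size reported by the kernel ("1864699 (4k) blocks").
BLOCK = 4096  # bytes per ext4 block

def gib(blocks: int) -> float:
    return blocks * BLOCK / 2**30

before, after = 553_472, 1_864_699
print(f"before: {gib(before):.2f} GiB, after: {gib(after):.2f} GiB")
# before: 2.11 GiB, after: 7.11 GiB -- the root filesystem on /dev/vda9 is
# grown in place, without unmounting /.
```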
May 9 00:03:35.091064 update_engine[1422]: I20250509 00:03:35.090826 1422 main.cc:92] Flatcar Update Engine starting May 9 00:03:35.092654 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 9 00:03:35.092680 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 9 00:03:35.096975 update_engine[1422]: I20250509 00:03:35.096916 1422 update_check_scheduler.cc:74] Next update check in 8m59s May 9 00:03:35.109640 systemd[1]: Started update-engine.service - Update Engine. May 9 00:03:35.118199 extend-filesystems[1413]: Resized partition /dev/vda9 May 9 00:03:35.127471 extend-filesystems[1449]: resize2fs 1.47.1 (20-May-2024) May 9 00:03:35.133514 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1369) May 9 00:03:35.129238 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 9 00:03:35.132420 (ntainerd)[1434]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 9 00:03:35.138747 jq[1433]: true May 9 00:03:35.162090 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 9 00:03:35.169478 systemd-logind[1421]: Watching system buttons on /dev/input/event0 (Power Button) May 9 00:03:35.169709 systemd-logind[1421]: New seat seat0. May 9 00:03:35.171392 systemd[1]: Started systemd-logind.service - User Login Management. May 9 00:03:35.243029 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 9 00:03:35.256365 locksmithd[1448]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 9 00:03:35.259079 extend-filesystems[1449]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 9 00:03:35.259079 extend-filesystems[1449]: old_desc_blocks = 1, new_desc_blocks = 1 May 9 00:03:35.259079 extend-filesystems[1449]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 9 00:03:35.263873 extend-filesystems[1413]: Resized filesystem in /dev/vda9 May 9 00:03:35.261165 systemd[1]: extend-filesystems.service: Deactivated successfully. May 9 00:03:35.267738 bash[1465]: Updated "/home/core/.ssh/authorized_keys" May 9 00:03:35.263103 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 9 00:03:35.269509 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 9 00:03:35.271665 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 9 00:03:35.399210 containerd[1434]: time="2025-05-09T00:03:35.399070748Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 9 00:03:35.441887 containerd[1434]: time="2025-05-09T00:03:35.441831309Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 9 00:03:35.444104 containerd[1434]: time="2025-05-09T00:03:35.443425860Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 9 00:03:35.444104 containerd[1434]: time="2025-05-09T00:03:35.443470429Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 May 9 00:03:35.444104 containerd[1434]: time="2025-05-09T00:03:35.443489261Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 9 00:03:35.444104 containerd[1434]: time="2025-05-09T00:03:35.443661174Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 9 00:03:35.444104 containerd[1434]: time="2025-05-09T00:03:35.443679838Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 9 00:03:35.444104 containerd[1434]: time="2025-05-09T00:03:35.443736627Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 9 00:03:35.444104 containerd[1434]: time="2025-05-09T00:03:35.443752906Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 9 00:03:35.444104 containerd[1434]: time="2025-05-09T00:03:35.443917077Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 9 00:03:35.444104 containerd[1434]: time="2025-05-09T00:03:35.443933942Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 9 00:03:35.444104 containerd[1434]: time="2025-05-09T00:03:35.443947794Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 9 00:03:35.444104 containerd[1434]: time="2025-05-09T00:03:35.443957712Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 9 00:03:35.444370 containerd[1434]: time="2025-05-09T00:03:35.444027599Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 9 00:03:35.444370 containerd[1434]: time="2025-05-09T00:03:35.444266972Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 9 00:03:35.444419 containerd[1434]: time="2025-05-09T00:03:35.444370128Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 9 00:03:35.444419 containerd[1434]: time="2025-05-09T00:03:35.444386784Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 9 00:03:35.444516 containerd[1434]: time="2025-05-09T00:03:35.444474205Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 9 00:03:35.444547 containerd[1434]: time="2025-05-09T00:03:35.444528315Z" level=info msg="metadata content store policy set" policy=shared May 9 00:03:35.448536 containerd[1434]: time="2025-05-09T00:03:35.448502494Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 9 00:03:35.448613 containerd[1434]: time="2025-05-09T00:03:35.448555976Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 May 9 00:03:35.448613 containerd[1434]: time="2025-05-09T00:03:35.448600084Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 9 00:03:35.448766 containerd[1434]: time="2025-05-09T00:03:35.448621176Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 9 00:03:35.448766 containerd[1434]: time="2025-05-09T00:03:35.448637873Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 9 00:03:35.448870 containerd[1434]: time="2025-05-09T00:03:35.448845651Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 9 00:03:35.449175 containerd[1434]: time="2025-05-09T00:03:35.449155915Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 9 00:03:35.449308 containerd[1434]: time="2025-05-09T00:03:35.449289328Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 9 00:03:35.449331 containerd[1434]: time="2025-05-09T00:03:35.449312303Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 9 00:03:35.449331 containerd[1434]: time="2025-05-09T00:03:35.449326406Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 9 00:03:35.449367 containerd[1434]: time="2025-05-09T00:03:35.449348711Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 9 00:03:35.449367 containerd[1434]: time="2025-05-09T00:03:35.449363023Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 9 00:03:35.449418 containerd[1434]: time="2025-05-09T00:03:35.449377419Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 9 00:03:35.449418 containerd[1434]: time="2025-05-09T00:03:35.449393154Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 9 00:03:35.449418 containerd[1434]: time="2025-05-09T00:03:35.449409600Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 9 00:03:35.449471 containerd[1434]: time="2025-05-09T00:03:35.449430608Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 9 00:03:35.449471 containerd[1434]: time="2025-05-09T00:03:35.449451365Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 9 00:03:35.449471 containerd[1434]: time="2025-05-09T00:03:35.449464882Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 9 00:03:35.449523 containerd[1434]: time="2025-05-09T00:03:35.449485304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 9 00:03:35.449523 containerd[1434]: time="2025-05-09T00:03:35.449507526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 9 00:03:35.449561 containerd[1434]: time="2025-05-09T00:03:35.449523093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 May 9 00:03:35.449561 containerd[1434]: time="2025-05-09T00:03:35.449540753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 9 00:03:35.449561 containerd[1434]: time="2025-05-09T00:03:35.449554354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 9 00:03:35.449616 containerd[1434]: time="2025-05-09T00:03:35.449568373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 9 00:03:35.449616 containerd[1434]: time="2025-05-09T00:03:35.449588502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 9 00:03:35.449616 containerd[1434]: time="2025-05-09T00:03:35.449602773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 9 00:03:35.449683 containerd[1434]: time="2025-05-09T00:03:35.449617671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 9 00:03:35.449683 containerd[1434]: time="2025-05-09T00:03:35.449643868Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 9 00:03:35.449683 containerd[1434]: time="2025-05-09T00:03:35.449668307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 9 00:03:35.449735 containerd[1434]: time="2025-05-09T00:03:35.449682578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 9 00:03:35.449735 containerd[1434]: time="2025-05-09T00:03:35.449697434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 9 00:03:35.449735 containerd[1434]: time="2025-05-09T00:03:35.449714927Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 9 00:03:35.449786 containerd[1434]: time="2025-05-09T00:03:35.449745769Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 9 00:03:35.449786 containerd[1434]: time="2025-05-09T00:03:35.449760876Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 9 00:03:35.449786 containerd[1434]: time="2025-05-09T00:03:35.449772301Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 9 00:03:35.450483 containerd[1434]: time="2025-05-09T00:03:35.450457485Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 9 00:03:35.450533 containerd[1434]: time="2025-05-09T00:03:35.450493517Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 9 00:03:35.450533 containerd[1434]: time="2025-05-09T00:03:35.450506615Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 9 00:03:35.450533 containerd[1434]: time="2025-05-09T00:03:35.450519337Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 9 00:03:35.450533 containerd[1434]: time="2025-05-09T00:03:35.450529423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 May 9 00:03:35.450875 containerd[1434]: time="2025-05-09T00:03:35.450551853Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 9 00:03:35.450875 containerd[1434]: time="2025-05-09T00:03:35.450562650Z" level=info msg="NRI interface is disabled by configuration." May 9 00:03:35.450875 containerd[1434]: time="2025-05-09T00:03:35.450572987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 9 00:03:35.451042 containerd[1434]: time="2025-05-09T00:03:35.450977243Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 9 00:03:35.451172 containerd[1434]: time="2025-05-09T00:03:35.451066380Z" level=info msg="Connect containerd service" May 9 00:03:35.451172 containerd[1434]: time="2025-05-09T00:03:35.451104211Z" level=info msg="using legacy CRI server" May 9 00:03:35.451172 containerd[1434]: time="2025-05-09T00:03:35.451111827Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 9 00:03:35.452198 containerd[1434]: 
time="2025-05-09T00:03:35.452169923Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 9 00:03:35.453195 containerd[1434]: time="2025-05-09T00:03:35.453159890Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 9 00:03:35.453608 containerd[1434]: time="2025-05-09T00:03:35.453561551Z" level=info msg="Start subscribing containerd event" May 9 00:03:35.453752 containerd[1434]: time="2025-05-09T00:03:35.453735933Z" level=info msg="Start recovering state" May 9 00:03:35.454177 containerd[1434]: time="2025-05-09T00:03:35.453831055Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 9 00:03:35.454338 containerd[1434]: time="2025-05-09T00:03:35.454315367Z" level=info msg=serving... address=/run/containerd/containerd.sock May 9 00:03:35.454529 containerd[1434]: time="2025-05-09T00:03:35.454481128Z" level=info msg="Start event monitor" May 9 00:03:35.454653 containerd[1434]: time="2025-05-09T00:03:35.454635591Z" level=info msg="Start snapshots syncer" May 9 00:03:35.454761 containerd[1434]: time="2025-05-09T00:03:35.454745778Z" level=info msg="Start cni network conf syncer for default" May 9 00:03:35.454894 containerd[1434]: time="2025-05-09T00:03:35.454878521Z" level=info msg="Start streaming server" May 9 00:03:35.455386 containerd[1434]: time="2025-05-09T00:03:35.455365720Z" level=info msg="containerd successfully booted in 0.057361s" May 9 00:03:35.455462 systemd[1]: Started containerd.service - containerd container runtime. May 9 00:03:35.496956 tar[1432]: linux-arm64/LICENSE May 9 00:03:35.496956 tar[1432]: linux-arm64/README.md May 9 00:03:35.509667 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 9 00:03:35.847451 sshd_keygen[1435]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 9 00:03:35.867690 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 9 00:03:35.880356 systemd[1]: Starting issuegen.service - Generate /run/issue... May 9 00:03:35.886384 systemd[1]: issuegen.service: Deactivated successfully. May 9 00:03:35.888095 systemd[1]: Finished issuegen.service - Generate /run/issue. May 9 00:03:35.891870 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 9 00:03:35.908116 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 9 00:03:35.911333 systemd[1]: Started getty@tty1.service - Getty on tty1. May 9 00:03:35.913454 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 9 00:03:35.916343 systemd[1]: Reached target getty.target - Login Prompts. May 9 00:03:36.345440 systemd-networkd[1372]: eth0: Gained IPv6LL May 9 00:03:36.348158 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 9 00:03:36.349744 systemd[1]: Reached target network-online.target - Network is Online. May 9 00:03:36.368306 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 9 00:03:36.370318 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:03:36.372221 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 9 00:03:36.390025 systemd[1]: coreos-metadata.service: Deactivated successfully. May 9 00:03:36.390340 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
May 9 00:03:36.391820 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 9 00:03:36.399015 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 9 00:03:36.876859 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:03:36.878479 systemd[1]: Reached target multi-user.target - Multi-User System. May 9 00:03:36.881239 (kubelet)[1523]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 00:03:36.883115 systemd[1]: Startup finished in 551ms (kernel) + 4.652s (initrd) + 3.564s (userspace) = 8.769s. May 9 00:03:37.321695 kubelet[1523]: E0509 00:03:37.321575 1523 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 00:03:37.325475 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 00:03:37.325639 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 00:03:42.086752 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 9 00:03:42.087907 systemd[1]: Started sshd@0-10.0.0.43:22-10.0.0.1:32916.service - OpenSSH per-connection server daemon (10.0.0.1:32916). May 9 00:03:42.136507 sshd[1536]: Accepted publickey for core from 10.0.0.1 port 32916 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 9 00:03:42.138511 sshd[1536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:03:42.147895 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 9 00:03:42.158292 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 9 00:03:42.160123 systemd-logind[1421]: New session 1 of user core. May 9 00:03:42.169075 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 9 00:03:42.171903 systemd[1]: Starting user@500.service - User Manager for UID 500... May 9 00:03:42.179500 (systemd)[1540]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 9 00:03:42.256429 systemd[1540]: Queued start job for default target default.target. May 9 00:03:42.267923 systemd[1540]: Created slice app.slice - User Application Slice. May 9 00:03:42.267952 systemd[1540]: Reached target paths.target - Paths. May 9 00:03:42.267964 systemd[1540]: Reached target timers.target - Timers. May 9 00:03:42.269222 systemd[1540]: Starting dbus.socket - D-Bus User Message Bus Socket... May 9 00:03:42.278801 systemd[1540]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 9 00:03:42.278868 systemd[1540]: Reached target sockets.target - Sockets. May 9 00:03:42.278880 systemd[1540]: Reached target basic.target - Basic System. May 9 00:03:42.278915 systemd[1540]: Reached target default.target - Main User Target. May 9 00:03:42.278941 systemd[1540]: Startup finished in 93ms. May 9 00:03:42.279248 systemd[1]: Started user@500.service - User Manager for UID 500. May 9 00:03:42.280543 systemd[1]: Started session-1.scope - Session 1 of User core. May 9 00:03:42.345185 systemd[1]: Started sshd@1-10.0.0.43:22-10.0.0.1:45988.service - OpenSSH per-connection server daemon (10.0.0.1:45988). 
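[Editor's note] The kubelet failure above is the expected pre-bootstrap state: the unit exits and is restarted until something (typically kubeadm init/join) writes /var/lib/kubelet/config.yaml. Purely as a sketch of that ordering — this polling loop is hypothetical and is not part of Flatcar or kubeadm — the condition the failing unit keeps re-checking reduces to:

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// Path taken from the error message in the log; kubeadm writes this file.
const kubeletConfig = "/var/lib/kubelet/config.yaml"

func main() {
	for {
		_, err := os.Stat(kubeletConfig)
		switch {
		case err == nil:
			fmt.Println("config present; kubelet can load it")
			return
		case os.IsNotExist(err):
			// Mirrors the systemd restart loop visible later in the log
			// (restart counter 1, 2, ...): back off and try again.
			time.Sleep(10 * time.Second)
		default:
			fmt.Fprintln(os.Stderr, "stat failed:", err)
			os.Exit(1)
		}
	}
}
```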
May 9 00:03:42.382711 sshd[1551]: Accepted publickey for core from 10.0.0.1 port 45988 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 9 00:03:42.384140 sshd[1551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:03:42.389085 systemd-logind[1421]: New session 2 of user core. May 9 00:03:42.400193 systemd[1]: Started session-2.scope - Session 2 of User core. May 9 00:03:42.453247 sshd[1551]: pam_unix(sshd:session): session closed for user core May 9 00:03:42.462643 systemd[1]: sshd@1-10.0.0.43:22-10.0.0.1:45988.service: Deactivated successfully. May 9 00:03:42.464118 systemd[1]: session-2.scope: Deactivated successfully. May 9 00:03:42.465424 systemd-logind[1421]: Session 2 logged out. Waiting for processes to exit. May 9 00:03:42.467055 systemd[1]: Started sshd@2-10.0.0.43:22-10.0.0.1:46000.service - OpenSSH per-connection server daemon (10.0.0.1:46000). May 9 00:03:42.467941 systemd-logind[1421]: Removed session 2. May 9 00:03:42.501695 sshd[1558]: Accepted publickey for core from 10.0.0.1 port 46000 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 9 00:03:42.502974 sshd[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:03:42.507101 systemd-logind[1421]: New session 3 of user core. May 9 00:03:42.515234 systemd[1]: Started session-3.scope - Session 3 of User core. May 9 00:03:42.564062 sshd[1558]: pam_unix(sshd:session): session closed for user core May 9 00:03:42.581584 systemd[1]: sshd@2-10.0.0.43:22-10.0.0.1:46000.service: Deactivated successfully. May 9 00:03:42.583101 systemd[1]: session-3.scope: Deactivated successfully. May 9 00:03:42.584381 systemd-logind[1421]: Session 3 logged out. Waiting for processes to exit. May 9 00:03:42.585544 systemd[1]: Started sshd@3-10.0.0.43:22-10.0.0.1:46010.service - OpenSSH per-connection server daemon (10.0.0.1:46010). May 9 00:03:42.586261 systemd-logind[1421]: Removed session 3. May 9 00:03:42.620563 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 46010 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 9 00:03:42.621903 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:03:42.626029 systemd-logind[1421]: New session 4 of user core. May 9 00:03:42.637194 systemd[1]: Started session-4.scope - Session 4 of User core. May 9 00:03:42.691252 sshd[1565]: pam_unix(sshd:session): session closed for user core May 9 00:03:42.700479 systemd[1]: sshd@3-10.0.0.43:22-10.0.0.1:46010.service: Deactivated successfully. May 9 00:03:42.701966 systemd[1]: session-4.scope: Deactivated successfully. May 9 00:03:42.703422 systemd-logind[1421]: Session 4 logged out. Waiting for processes to exit. May 9 00:03:42.704668 systemd[1]: Started sshd@4-10.0.0.43:22-10.0.0.1:46024.service - OpenSSH per-connection server daemon (10.0.0.1:46024). May 9 00:03:42.706289 systemd-logind[1421]: Removed session 4. May 9 00:03:42.739542 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 46024 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 9 00:03:42.740828 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:03:42.744610 systemd-logind[1421]: New session 5 of user core. May 9 00:03:42.754161 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 9 00:03:42.817803 sudo[1575]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 9 00:03:42.820206 sudo[1575]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 00:03:42.837956 sudo[1575]: pam_unix(sudo:session): session closed for user root May 9 00:03:42.839899 sshd[1572]: pam_unix(sshd:session): session closed for user core May 9 00:03:42.849861 systemd[1]: sshd@4-10.0.0.43:22-10.0.0.1:46024.service: Deactivated successfully. May 9 00:03:42.853495 systemd[1]: session-5.scope: Deactivated successfully. May 9 00:03:42.855088 systemd-logind[1421]: Session 5 logged out. Waiting for processes to exit. May 9 00:03:42.867498 systemd[1]: Started sshd@5-10.0.0.43:22-10.0.0.1:46040.service - OpenSSH per-connection server daemon (10.0.0.1:46040). May 9 00:03:42.868624 systemd-logind[1421]: Removed session 5. May 9 00:03:42.899209 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 46040 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 9 00:03:42.900826 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:03:42.904732 systemd-logind[1421]: New session 6 of user core. May 9 00:03:42.917209 systemd[1]: Started session-6.scope - Session 6 of User core. May 9 00:03:42.969304 sudo[1584]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 9 00:03:42.969882 sudo[1584]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 00:03:42.973026 sudo[1584]: pam_unix(sudo:session): session closed for user root May 9 00:03:42.977979 sudo[1583]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 9 00:03:42.978313 sudo[1583]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 00:03:42.996535 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 9 00:03:42.997663 auditctl[1587]: No rules May 9 00:03:42.998022 systemd[1]: audit-rules.service: Deactivated successfully. May 9 00:03:42.998210 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 9 00:03:43.000366 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 9 00:03:43.025480 augenrules[1605]: No rules May 9 00:03:43.026782 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 9 00:03:43.028188 sudo[1583]: pam_unix(sudo:session): session closed for user root May 9 00:03:43.030047 sshd[1580]: pam_unix(sshd:session): session closed for user core May 9 00:03:43.041529 systemd[1]: sshd@5-10.0.0.43:22-10.0.0.1:46040.service: Deactivated successfully. May 9 00:03:43.043251 systemd[1]: session-6.scope: Deactivated successfully. May 9 00:03:43.045114 systemd-logind[1421]: Session 6 logged out. Waiting for processes to exit. May 9 00:03:43.054310 systemd[1]: Started sshd@6-10.0.0.43:22-10.0.0.1:46050.service - OpenSSH per-connection server daemon (10.0.0.1:46050). May 9 00:03:43.055099 systemd-logind[1421]: Removed session 6. May 9 00:03:43.084249 sshd[1613]: Accepted publickey for core from 10.0.0.1 port 46050 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 9 00:03:43.085529 sshd[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:03:43.089157 systemd-logind[1421]: New session 7 of user core. May 9 00:03:43.098157 systemd[1]: Started session-7.scope - Session 7 of User core. 
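[Editor's note] Session 6 above stops audit-rules.service, deletes the SELinux and default rule files, and reloads; both auditctl and augenrules then report "No rules". As a hedged sketch only (assuming root privileges and the standard auditctl binary from the audit package), the resulting empty rule set can be verified by shelling out to `auditctl -l`:

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// `auditctl -l` prints the currently loaded kernel audit rules,
	// or "No rules" when the set is empty, as in the log above.
	out, err := exec.Command("auditctl", "-l").CombinedOutput()
	if err != nil {
		log.Fatalf("auditctl -l: %v (output: %s)", err, out)
	}
	fmt.Print(string(out))
}
```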
May 9 00:03:43.150025 sudo[1616]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 9 00:03:43.150926 sudo[1616]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 00:03:43.466278 systemd[1]: Starting docker.service - Docker Application Container Engine... May 9 00:03:43.466426 (dockerd)[1635]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 9 00:03:43.716041 dockerd[1635]: time="2025-05-09T00:03:43.715949082Z" level=info msg="Starting up" May 9 00:03:43.857143 dockerd[1635]: time="2025-05-09T00:03:43.856976580Z" level=info msg="Loading containers: start." May 9 00:03:43.952041 kernel: Initializing XFRM netlink socket May 9 00:03:44.026857 systemd-networkd[1372]: docker0: Link UP May 9 00:03:44.055479 dockerd[1635]: time="2025-05-09T00:03:44.055433039Z" level=info msg="Loading containers: done." May 9 00:03:44.072734 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1847636657-merged.mount: Deactivated successfully. May 9 00:03:44.078713 dockerd[1635]: time="2025-05-09T00:03:44.078284458Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 9 00:03:44.078713 dockerd[1635]: time="2025-05-09T00:03:44.078388808Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 9 00:03:44.078713 dockerd[1635]: time="2025-05-09T00:03:44.078497295Z" level=info msg="Daemon has completed initialization" May 9 00:03:44.115501 systemd[1]: Started docker.service - Docker Application Container Engine. May 9 00:03:44.116089 dockerd[1635]: time="2025-05-09T00:03:44.115249155Z" level=info msg="API listen on /run/docker.sock" May 9 00:03:44.774692 containerd[1434]: time="2025-05-09T00:03:44.774555884Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 9 00:03:45.478087 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3331362973.mount: Deactivated successfully. 
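[Editor's note] Once dockerd reports "API listen on /run/docker.sock", the Engine API is live on that socket. A minimal reachability check with the official Go SDK (github.com/docker/docker/client) might look like the following sketch; version negotiation is enabled so it can talk to the 26.1.0 daemon in the log without pinning an API version, and the default socket path is an assumption about this host.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	// FromEnv falls back to the default /var/run/docker.sock
	// (a legacy path systemd later warns about and rewrites to /run/docker.sock).
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatalf("new client: %v", err)
	}
	defer cli.Close()

	ping, err := cli.Ping(context.Background())
	if err != nil {
		log.Fatalf("ping: %v", err)
	}
	fmt.Printf("daemon up, API version %s\n", ping.APIVersion)
}
```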
May 9 00:03:46.845059 containerd[1434]: time="2025-05-09T00:03:46.845003959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:03:46.845543 containerd[1434]: time="2025-05-09T00:03:46.845513522Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=25554610" May 9 00:03:46.846795 containerd[1434]: time="2025-05-09T00:03:46.846753089Z" level=info msg="ImageCreate event name:\"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:03:46.850005 containerd[1434]: time="2025-05-09T00:03:46.849935424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:03:46.851738 containerd[1434]: time="2025-05-09T00:03:46.851699350Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"25551408\" in 2.077089481s" May 9 00:03:46.851804 containerd[1434]: time="2025-05-09T00:03:46.851740624Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\"" May 9 00:03:46.852739 containerd[1434]: time="2025-05-09T00:03:46.852656707Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 9 00:03:47.576367 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 9 00:03:47.586193 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:03:47.683921 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:03:47.687831 (kubelet)[1847]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 00:03:47.727005 kubelet[1847]: E0509 00:03:47.726948 1847 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 00:03:47.730244 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 00:03:47.730389 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
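[Editor's note] Each PullImage above resolves a tag, fetches and unpacks layers (the ImageCreate events), and reports the digest-pinned reference back to the CRI. A hedged sketch of the same operation directly against containerd's Go client, assuming the k8s.io namespace and the overlayfs snapshotter from the CRI config dump earlier in the log:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// WithPullUnpack unpacks layers into the snapshotter as they arrive,
	// which is what produces the ImageCreate events seen in the log.
	image, err := client.Pull(ctx, "registry.k8s.io/kube-apiserver:v1.31.8", containerd.WithPullUnpack)
	if err != nil {
		log.Fatalf("pull: %v", err)
	}
	fmt.Println("pulled:", image.Name(), "digest:", image.Target().Digest)
}
```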
May 9 00:03:48.548293 containerd[1434]: time="2025-05-09T00:03:48.548234182Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:03:48.551267 containerd[1434]: time="2025-05-09T00:03:48.551203166Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=22458980" May 9 00:03:48.552174 containerd[1434]: time="2025-05-09T00:03:48.552140740Z" level=info msg="ImageCreate event name:\"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:03:48.555177 containerd[1434]: time="2025-05-09T00:03:48.555140171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:03:48.556342 containerd[1434]: time="2025-05-09T00:03:48.556293125Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"23900539\" in 1.703598167s" May 9 00:03:48.556342 containerd[1434]: time="2025-05-09T00:03:48.556329459Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\"" May 9 00:03:48.556805 containerd[1434]: time="2025-05-09T00:03:48.556774332Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" May 9 00:03:49.855060 containerd[1434]: time="2025-05-09T00:03:49.854280597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:03:49.855428 containerd[1434]: time="2025-05-09T00:03:49.855385455Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=17125815" May 9 00:03:49.856258 containerd[1434]: time="2025-05-09T00:03:49.856217421Z" level=info msg="ImageCreate event name:\"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:03:49.860732 containerd[1434]: time="2025-05-09T00:03:49.860671377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:03:49.861882 containerd[1434]: time="2025-05-09T00:03:49.861830740Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"18567392\" in 1.30502456s" May 9 00:03:49.861882 containerd[1434]: time="2025-05-09T00:03:49.861864177Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\"" May 9 00:03:49.862424 containerd[1434]: 
time="2025-05-09T00:03:49.862395133Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 9 00:03:50.937812 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3951590330.mount: Deactivated successfully. May 9 00:03:51.150925 containerd[1434]: time="2025-05-09T00:03:51.150866899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:03:51.152322 containerd[1434]: time="2025-05-09T00:03:51.152286683Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=26871919" May 9 00:03:51.153137 containerd[1434]: time="2025-05-09T00:03:51.153100700Z" level=info msg="ImageCreate event name:\"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:03:51.154975 containerd[1434]: time="2025-05-09T00:03:51.154943982Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:03:51.155940 containerd[1434]: time="2025-05-09T00:03:51.155904273Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"26870936\" in 1.293392008s" May 9 00:03:51.155973 containerd[1434]: time="2025-05-09T00:03:51.155939545Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\"" May 9 00:03:51.156432 containerd[1434]: time="2025-05-09T00:03:51.156370282Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 9 00:03:51.783075 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2103498638.mount: Deactivated successfully. 
May 9 00:03:52.537229 containerd[1434]: time="2025-05-09T00:03:52.537177602Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:03:52.539015 containerd[1434]: time="2025-05-09T00:03:52.538966662Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" May 9 00:03:52.540136 containerd[1434]: time="2025-05-09T00:03:52.540079550Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:03:52.543264 containerd[1434]: time="2025-05-09T00:03:52.543207492Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:03:52.544461 containerd[1434]: time="2025-05-09T00:03:52.544332919Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.387931353s" May 9 00:03:52.544461 containerd[1434]: time="2025-05-09T00:03:52.544365313Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 9 00:03:52.544825 containerd[1434]: time="2025-05-09T00:03:52.544801586Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 9 00:03:52.984432 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3809468690.mount: Deactivated successfully. 
May 9 00:03:52.989794 containerd[1434]: time="2025-05-09T00:03:52.989730696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:03:52.990207 containerd[1434]: time="2025-05-09T00:03:52.990166849Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" May 9 00:03:52.990925 containerd[1434]: time="2025-05-09T00:03:52.990890527Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:03:52.992997 containerd[1434]: time="2025-05-09T00:03:52.992957789Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:03:52.993932 containerd[1434]: time="2025-05-09T00:03:52.993897856Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 449.0648ms" May 9 00:03:52.993967 containerd[1434]: time="2025-05-09T00:03:52.993932099Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 9 00:03:52.994554 containerd[1434]: time="2025-05-09T00:03:52.994509281Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 9 00:03:53.536930 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount947386696.mount: Deactivated successfully. May 9 00:03:55.987470 containerd[1434]: time="2025-05-09T00:03:55.987416190Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:03:55.988071 containerd[1434]: time="2025-05-09T00:03:55.988029425Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467" May 9 00:03:55.989443 containerd[1434]: time="2025-05-09T00:03:55.989373510Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:03:55.992099 containerd[1434]: time="2025-05-09T00:03:55.992066736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:03:55.994180 containerd[1434]: time="2025-05-09T00:03:55.994047531Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.999502131s" May 9 00:03:55.994180 containerd[1434]: time="2025-05-09T00:03:55.994084167Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" May 9 00:03:57.981533 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
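[Editor's note] The etcd pull above is the largest transfer in this boot: 66406467 bytes read in 2.999502131s, roughly 21 MiB/s end to end. The reported duration includes unpacking, so this is only a lower bound on raw registry bandwidth. The arithmetic, using the exact figures from the log:

```go
package main

import "fmt"

func main() {
	const bytesRead = 66406467  // "bytes read" from the etcd:3.5.15-0 pull above
	const seconds = 2.999502131 // reported pull duration, unpack included
	mib := float64(bytesRead) / (1 << 20)
	fmt.Printf("%.1f MiB in %.3fs = %.1f MiB/s\n", mib, seconds, mib/seconds)
}
```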
May 9 00:03:57.993246 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:03:58.125344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:03:58.129914 (kubelet)[2004]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 00:03:58.167795 kubelet[2004]: E0509 00:03:58.167738 2004 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 00:03:58.169723 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 00:03:58.169848 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 00:04:00.792457 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:04:00.802284 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:04:00.827800 systemd[1]: Reloading requested from client PID 2020 ('systemctl') (unit session-7.scope)... May 9 00:04:00.827818 systemd[1]: Reloading... May 9 00:04:00.898031 zram_generator::config[2060]: No configuration found. May 9 00:04:01.099913 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 00:04:01.157904 systemd[1]: Reloading finished in 329 ms. May 9 00:04:01.201233 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 9 00:04:01.201302 systemd[1]: kubelet.service: Failed with result 'signal'. May 9 00:04:01.201569 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:04:01.204270 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:04:01.303769 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:04:01.308327 (kubelet)[2105]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 9 00:04:01.345589 kubelet[2105]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 00:04:01.345589 kubelet[2105]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 9 00:04:01.345589 kubelet[2105]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
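[Editor's note] The deprecation warnings above say flags like --container-runtime-endpoint and --volume-plugin-dir belong in the file named by --config. As an illustrative sketch only — the real file on this host is not shown in the log, and the two field values below are assumptions drawn from elsewhere in it — an equivalent KubeletConfiguration can be built and serialized with the upstream k8s.io/kubelet types:

```go
package main

import (
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	kubeletv1beta1 "k8s.io/kubelet/config/v1beta1"
	"sigs.k8s.io/yaml"
)

func main() {
	cfg := kubeletv1beta1.KubeletConfiguration{
		TypeMeta: metav1.TypeMeta{
			APIVersion: "kubelet.config.k8s.io/v1beta1",
			Kind:       "KubeletConfiguration",
		},
		// Matches SystemdCgroup:true in the containerd CRI config earlier in the log.
		CgroupDriver: "systemd",
		// Where the kubelet later picks up the static control-plane pods.
		StaticPodPath: "/etc/kubernetes/manifests",
	}
	out, err := yaml.Marshal(&cfg)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}
```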
May 9 00:04:01.346019 kubelet[2105]: I0509 00:04:01.345728 2105 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 9 00:04:02.072445 kubelet[2105]: I0509 00:04:02.072337 2105 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 9 00:04:02.072445 kubelet[2105]: I0509 00:04:02.072376 2105 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 9 00:04:02.074087 kubelet[2105]: I0509 00:04:02.073048 2105 server.go:929] "Client rotation is on, will bootstrap in background" May 9 00:04:02.111832 kubelet[2105]: E0509 00:04:02.111787 2105 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.43:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError" May 9 00:04:02.112629 kubelet[2105]: I0509 00:04:02.112603 2105 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 9 00:04:02.120318 kubelet[2105]: E0509 00:04:02.120281 2105 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 9 00:04:02.120318 kubelet[2105]: I0509 00:04:02.120314 2105 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 9 00:04:02.124067 kubelet[2105]: I0509 00:04:02.124036 2105 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 9 00:04:02.124408 kubelet[2105]: I0509 00:04:02.124381 2105 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 9 00:04:02.124539 kubelet[2105]: I0509 00:04:02.124505 2105 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 9 00:04:02.124779 kubelet[2105]: I0509 00:04:02.124545 2105 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 9 00:04:02.124942 kubelet[2105]: I0509 00:04:02.124920 2105 topology_manager.go:138] "Creating topology manager with none policy" May 9 00:04:02.124973 kubelet[2105]: I0509 00:04:02.124944 2105 container_manager_linux.go:300] "Creating device plugin manager" May 9 00:04:02.125208 kubelet[2105]: I0509 00:04:02.125183 2105 state_mem.go:36] "Initialized new in-memory state store" May 9 00:04:02.127753 kubelet[2105]: I0509 00:04:02.126982 2105 kubelet.go:408] "Attempting to sync node with API server" May 9 00:04:02.127753 kubelet[2105]: I0509 00:04:02.127032 2105 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 9 00:04:02.127753 kubelet[2105]: I0509 00:04:02.127119 2105 kubelet.go:314] "Adding apiserver pod source" May 9 00:04:02.127753 kubelet[2105]: I0509 00:04:02.127133 2105 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 9 00:04:02.130391 kubelet[2105]: W0509 00:04:02.130326 2105 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused May 9 00:04:02.130498 kubelet[2105]: E0509 00:04:02.130412 2105 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError" May 9 00:04:02.130901 kubelet[2105]: W0509 00:04:02.130855 2105 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused May 9 00:04:02.131026 kubelet[2105]: E0509 00:04:02.130983 2105 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError" May 9 00:04:02.133574 kubelet[2105]: I0509 00:04:02.133536 2105 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 9 00:04:02.135860 kubelet[2105]: I0509 00:04:02.135831 2105 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 9 00:04:02.137883 kubelet[2105]: W0509 00:04:02.137851 2105 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 9 00:04:02.138609 kubelet[2105]: I0509 00:04:02.138589 2105 server.go:1269] "Started kubelet" May 9 00:04:02.140851 kubelet[2105]: I0509 00:04:02.140809 2105 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 9 00:04:02.145089 kubelet[2105]: I0509 00:04:02.142265 2105 volume_manager.go:289] "Starting Kubelet Volume Manager" May 9 00:04:02.145089 kubelet[2105]: I0509 00:04:02.142386 2105 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 9 00:04:02.145089 kubelet[2105]: I0509 00:04:02.142468 2105 reconciler.go:26] "Reconciler: start to sync state" May 9 00:04:02.145089 kubelet[2105]: W0509 00:04:02.142823 2105 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused May 9 00:04:02.145089 kubelet[2105]: E0509 00:04:02.142872 2105 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError" May 9 00:04:02.145089 kubelet[2105]: I0509 00:04:02.143125 2105 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 9 00:04:02.145089 kubelet[2105]: I0509 00:04:02.143162 2105 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 9 00:04:02.145089 kubelet[2105]: E0509 00:04:02.143968 2105 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:04:02.145089 kubelet[2105]: I0509 00:04:02.144521 2105 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 9 00:04:02.145089 kubelet[2105]: I0509 00:04:02.144793 2105 server.go:236] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 9 00:04:02.145089 kubelet[2105]: I0509 00:04:02.144958 2105 server.go:460] "Adding debug handlers to kubelet server" May 9 00:04:02.145392 kubelet[2105]: E0509 00:04:02.141852 2105 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.43:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.43:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183db2ffc86c3b27 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-09 00:04:02.138561319 +0000 UTC m=+0.827109022,LastTimestamp:2025-05-09 00:04:02.138561319 +0000 UTC m=+0.827109022,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 9 00:04:02.145695 kubelet[2105]: I0509 00:04:02.145477 2105 factory.go:221] Registration of the systemd container factory successfully May 9 00:04:02.145695 kubelet[2105]: I0509 00:04:02.145678 2105 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 9 00:04:02.149121 kubelet[2105]: E0509 00:04:02.146304 2105 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="200ms" May 9 00:04:02.149121 kubelet[2105]: I0509 00:04:02.147357 2105 factory.go:221] Registration of the containerd container factory successfully May 9 00:04:02.171140 kubelet[2105]: I0509 00:04:02.169452 2105 cpu_manager.go:214] "Starting CPU manager" policy="none" May 9 00:04:02.171258 kubelet[2105]: I0509 00:04:02.171164 2105 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 9 00:04:02.171258 kubelet[2105]: I0509 00:04:02.171198 2105 state_mem.go:36] "Initialized new in-memory state store" May 9 00:04:02.173738 kubelet[2105]: I0509 00:04:02.173687 2105 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 9 00:04:02.174952 kubelet[2105]: I0509 00:04:02.174923 2105 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 9 00:04:02.175015 kubelet[2105]: I0509 00:04:02.174963 2105 status_manager.go:217] "Starting to sync pod status with apiserver" May 9 00:04:02.175015 kubelet[2105]: I0509 00:04:02.174982 2105 kubelet.go:2321] "Starting kubelet main sync loop" May 9 00:04:02.175083 kubelet[2105]: E0509 00:04:02.175043 2105 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 9 00:04:02.175781 kubelet[2105]: W0509 00:04:02.175729 2105 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused May 9 00:04:02.175781 kubelet[2105]: E0509 00:04:02.175775 2105 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError" May 9 00:04:02.237323 kubelet[2105]: I0509 00:04:02.237290 2105 policy_none.go:49] "None policy: Start" May 9 00:04:02.238421 kubelet[2105]: I0509 00:04:02.238350 2105 memory_manager.go:170] "Starting memorymanager" policy="None" May 9 00:04:02.238421 kubelet[2105]: I0509 00:04:02.238385 2105 state_mem.go:35] "Initializing new in-memory state store" May 9 00:04:02.244424 kubelet[2105]: E0509 00:04:02.244385 2105 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:04:02.245417 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 9 00:04:02.255810 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 9 00:04:02.258857 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 9 00:04:02.268928 kubelet[2105]: I0509 00:04:02.268874 2105 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 9 00:04:02.269153 kubelet[2105]: I0509 00:04:02.269126 2105 eviction_manager.go:189] "Eviction manager: starting control loop" May 9 00:04:02.269186 kubelet[2105]: I0509 00:04:02.269146 2105 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 9 00:04:02.269935 kubelet[2105]: I0509 00:04:02.269901 2105 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 9 00:04:02.271733 kubelet[2105]: E0509 00:04:02.271676 2105 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 9 00:04:02.284032 systemd[1]: Created slice kubepods-burstable-pod81072df2ffd250b667a6653b755c19c8.slice - libcontainer container kubepods-burstable-pod81072df2ffd250b667a6653b755c19c8.slice. May 9 00:04:02.299329 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice - libcontainer container kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice. May 9 00:04:02.312603 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice - libcontainer container kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice. 
May 9 00:04:02.346842 kubelet[2105]: E0509 00:04:02.346709 2105 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="400ms" May 9 00:04:02.370826 kubelet[2105]: I0509 00:04:02.370775 2105 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 9 00:04:02.371215 kubelet[2105]: E0509 00:04:02.371173 2105 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost" May 9 00:04:02.443572 kubelet[2105]: I0509 00:04:02.443536 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:04:02.443636 kubelet[2105]: I0509 00:04:02.443579 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 9 00:04:02.443636 kubelet[2105]: I0509 00:04:02.443601 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:04:02.443636 kubelet[2105]: I0509 00:04:02.443616 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:04:02.443636 kubelet[2105]: I0509 00:04:02.443636 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/81072df2ffd250b667a6653b755c19c8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"81072df2ffd250b667a6653b755c19c8\") " pod="kube-system/kube-apiserver-localhost" May 9 00:04:02.443734 kubelet[2105]: I0509 00:04:02.443650 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/81072df2ffd250b667a6653b755c19c8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"81072df2ffd250b667a6653b755c19c8\") " pod="kube-system/kube-apiserver-localhost" May 9 00:04:02.443734 kubelet[2105]: I0509 00:04:02.443664 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/81072df2ffd250b667a6653b755c19c8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"81072df2ffd250b667a6653b755c19c8\") " pod="kube-system/kube-apiserver-localhost" May 9 00:04:02.443734 kubelet[2105]: I0509 00:04:02.443679 2105 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:04:02.443734 kubelet[2105]: I0509 00:04:02.443695 2105 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:04:02.573235 kubelet[2105]: I0509 00:04:02.573159 2105 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 9 00:04:02.573546 kubelet[2105]: E0509 00:04:02.573518 2105 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost" May 9 00:04:02.598003 kubelet[2105]: E0509 00:04:02.597826 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:04:02.598766 containerd[1434]: time="2025-05-09T00:04:02.598512295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:81072df2ffd250b667a6653b755c19c8,Namespace:kube-system,Attempt:0,}" May 9 00:04:02.610657 kubelet[2105]: E0509 00:04:02.610498 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:04:02.611072 containerd[1434]: time="2025-05-09T00:04:02.611024991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}" May 9 00:04:02.615498 kubelet[2105]: E0509 00:04:02.615459 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:04:02.616186 containerd[1434]: time="2025-05-09T00:04:02.615918929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}" May 9 00:04:02.747534 kubelet[2105]: E0509 00:04:02.747484 2105 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="800ms" May 9 00:04:02.975654 kubelet[2105]: I0509 00:04:02.975527 2105 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 9 00:04:02.976109 kubelet[2105]: E0509 00:04:02.975896 2105 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost" May 9 00:04:03.068941 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount876006067.mount: Deactivated successfully. 
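[Editor's note] Every reflector, lease, and node-registration error in this stretch carries the same root cause: "dial tcp 10.0.0.43:6443: connect: connection refused" — the kubelet is up before the kube-apiserver static pod it is about to sandbox. A minimal sketch of the reachability test those client-go calls are implicitly making, using the endpoint from the log:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The apiserver endpoint every failed Get in the log points at.
	const addr = "10.0.0.43:6443"
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		// Expected until the kube-apiserver-localhost sandbox below is running.
		fmt.Println("not reachable yet:", err)
		return
	}
	conn.Close()
	fmt.Println(addr, "accepting connections")
}
```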
May 9 00:04:03.081051 containerd[1434]: time="2025-05-09T00:04:03.080359023Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:04:03.083053 containerd[1434]: time="2025-05-09T00:04:03.082983424Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" May 9 00:04:03.083904 containerd[1434]: time="2025-05-09T00:04:03.083845971Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:04:03.084740 containerd[1434]: time="2025-05-09T00:04:03.084710399Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:04:03.085557 containerd[1434]: time="2025-05-09T00:04:03.085352064Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 9 00:04:03.086044 containerd[1434]: time="2025-05-09T00:04:03.086009626Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 9 00:04:03.087028 containerd[1434]: time="2025-05-09T00:04:03.086845623Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:04:03.090187 kubelet[2105]: W0509 00:04:03.090115 2105 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused May 9 00:04:03.090314 kubelet[2105]: E0509 00:04:03.090211 2105 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError" May 9 00:04:03.091549 containerd[1434]: time="2025-05-09T00:04:03.091498371Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:04:03.092488 containerd[1434]: time="2025-05-09T00:04:03.092450576Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 493.853976ms" May 9 00:04:03.094431 containerd[1434]: time="2025-05-09T00:04:03.094266569Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 483.157395ms" May 9 00:04:03.095937 containerd[1434]: 
time="2025-05-09T00:04:03.095895918Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 479.900775ms" May 9 00:04:03.253450 containerd[1434]: time="2025-05-09T00:04:03.252691796Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:04:03.253450 containerd[1434]: time="2025-05-09T00:04:03.252779653Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:04:03.253450 containerd[1434]: time="2025-05-09T00:04:03.252794749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:04:03.253450 containerd[1434]: time="2025-05-09T00:04:03.252890815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:04:03.255779 containerd[1434]: time="2025-05-09T00:04:03.255150295Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:04:03.255897 containerd[1434]: time="2025-05-09T00:04:03.255796324Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:04:03.255897 containerd[1434]: time="2025-05-09T00:04:03.255813944Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:04:03.256054 containerd[1434]: time="2025-05-09T00:04:03.256020891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:04:03.256975 containerd[1434]: time="2025-05-09T00:04:03.256900216Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:04:03.256975 containerd[1434]: time="2025-05-09T00:04:03.256945265Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:04:03.257123 containerd[1434]: time="2025-05-09T00:04:03.256956598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:04:03.257123 containerd[1434]: time="2025-05-09T00:04:03.257046136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:04:03.282266 systemd[1]: Started cri-containerd-77efcafcaa94c4fd598c05282dcc6cdc197cbebeafd26cff36ea43f5916e7eeb.scope - libcontainer container 77efcafcaa94c4fd598c05282dcc6cdc197cbebeafd26cff36ea43f5916e7eeb. May 9 00:04:03.284017 systemd[1]: Started cri-containerd-9c98be96cc52988dbfe17533da1c869d7c3f3f21a0ffaa272ebf950800a34b30.scope - libcontainer container 9c98be96cc52988dbfe17533da1c869d7c3f3f21a0ffaa272ebf950800a34b30. May 9 00:04:03.285112 systemd[1]: Started cri-containerd-d9b3b1196193e8b959d84c1927761c2590b3ff16dac062c195b95df84727d2e5.scope - libcontainer container d9b3b1196193e8b959d84c1927761c2590b3ff16dac062c195b95df84727d2e5. 
May 9 00:04:03.326686 containerd[1434]: time="2025-05-09T00:04:03.326587593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"77efcafcaa94c4fd598c05282dcc6cdc197cbebeafd26cff36ea43f5916e7eeb\"" May 9 00:04:03.327957 kubelet[2105]: E0509 00:04:03.327878 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:04:03.330424 containerd[1434]: time="2025-05-09T00:04:03.330216016Z" level=info msg="CreateContainer within sandbox \"77efcafcaa94c4fd598c05282dcc6cdc197cbebeafd26cff36ea43f5916e7eeb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 9 00:04:03.334823 containerd[1434]: time="2025-05-09T00:04:03.334784311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c98be96cc52988dbfe17533da1c869d7c3f3f21a0ffaa272ebf950800a34b30\"" May 9 00:04:03.336971 kubelet[2105]: E0509 00:04:03.336854 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:04:03.339632 containerd[1434]: time="2025-05-09T00:04:03.339585942Z" level=info msg="CreateContainer within sandbox \"9c98be96cc52988dbfe17533da1c869d7c3f3f21a0ffaa272ebf950800a34b30\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 9 00:04:03.342840 containerd[1434]: time="2025-05-09T00:04:03.342728792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:81072df2ffd250b667a6653b755c19c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"d9b3b1196193e8b959d84c1927761c2590b3ff16dac062c195b95df84727d2e5\"" May 9 00:04:03.343732 kubelet[2105]: E0509 00:04:03.343686 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:04:03.345344 containerd[1434]: time="2025-05-09T00:04:03.345309945Z" level=info msg="CreateContainer within sandbox \"d9b3b1196193e8b959d84c1927761c2590b3ff16dac062c195b95df84727d2e5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 9 00:04:03.358792 containerd[1434]: time="2025-05-09T00:04:03.358730397Z" level=info msg="CreateContainer within sandbox \"77efcafcaa94c4fd598c05282dcc6cdc197cbebeafd26cff36ea43f5916e7eeb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"708010d7acb7dc1bce564492e42d02c6c8bb910a6e97981c892af0e8ea1512e6\"" May 9 00:04:03.359529 containerd[1434]: time="2025-05-09T00:04:03.359431607Z" level=info msg="StartContainer for \"708010d7acb7dc1bce564492e42d02c6c8bb910a6e97981c892af0e8ea1512e6\"" May 9 00:04:03.364945 containerd[1434]: time="2025-05-09T00:04:03.364879747Z" level=info msg="CreateContainer within sandbox \"9c98be96cc52988dbfe17533da1c869d7c3f3f21a0ffaa272ebf950800a34b30\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ee6184826225bea84691a070d26e13bc74eafe45ef10f695dbaaaa8cc52c3b44\"" May 9 00:04:03.365454 containerd[1434]: time="2025-05-09T00:04:03.365424666Z" level=info msg="StartContainer for \"ee6184826225bea84691a070d26e13bc74eafe45ef10f695dbaaaa8cc52c3b44\"" May 9 00:04:03.372592 
containerd[1434]: time="2025-05-09T00:04:03.371191035Z" level=info msg="CreateContainer within sandbox \"d9b3b1196193e8b959d84c1927761c2590b3ff16dac062c195b95df84727d2e5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"30128217c7e6a27b6b8dc0dfb7d8bfcc128c5caba09a4164a5c8ed76c84aa36a\"" May 9 00:04:03.373173 containerd[1434]: time="2025-05-09T00:04:03.373132086Z" level=info msg="StartContainer for \"30128217c7e6a27b6b8dc0dfb7d8bfcc128c5caba09a4164a5c8ed76c84aa36a\"" May 9 00:04:03.373749 kubelet[2105]: W0509 00:04:03.373690 2105 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused May 9 00:04:03.374090 kubelet[2105]: E0509 00:04:03.373768 2105 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError" May 9 00:04:03.392223 systemd[1]: Started cri-containerd-708010d7acb7dc1bce564492e42d02c6c8bb910a6e97981c892af0e8ea1512e6.scope - libcontainer container 708010d7acb7dc1bce564492e42d02c6c8bb910a6e97981c892af0e8ea1512e6. May 9 00:04:03.395275 systemd[1]: Started cri-containerd-ee6184826225bea84691a070d26e13bc74eafe45ef10f695dbaaaa8cc52c3b44.scope - libcontainer container ee6184826225bea84691a070d26e13bc74eafe45ef10f695dbaaaa8cc52c3b44. May 9 00:04:03.400700 systemd[1]: Started cri-containerd-30128217c7e6a27b6b8dc0dfb7d8bfcc128c5caba09a4164a5c8ed76c84aa36a.scope - libcontainer container 30128217c7e6a27b6b8dc0dfb7d8bfcc128c5caba09a4164a5c8ed76c84aa36a. 
May 9 00:04:03.501354 kubelet[2105]: W0509 00:04:03.501279 2105 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused May 9 00:04:03.501354 kubelet[2105]: E0509 00:04:03.501358 2105 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError" May 9 00:04:03.530841 kubelet[2105]: W0509 00:04:03.530694 2105 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused May 9 00:04:03.530841 kubelet[2105]: E0509 00:04:03.530773 2105 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError" May 9 00:04:03.546715 containerd[1434]: time="2025-05-09T00:04:03.546637707Z" level=info msg="StartContainer for \"708010d7acb7dc1bce564492e42d02c6c8bb910a6e97981c892af0e8ea1512e6\" returns successfully" May 9 00:04:03.546828 containerd[1434]: time="2025-05-09T00:04:03.546814021Z" level=info msg="StartContainer for \"30128217c7e6a27b6b8dc0dfb7d8bfcc128c5caba09a4164a5c8ed76c84aa36a\" returns successfully" May 9 00:04:03.546903 containerd[1434]: time="2025-05-09T00:04:03.546861112Z" level=info msg="StartContainer for \"ee6184826225bea84691a070d26e13bc74eafe45ef10f695dbaaaa8cc52c3b44\" returns successfully" May 9 00:04:03.548657 kubelet[2105]: E0509 00:04:03.548618 2105 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="1.6s" May 9 00:04:03.777106 kubelet[2105]: I0509 00:04:03.777070 2105 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 9 00:04:04.184844 kubelet[2105]: E0509 00:04:04.184696 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:04:04.186312 kubelet[2105]: E0509 00:04:04.186119 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:04:04.189883 kubelet[2105]: E0509 00:04:04.189718 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:04:05.189239 kubelet[2105]: E0509 00:04:05.189208 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:04:05.547411 kubelet[2105]: E0509 00:04:05.547181 2105 nodelease.go:49] "Failed to get node when trying to set owner ref to 
the node lease" err="nodes \"localhost\" not found" node="localhost" May 9 00:04:05.615453 kubelet[2105]: I0509 00:04:05.615401 2105 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 9 00:04:05.615453 kubelet[2105]: E0509 00:04:05.615448 2105 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 9 00:04:06.129202 kubelet[2105]: I0509 00:04:06.129150 2105 apiserver.go:52] "Watching apiserver" May 9 00:04:06.142536 kubelet[2105]: I0509 00:04:06.142470 2105 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 9 00:04:06.195286 kubelet[2105]: E0509 00:04:06.195247 2105 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 9 00:04:06.195640 kubelet[2105]: E0509 00:04:06.195434 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:04:07.781531 systemd[1]: Reloading requested from client PID 2386 ('systemctl') (unit session-7.scope)... May 9 00:04:07.781566 systemd[1]: Reloading... May 9 00:04:07.857021 zram_generator::config[2428]: No configuration found. May 9 00:04:07.941371 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 00:04:08.010691 systemd[1]: Reloading finished in 228 ms. May 9 00:04:08.030362 kubelet[2105]: E0509 00:04:08.030249 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:04:08.055380 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:04:08.069234 systemd[1]: kubelet.service: Deactivated successfully. May 9 00:04:08.071125 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:04:08.071186 systemd[1]: kubelet.service: Consumed 1.249s CPU time, 116.9M memory peak, 0B memory swap peak. May 9 00:04:08.082333 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:04:08.180182 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:04:08.185802 (kubelet)[2467]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 9 00:04:08.229518 kubelet[2467]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 00:04:08.229518 kubelet[2467]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 9 00:04:08.229518 kubelet[2467]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 9 00:04:08.230100 kubelet[2467]: I0509 00:04:08.229558 2467 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 9 00:04:08.235089 kubelet[2467]: I0509 00:04:08.235051 2467 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 9 00:04:08.235089 kubelet[2467]: I0509 00:04:08.235081 2467 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 9 00:04:08.235317 kubelet[2467]: I0509 00:04:08.235293 2467 server.go:929] "Client rotation is on, will bootstrap in background" May 9 00:04:08.237102 kubelet[2467]: I0509 00:04:08.237034 2467 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 9 00:04:08.243254 kubelet[2467]: I0509 00:04:08.243221 2467 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 9 00:04:08.246158 kubelet[2467]: E0509 00:04:08.246123 2467 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 9 00:04:08.246158 kubelet[2467]: I0509 00:04:08.246157 2467 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 9 00:04:08.249027 kubelet[2467]: I0509 00:04:08.248456 2467 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 9 00:04:08.249027 kubelet[2467]: I0509 00:04:08.248594 2467 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 9 00:04:08.249027 kubelet[2467]: I0509 00:04:08.248680 2467 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 9 00:04:08.249027 kubelet[2467]: I0509 00:04:08.248710 2467 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 9 00:04:08.252195 kubelet[2467]: I0509 00:04:08.248894 2467 topology_manager.go:138] "Creating topology manager with none policy" May 9 00:04:08.252195 kubelet[2467]: I0509 00:04:08.248903 2467 container_manager_linux.go:300] "Creating device plugin manager" May 9 00:04:08.252195 kubelet[2467]: I0509 00:04:08.248935 2467 state_mem.go:36] "Initialized new in-memory state store" May 9 00:04:08.252195 kubelet[2467]: I0509 00:04:08.249064 2467 kubelet.go:408] "Attempting to sync node with API server" May 9 00:04:08.252195 kubelet[2467]: I0509 00:04:08.249082 2467 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 9 00:04:08.252195 kubelet[2467]: I0509 00:04:08.249104 2467 kubelet.go:314] "Adding apiserver pod source" May 9 00:04:08.252195 kubelet[2467]: I0509 00:04:08.249114 2467 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 9 00:04:08.253192 kubelet[2467]: I0509 00:04:08.253161 2467 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 9 00:04:08.253681 kubelet[2467]: I0509 00:04:08.253651 2467 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 9 00:04:08.254608 kubelet[2467]: I0509 00:04:08.254579 2467 server.go:1269] "Started kubelet" May 9 00:04:08.257435 kubelet[2467]: I0509 00:04:08.257403 2467 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 9 00:04:08.257588 kubelet[2467]: I0509 00:04:08.257555 2467 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 9 00:04:08.258850 kubelet[2467]: I0509 00:04:08.258822 2467 server.go:460] "Adding debug handlers to kubelet server" May 9 00:04:08.266189 kubelet[2467]: E0509 00:04:08.266151 2467 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:04:08.266830 kubelet[2467]: I0509 00:04:08.259721 2467 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 9 00:04:08.266916 kubelet[2467]: I0509 00:04:08.266892 2467 volume_manager.go:289] "Starting Kubelet Volume Manager" May 9 00:04:08.267019 kubelet[2467]: I0509 00:04:08.267004 2467 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 9 00:04:08.267150 kubelet[2467]: I0509 00:04:08.267134 2467 reconciler.go:26] "Reconciler: start to sync state" May 9 00:04:08.267899 kubelet[2467]: I0509 00:04:08.267073 2467 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 9 00:04:08.269659 kubelet[2467]: I0509 00:04:08.269633 2467 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 9 00:04:08.278573 kubelet[2467]: I0509 00:04:08.278354 2467 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 9 00:04:08.279229 kubelet[2467]: I0509 00:04:08.279172 2467 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 9 00:04:08.281549 kubelet[2467]: I0509 00:04:08.281520 2467 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 9 00:04:08.281603 kubelet[2467]: I0509 00:04:08.281558 2467 status_manager.go:217] "Starting to sync pod status with apiserver" May 9 00:04:08.281603 kubelet[2467]: I0509 00:04:08.281578 2467 kubelet.go:2321] "Starting kubelet main sync loop" May 9 00:04:08.283211 kubelet[2467]: E0509 00:04:08.283169 2467 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 9 00:04:08.287399 kubelet[2467]: I0509 00:04:08.287368 2467 factory.go:221] Registration of the containerd container factory successfully May 9 00:04:08.289012 kubelet[2467]: I0509 00:04:08.287604 2467 factory.go:221] Registration of the systemd container factory successfully May 9 00:04:08.324480 kubelet[2467]: I0509 00:04:08.323339 2467 cpu_manager.go:214] "Starting CPU manager" policy="none" May 9 00:04:08.324480 kubelet[2467]: I0509 00:04:08.323371 2467 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 9 00:04:08.324480 kubelet[2467]: I0509 00:04:08.323393 2467 state_mem.go:36] "Initialized new in-memory state store" May 9 00:04:08.324480 kubelet[2467]: I0509 00:04:08.323614 2467 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 9 00:04:08.324480 kubelet[2467]: I0509 00:04:08.323627 2467 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 9 00:04:08.324480 kubelet[2467]: I0509 00:04:08.323646 2467 policy_none.go:49] "None policy: Start" May 9 00:04:08.326427 kubelet[2467]: I0509 00:04:08.326399 2467 memory_manager.go:170] "Starting memorymanager" policy="None" May 9 00:04:08.326427 kubelet[2467]: I0509 00:04:08.326433 2467 state_mem.go:35] "Initializing new in-memory state store" May 9 00:04:08.326687 kubelet[2467]: I0509 00:04:08.326669 2467 state_mem.go:75] "Updated machine memory state" May 9 00:04:08.330915 kubelet[2467]: I0509 00:04:08.330871 2467 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 9 00:04:08.331197 kubelet[2467]: I0509 00:04:08.331162 2467 eviction_manager.go:189] "Eviction manager: starting control loop" May 9 00:04:08.331259 kubelet[2467]: I0509 
00:04:08.331182 2467 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 9 00:04:08.331862 kubelet[2467]: I0509 00:04:08.331426 2467 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 9 00:04:08.399673 kubelet[2467]: E0509 00:04:08.399612 2467 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 9 00:04:08.435423 kubelet[2467]: I0509 00:04:08.435381 2467 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 9 00:04:08.442581 kubelet[2467]: I0509 00:04:08.442533 2467 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 9 00:04:08.442928 kubelet[2467]: I0509 00:04:08.442850 2467 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 9 00:04:08.468816 kubelet[2467]: I0509 00:04:08.468782 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:04:08.469055 kubelet[2467]: I0509 00:04:08.468928 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 9 00:04:08.469309 kubelet[2467]: I0509 00:04:08.468954 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/81072df2ffd250b667a6653b755c19c8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"81072df2ffd250b667a6653b755c19c8\") " pod="kube-system/kube-apiserver-localhost" May 9 00:04:08.469309 kubelet[2467]: I0509 00:04:08.469172 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/81072df2ffd250b667a6653b755c19c8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"81072df2ffd250b667a6653b755c19c8\") " pod="kube-system/kube-apiserver-localhost" May 9 00:04:08.469309 kubelet[2467]: I0509 00:04:08.469199 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:04:08.469309 kubelet[2467]: I0509 00:04:08.469218 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:04:08.469309 kubelet[2467]: I0509 00:04:08.469234 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod 
\"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:04:08.469465 kubelet[2467]: I0509 00:04:08.469252 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:04:08.469465 kubelet[2467]: I0509 00:04:08.469267 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/81072df2ffd250b667a6653b755c19c8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"81072df2ffd250b667a6653b755c19c8\") " pod="kube-system/kube-apiserver-localhost" May 9 00:04:08.690859 kubelet[2467]: E0509 00:04:08.690747 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:04:08.698370 kubelet[2467]: E0509 00:04:08.698329 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:04:08.700678 kubelet[2467]: E0509 00:04:08.700631 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:04:09.249679 kubelet[2467]: I0509 00:04:09.249632 2467 apiserver.go:52] "Watching apiserver" May 9 00:04:09.267701 kubelet[2467]: I0509 00:04:09.267622 2467 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 9 00:04:09.306954 kubelet[2467]: E0509 00:04:09.306915 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:04:09.308324 kubelet[2467]: E0509 00:04:09.307861 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:04:09.314833 kubelet[2467]: E0509 00:04:09.314763 2467 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 9 00:04:09.314980 kubelet[2467]: E0509 00:04:09.314930 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:04:09.338309 kubelet[2467]: I0509 00:04:09.338231 2467 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.33821335 podStartE2EDuration="1.33821335s" podCreationTimestamp="2025-05-09 00:04:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:04:09.330058262 +0000 UTC m=+1.140727553" watchObservedRunningTime="2025-05-09 00:04:09.33821335 +0000 UTC m=+1.148882641" May 9 00:04:09.357473 kubelet[2467]: I0509 00:04:09.357272 2467 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" 
podStartSLOduration=1.357253605 podStartE2EDuration="1.357253605s" podCreationTimestamp="2025-05-09 00:04:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:04:09.338516185 +0000 UTC m=+1.149185476" watchObservedRunningTime="2025-05-09 00:04:09.357253605 +0000 UTC m=+1.167922896" May 9 00:04:10.309530 kubelet[2467]: E0509 00:04:10.309475 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:04:12.968658 sudo[1616]: pam_unix(sudo:session): session closed for user root May 9 00:04:12.974340 sshd[1613]: pam_unix(sshd:session): session closed for user core May 9 00:04:12.977872 systemd[1]: sshd@6-10.0.0.43:22-10.0.0.1:46050.service: Deactivated successfully. May 9 00:04:12.979677 systemd[1]: session-7.scope: Deactivated successfully. May 9 00:04:12.981042 systemd[1]: session-7.scope: Consumed 6.449s CPU time, 153.8M memory peak, 0B memory swap peak. May 9 00:04:12.981558 systemd-logind[1421]: Session 7 logged out. Waiting for processes to exit. May 9 00:04:12.982602 systemd-logind[1421]: Removed session 7. May 9 00:04:14.086939 kubelet[2467]: I0509 00:04:14.086902 2467 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 9 00:04:14.087588 containerd[1434]: time="2025-05-09T00:04:14.087480295Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 9 00:04:14.087844 kubelet[2467]: I0509 00:04:14.087728 2467 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 9 00:04:14.896037 kubelet[2467]: E0509 00:04:14.895973 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:04:14.915697 kubelet[2467]: I0509 00:04:14.915634 2467 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=6.9149772 podStartE2EDuration="6.9149772s" podCreationTimestamp="2025-05-09 00:04:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:04:09.357326792 +0000 UTC m=+1.167996083" watchObservedRunningTime="2025-05-09 00:04:14.9149772 +0000 UTC m=+6.725646491" May 9 00:04:15.080347 systemd[1]: Created slice kubepods-besteffort-pod3a0945e5_efc2_4b8b_9e01_c8d4b0a0b2dc.slice - libcontainer container kubepods-besteffort-pod3a0945e5_efc2_4b8b_9e01_c8d4b0a0b2dc.slice. 
May 9 00:04:15.115502 kubelet[2467]: I0509 00:04:15.115459 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3a0945e5-efc2-4b8b-9e01-c8d4b0a0b2dc-kube-proxy\") pod \"kube-proxy-mzmlb\" (UID: \"3a0945e5-efc2-4b8b-9e01-c8d4b0a0b2dc\") " pod="kube-system/kube-proxy-mzmlb" May 9 00:04:15.115502 kubelet[2467]: I0509 00:04:15.115500 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a0945e5-efc2-4b8b-9e01-c8d4b0a0b2dc-xtables-lock\") pod \"kube-proxy-mzmlb\" (UID: \"3a0945e5-efc2-4b8b-9e01-c8d4b0a0b2dc\") " pod="kube-system/kube-proxy-mzmlb" May 9 00:04:15.115502 kubelet[2467]: I0509 00:04:15.115516 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a0945e5-efc2-4b8b-9e01-c8d4b0a0b2dc-lib-modules\") pod \"kube-proxy-mzmlb\" (UID: \"3a0945e5-efc2-4b8b-9e01-c8d4b0a0b2dc\") " pod="kube-system/kube-proxy-mzmlb" May 9 00:04:15.115884 kubelet[2467]: I0509 00:04:15.115532 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg5kp\" (UniqueName: \"kubernetes.io/projected/3a0945e5-efc2-4b8b-9e01-c8d4b0a0b2dc-kube-api-access-qg5kp\") pod \"kube-proxy-mzmlb\" (UID: \"3a0945e5-efc2-4b8b-9e01-c8d4b0a0b2dc\") " pod="kube-system/kube-proxy-mzmlb" May 9 00:04:15.194517 systemd[1]: Created slice kubepods-besteffort-pod925a2309_d703_462a_8ca6_f931fd1a1155.slice - libcontainer container kubepods-besteffort-pod925a2309_d703_462a_8ca6_f931fd1a1155.slice. May 9 00:04:15.216759 kubelet[2467]: I0509 00:04:15.216362 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/925a2309-d703-462a-8ca6-f931fd1a1155-var-lib-calico\") pod \"tigera-operator-6f6897fdc5-rjwlz\" (UID: \"925a2309-d703-462a-8ca6-f931fd1a1155\") " pod="tigera-operator/tigera-operator-6f6897fdc5-rjwlz" May 9 00:04:15.216759 kubelet[2467]: I0509 00:04:15.216409 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzjfs\" (UniqueName: \"kubernetes.io/projected/925a2309-d703-462a-8ca6-f931fd1a1155-kube-api-access-kzjfs\") pod \"tigera-operator-6f6897fdc5-rjwlz\" (UID: \"925a2309-d703-462a-8ca6-f931fd1a1155\") " pod="tigera-operator/tigera-operator-6f6897fdc5-rjwlz" May 9 00:04:15.317283 kubelet[2467]: E0509 00:04:15.317187 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:04:15.398265 kubelet[2467]: E0509 00:04:15.398217 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:04:15.398872 containerd[1434]: time="2025-05-09T00:04:15.398836324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mzmlb,Uid:3a0945e5-efc2-4b8b-9e01-c8d4b0a0b2dc,Namespace:kube-system,Attempt:0,}" May 9 00:04:15.420763 containerd[1434]: time="2025-05-09T00:04:15.420419418Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:04:15.420763 containerd[1434]: time="2025-05-09T00:04:15.420556828Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:04:15.420763 containerd[1434]: time="2025-05-09T00:04:15.420573559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:04:15.420763 containerd[1434]: time="2025-05-09T00:04:15.420672543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:04:15.437178 systemd[1]: Started cri-containerd-ce48c874ed452696917d800b3394ac972a3aea5b4efcaed14296b41acba320e7.scope - libcontainer container ce48c874ed452696917d800b3394ac972a3aea5b4efcaed14296b41acba320e7. May 9 00:04:15.459314 containerd[1434]: time="2025-05-09T00:04:15.459195469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mzmlb,Uid:3a0945e5-efc2-4b8b-9e01-c8d4b0a0b2dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce48c874ed452696917d800b3394ac972a3aea5b4efcaed14296b41acba320e7\"" May 9 00:04:15.460545 kubelet[2467]: E0509 00:04:15.460513 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:04:15.463468 containerd[1434]: time="2025-05-09T00:04:15.463234019Z" level=info msg="CreateContainer within sandbox \"ce48c874ed452696917d800b3394ac972a3aea5b4efcaed14296b41acba320e7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 9 00:04:15.497776 containerd[1434]: time="2025-05-09T00:04:15.497535836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-rjwlz,Uid:925a2309-d703-462a-8ca6-f931fd1a1155,Namespace:tigera-operator,Attempt:0,}" May 9 00:04:15.509669 containerd[1434]: time="2025-05-09T00:04:15.509611499Z" level=info msg="CreateContainer within sandbox \"ce48c874ed452696917d800b3394ac972a3aea5b4efcaed14296b41acba320e7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6906cdd79806808275a19ed381fa51fb4f6e3ef287fb3d3104017d313f90f311\"" May 9 00:04:15.513056 containerd[1434]: time="2025-05-09T00:04:15.511147300Z" level=info msg="StartContainer for \"6906cdd79806808275a19ed381fa51fb4f6e3ef287fb3d3104017d313f90f311\"" May 9 00:04:15.526514 containerd[1434]: time="2025-05-09T00:04:15.526414922Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:04:15.526633 containerd[1434]: time="2025-05-09T00:04:15.526526514Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:04:15.526633 containerd[1434]: time="2025-05-09T00:04:15.526562298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:04:15.526709 containerd[1434]: time="2025-05-09T00:04:15.526674851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:04:15.544189 systemd[1]: Started cri-containerd-6906cdd79806808275a19ed381fa51fb4f6e3ef287fb3d3104017d313f90f311.scope - libcontainer container 6906cdd79806808275a19ed381fa51fb4f6e3ef287fb3d3104017d313f90f311. 
May 9 00:04:15.546919 systemd[1]: Started cri-containerd-a1e19bd719c3003f4aa893faad9a78c1032f9609f5e3c875105e5208dfb54deb.scope - libcontainer container a1e19bd719c3003f4aa893faad9a78c1032f9609f5e3c875105e5208dfb54deb. May 9 00:04:15.582516 containerd[1434]: time="2025-05-09T00:04:15.582458897Z" level=info msg="StartContainer for \"6906cdd79806808275a19ed381fa51fb4f6e3ef287fb3d3104017d313f90f311\" returns successfully" May 9 00:04:15.583143 containerd[1434]: time="2025-05-09T00:04:15.583097953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-rjwlz,Uid:925a2309-d703-462a-8ca6-f931fd1a1155,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a1e19bd719c3003f4aa893faad9a78c1032f9609f5e3c875105e5208dfb54deb\"" May 9 00:04:15.596517 containerd[1434]: time="2025-05-09T00:04:15.596461055Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 9 00:04:16.323636 kubelet[2467]: E0509 00:04:16.323574 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:04:17.120701 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1232586845.mount: Deactivated successfully. May 9 00:04:17.175282 kubelet[2467]: E0509 00:04:17.175235 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:04:17.193125 kubelet[2467]: I0509 00:04:17.193059 2467 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mzmlb" podStartSLOduration=2.193041221 podStartE2EDuration="2.193041221s" podCreationTimestamp="2025-05-09 00:04:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:04:16.335576746 +0000 UTC m=+8.146245997" watchObservedRunningTime="2025-05-09 00:04:17.193041221 +0000 UTC m=+9.003710512" May 9 00:04:17.325679 kubelet[2467]: E0509 00:04:17.325650 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:04:17.564028 containerd[1434]: time="2025-05-09T00:04:17.563123837Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:04:17.564028 containerd[1434]: time="2025-05-09T00:04:17.563663273Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084" May 9 00:04:17.564617 containerd[1434]: time="2025-05-09T00:04:17.564586413Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:04:17.567097 containerd[1434]: time="2025-05-09T00:04:17.567041690Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:04:17.567826 containerd[1434]: time="2025-05-09T00:04:17.567790408Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest 
\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 1.971283765s" May 9 00:04:17.567889 containerd[1434]: time="2025-05-09T00:04:17.567825509Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\"" May 9 00:04:17.600033 containerd[1434]: time="2025-05-09T00:04:17.599960398Z" level=info msg="CreateContainer within sandbox \"a1e19bd719c3003f4aa893faad9a78c1032f9609f5e3c875105e5208dfb54deb\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 9 00:04:17.609686 containerd[1434]: time="2025-05-09T00:04:17.609634140Z" level=info msg="CreateContainer within sandbox \"a1e19bd719c3003f4aa893faad9a78c1032f9609f5e3c875105e5208dfb54deb\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"36ddf081428c78309d4998923d3716afe3cc8e6e7f06c57581be430a3ad9cbb8\"" May 9 00:04:17.610227 containerd[1434]: time="2025-05-09T00:04:17.610177578Z" level=info msg="StartContainer for \"36ddf081428c78309d4998923d3716afe3cc8e6e7f06c57581be430a3ad9cbb8\"" May 9 00:04:17.641202 systemd[1]: Started cri-containerd-36ddf081428c78309d4998923d3716afe3cc8e6e7f06c57581be430a3ad9cbb8.scope - libcontainer container 36ddf081428c78309d4998923d3716afe3cc8e6e7f06c57581be430a3ad9cbb8. May 9 00:04:17.662740 containerd[1434]: time="2025-05-09T00:04:17.662686153Z" level=info msg="StartContainer for \"36ddf081428c78309d4998923d3716afe3cc8e6e7f06c57581be430a3ad9cbb8\" returns successfully" May 9 00:04:18.379965 kubelet[2467]: E0509 00:04:18.379927 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:04:18.396307 kubelet[2467]: I0509 00:04:18.396246 2467 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6f6897fdc5-rjwlz" podStartSLOduration=1.409093268 podStartE2EDuration="3.396228938s" podCreationTimestamp="2025-05-09 00:04:15 +0000 UTC" firstStartedPulling="2025-05-09 00:04:15.586367642 +0000 UTC m=+7.397036933" lastFinishedPulling="2025-05-09 00:04:17.573503312 +0000 UTC m=+9.384172603" observedRunningTime="2025-05-09 00:04:18.350752279 +0000 UTC m=+10.161421570" watchObservedRunningTime="2025-05-09 00:04:18.396228938 +0000 UTC m=+10.206898189" May 9 00:04:19.331294 kubelet[2467]: E0509 00:04:19.331261 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:04:20.116090 update_engine[1422]: I20250509 00:04:20.116019 1422 update_attempter.cc:509] Updating boot flags... May 9 00:04:20.173020 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2860) May 9 00:04:21.196291 systemd[1]: Created slice kubepods-besteffort-pod2b2f34b8_d3ca_4466_a07e_78194d22876b.slice - libcontainer container kubepods-besteffort-pod2b2f34b8_d3ca_4466_a07e_78194d22876b.slice. 
May 9 00:04:21.257740 kubelet[2467]: I0509 00:04:21.257703 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl9bz\" (UniqueName: \"kubernetes.io/projected/2b2f34b8-d3ca-4466-a07e-78194d22876b-kube-api-access-xl9bz\") pod \"calico-typha-56fc77f798-t2fsx\" (UID: \"2b2f34b8-d3ca-4466-a07e-78194d22876b\") " pod="calico-system/calico-typha-56fc77f798-t2fsx" May 9 00:04:21.258049 kubelet[2467]: I0509 00:04:21.257744 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b2f34b8-d3ca-4466-a07e-78194d22876b-tigera-ca-bundle\") pod \"calico-typha-56fc77f798-t2fsx\" (UID: \"2b2f34b8-d3ca-4466-a07e-78194d22876b\") " pod="calico-system/calico-typha-56fc77f798-t2fsx" May 9 00:04:21.258049 kubelet[2467]: I0509 00:04:21.257767 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2b2f34b8-d3ca-4466-a07e-78194d22876b-typha-certs\") pod \"calico-typha-56fc77f798-t2fsx\" (UID: \"2b2f34b8-d3ca-4466-a07e-78194d22876b\") " pod="calico-system/calico-typha-56fc77f798-t2fsx" May 9 00:04:21.263614 systemd[1]: Created slice kubepods-besteffort-pod3d33f6d7_cade_4158_b7ac_bfce716d7679.slice - libcontainer container kubepods-besteffort-pod3d33f6d7_cade_4158_b7ac_bfce716d7679.slice. May 9 00:04:21.358656 kubelet[2467]: I0509 00:04:21.358100 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3d33f6d7-cade-4158-b7ac-bfce716d7679-policysync\") pod \"calico-node-wwn4r\" (UID: \"3d33f6d7-cade-4158-b7ac-bfce716d7679\") " pod="calico-system/calico-node-wwn4r" May 9 00:04:21.358656 kubelet[2467]: I0509 00:04:21.358144 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d33f6d7-cade-4158-b7ac-bfce716d7679-tigera-ca-bundle\") pod \"calico-node-wwn4r\" (UID: \"3d33f6d7-cade-4158-b7ac-bfce716d7679\") " pod="calico-system/calico-node-wwn4r" May 9 00:04:21.358656 kubelet[2467]: I0509 00:04:21.358160 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3d33f6d7-cade-4158-b7ac-bfce716d7679-var-run-calico\") pod \"calico-node-wwn4r\" (UID: \"3d33f6d7-cade-4158-b7ac-bfce716d7679\") " pod="calico-system/calico-node-wwn4r" May 9 00:04:21.358656 kubelet[2467]: I0509 00:04:21.358174 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3d33f6d7-cade-4158-b7ac-bfce716d7679-cni-net-dir\") pod \"calico-node-wwn4r\" (UID: \"3d33f6d7-cade-4158-b7ac-bfce716d7679\") " pod="calico-system/calico-node-wwn4r" May 9 00:04:21.358656 kubelet[2467]: I0509 00:04:21.358240 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3d33f6d7-cade-4158-b7ac-bfce716d7679-var-lib-calico\") pod \"calico-node-wwn4r\" (UID: \"3d33f6d7-cade-4158-b7ac-bfce716d7679\") " pod="calico-system/calico-node-wwn4r" May 9 00:04:21.358884 kubelet[2467]: I0509 00:04:21.358299 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/3d33f6d7-cade-4158-b7ac-bfce716d7679-xtables-lock\") pod \"calico-node-wwn4r\" (UID: \"3d33f6d7-cade-4158-b7ac-bfce716d7679\") " pod="calico-system/calico-node-wwn4r" May 9 00:04:21.358884 kubelet[2467]: I0509 00:04:21.358348 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3d33f6d7-cade-4158-b7ac-bfce716d7679-cni-log-dir\") pod \"calico-node-wwn4r\" (UID: \"3d33f6d7-cade-4158-b7ac-bfce716d7679\") " pod="calico-system/calico-node-wwn4r" May 9 00:04:21.358884 kubelet[2467]: I0509 00:04:21.358367 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3d33f6d7-cade-4158-b7ac-bfce716d7679-flexvol-driver-host\") pod \"calico-node-wwn4r\" (UID: \"3d33f6d7-cade-4158-b7ac-bfce716d7679\") " pod="calico-system/calico-node-wwn4r" May 9 00:04:21.358884 kubelet[2467]: I0509 00:04:21.358404 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d33f6d7-cade-4158-b7ac-bfce716d7679-lib-modules\") pod \"calico-node-wwn4r\" (UID: \"3d33f6d7-cade-4158-b7ac-bfce716d7679\") " pod="calico-system/calico-node-wwn4r" May 9 00:04:21.358884 kubelet[2467]: I0509 00:04:21.358421 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3d33f6d7-cade-4158-b7ac-bfce716d7679-node-certs\") pod \"calico-node-wwn4r\" (UID: \"3d33f6d7-cade-4158-b7ac-bfce716d7679\") " pod="calico-system/calico-node-wwn4r" May 9 00:04:21.359016 kubelet[2467]: I0509 00:04:21.358434 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3d33f6d7-cade-4158-b7ac-bfce716d7679-cni-bin-dir\") pod \"calico-node-wwn4r\" (UID: \"3d33f6d7-cade-4158-b7ac-bfce716d7679\") " pod="calico-system/calico-node-wwn4r" May 9 00:04:21.359016 kubelet[2467]: I0509 00:04:21.358452 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpkk9\" (UniqueName: \"kubernetes.io/projected/3d33f6d7-cade-4158-b7ac-bfce716d7679-kube-api-access-zpkk9\") pod \"calico-node-wwn4r\" (UID: \"3d33f6d7-cade-4158-b7ac-bfce716d7679\") " pod="calico-system/calico-node-wwn4r" May 9 00:04:21.377042 kubelet[2467]: E0509 00:04:21.376961 2467 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sc8g8" podUID="cd2cc3f5-b622-427f-8e40-c278d97d553c" May 9 00:04:21.459108 kubelet[2467]: I0509 00:04:21.458921 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cd2cc3f5-b622-427f-8e40-c278d97d553c-kubelet-dir\") pod \"csi-node-driver-sc8g8\" (UID: \"cd2cc3f5-b622-427f-8e40-c278d97d553c\") " pod="calico-system/csi-node-driver-sc8g8" May 9 00:04:21.459108 kubelet[2467]: I0509 00:04:21.459017 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/cd2cc3f5-b622-427f-8e40-c278d97d553c-registration-dir\") pod 
\"csi-node-driver-sc8g8\" (UID: \"cd2cc3f5-b622-427f-8e40-c278d97d553c\") " pod="calico-system/csi-node-driver-sc8g8" May 9 00:04:21.459108 kubelet[2467]: I0509 00:04:21.459049 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/cd2cc3f5-b622-427f-8e40-c278d97d553c-varrun\") pod \"csi-node-driver-sc8g8\" (UID: \"cd2cc3f5-b622-427f-8e40-c278d97d553c\") " pod="calico-system/csi-node-driver-sc8g8" May 9 00:04:21.459108 kubelet[2467]: I0509 00:04:21.459063 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cd2cc3f5-b622-427f-8e40-c278d97d553c-socket-dir\") pod \"csi-node-driver-sc8g8\" (UID: \"cd2cc3f5-b622-427f-8e40-c278d97d553c\") " pod="calico-system/csi-node-driver-sc8g8" May 9 00:04:21.459108 kubelet[2467]: I0509 00:04:21.459095 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn4dj\" (UniqueName: \"kubernetes.io/projected/cd2cc3f5-b622-427f-8e40-c278d97d553c-kube-api-access-wn4dj\") pod \"csi-node-driver-sc8g8\" (UID: \"cd2cc3f5-b622-427f-8e40-c278d97d553c\") " pod="calico-system/csi-node-driver-sc8g8" May 9 00:04:21.496031 kubelet[2467]: E0509 00:04:21.495963 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:04:21.496031 kubelet[2467]: W0509 00:04:21.496007 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:04:21.496208 kubelet[2467]: E0509 00:04:21.496043 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:04:21.496322 kubelet[2467]: E0509 00:04:21.496309 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:04:21.496322 kubelet[2467]: W0509 00:04:21.496322 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:04:21.496377 kubelet[2467]: E0509 00:04:21.496333 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:04:21.500614 kubelet[2467]: E0509 00:04:21.500510 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:04:21.501383 containerd[1434]: time="2025-05-09T00:04:21.501319502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-56fc77f798-t2fsx,Uid:2b2f34b8-d3ca-4466-a07e-78194d22876b,Namespace:calico-system,Attempt:0,}" May 9 00:04:21.529070 containerd[1434]: time="2025-05-09T00:04:21.528808922Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:04:21.529070 containerd[1434]: time="2025-05-09T00:04:21.528887319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:04:21.529070 containerd[1434]: time="2025-05-09T00:04:21.528899325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:04:21.529070 containerd[1434]: time="2025-05-09T00:04:21.529000213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:04:21.545211 systemd[1]: Started cri-containerd-64f97fb75924d164a98e709f5134acd4ff2e54f15ff6b8ec161071b5f67ee0ff.scope - libcontainer container 64f97fb75924d164a98e709f5134acd4ff2e54f15ff6b8ec161071b5f67ee0ff. May 9 00:04:21.560485 kubelet[2467]: E0509 00:04:21.560329 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:04:21.560485 kubelet[2467]: W0509 00:04:21.560357 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:04:21.560485 kubelet[2467]: E0509 00:04:21.560378 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:04:21.560708 kubelet[2467]: E0509 00:04:21.560695 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:04:21.560765 kubelet[2467]: W0509 00:04:21.560753 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:04:21.560831 kubelet[2467]: E0509 00:04:21.560820 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:04:21.561114 kubelet[2467]: E0509 00:04:21.561081 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:04:21.561114 kubelet[2467]: W0509 00:04:21.561103 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:04:21.561184 kubelet[2467]: E0509 00:04:21.561128 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:04:21.561285 kubelet[2467]: E0509 00:04:21.561258 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:04:21.561285 kubelet[2467]: W0509 00:04:21.561273 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:04:21.561346 kubelet[2467]: E0509 00:04:21.561287 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" [last 3 messages repeated 18 more times, 00:04:21.561417 through 00:04:21.565662; identical repeats elided]
May 9 00:04:21.565826 kubelet[2467]: E0509 00:04:21.565816 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 9 00:04:21.565863 kubelet[2467]: W0509 00:04:21.565826 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 9 00:04:21.565863 kubelet[2467]: E0509 00:04:21.565852 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 00:04:21.566395 kubelet[2467]: E0509 00:04:21.566362 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:04:21.567573 kubelet[2467]: E0509 00:04:21.566655 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:04:21.567573 kubelet[2467]: W0509 00:04:21.566673 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:04:21.567573 kubelet[2467]: E0509 00:04:21.567441 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:04:21.567793 containerd[1434]: time="2025-05-09T00:04:21.566814752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wwn4r,Uid:3d33f6d7-cade-4158-b7ac-bfce716d7679,Namespace:calico-system,Attempt:0,}" May 9 00:04:21.568066 kubelet[2467]: E0509 00:04:21.568047 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:04:21.568066 kubelet[2467]: W0509 00:04:21.568063 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:04:21.568169 kubelet[2467]: E0509 00:04:21.568079 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:04:21.579941 kubelet[2467]: E0509 00:04:21.579871 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:04:21.580265 kubelet[2467]: W0509 00:04:21.579916 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:04:21.580265 kubelet[2467]: E0509 00:04:21.580093 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:04:21.587482 containerd[1434]: time="2025-05-09T00:04:21.587431296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-56fc77f798-t2fsx,Uid:2b2f34b8-d3ca-4466-a07e-78194d22876b,Namespace:calico-system,Attempt:0,} returns sandbox id \"64f97fb75924d164a98e709f5134acd4ff2e54f15ff6b8ec161071b5f67ee0ff\"" May 9 00:04:21.589231 kubelet[2467]: E0509 00:04:21.588546 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:04:21.590040 containerd[1434]: time="2025-05-09T00:04:21.589979390Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 9 00:04:21.602450 containerd[1434]: time="2025-05-09T00:04:21.602023530Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:04:21.602450 containerd[1434]: time="2025-05-09T00:04:21.602335518Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:04:21.602450 containerd[1434]: time="2025-05-09T00:04:21.602348885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:04:21.602653 containerd[1434]: time="2025-05-09T00:04:21.602586758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:04:21.617523 systemd[1]: Started cri-containerd-8393266cd20577e95e0c93dfd839e48d36a9d462f0c3c5d798b9d01a8d22d8bc.scope - libcontainer container 8393266cd20577e95e0c93dfd839e48d36a9d462f0c3c5d798b9d01a8d22d8bc. May 9 00:04:21.647832 containerd[1434]: time="2025-05-09T00:04:21.647778893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wwn4r,Uid:3d33f6d7-cade-4158-b7ac-bfce716d7679,Namespace:calico-system,Attempt:0,} returns sandbox id \"8393266cd20577e95e0c93dfd839e48d36a9d462f0c3c5d798b9d01a8d22d8bc\"" May 9 00:04:21.650075 kubelet[2467]: E0509 00:04:21.648571 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:04:23.282969 kubelet[2467]: E0509 00:04:23.282907 2467 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sc8g8" podUID="cd2cc3f5-b622-427f-8e40-c278d97d553c" May 9 00:04:24.600719 containerd[1434]: time="2025-05-09T00:04:24.600664848Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571" May 9 00:04:24.603844 containerd[1434]: time="2025-05-09T00:04:24.603800138Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 3.01365571s" May 9 00:04:24.603844 containerd[1434]: time="2025-05-09T00:04:24.603843796Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" May 9 00:04:24.605707 containerd[1434]: time="2025-05-09T00:04:24.605103554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 9 00:04:24.605925 containerd[1434]: time="2025-05-09T00:04:24.605875072Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:04:24.606785 containerd[1434]: time="2025-05-09T00:04:24.606742309Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:04:24.607528 containerd[1434]: time="2025-05-09T00:04:24.607482173Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:04:24.615605 containerd[1434]: time="2025-05-09T00:04:24.615573863Z" level=info msg="CreateContainer within sandbox \"64f97fb75924d164a98e709f5134acd4ff2e54f15ff6b8ec161071b5f67ee0ff\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 9 00:04:24.634420 containerd[1434]: time="2025-05-09T00:04:24.634371318Z" level=info msg="CreateContainer within sandbox \"64f97fb75924d164a98e709f5134acd4ff2e54f15ff6b8ec161071b5f67ee0ff\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"314e6071dbbfb8059d9b159519455ddf13542bd545cd46f7c2a4601182c4d36b\"" May 9 00:04:24.638344 containerd[1434]: time="2025-05-09T00:04:24.638309018Z" level=info msg="StartContainer for \"314e6071dbbfb8059d9b159519455ddf13542bd545cd46f7c2a4601182c4d36b\"" May 9 00:04:24.664158 systemd[1]: Started cri-containerd-314e6071dbbfb8059d9b159519455ddf13542bd545cd46f7c2a4601182c4d36b.scope - libcontainer container 314e6071dbbfb8059d9b159519455ddf13542bd545cd46f7c2a4601182c4d36b. May 9 00:04:24.727382 containerd[1434]: time="2025-05-09T00:04:24.727329609Z" level=info msg="StartContainer for \"314e6071dbbfb8059d9b159519455ddf13542bd545cd46f7c2a4601182c4d36b\" returns successfully" May 9 00:04:25.282201 kubelet[2467]: E0509 00:04:25.282143 2467 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sc8g8" podUID="cd2cc3f5-b622-427f-8e40-c278d97d553c" May 9 00:04:25.350223 kubelet[2467]: E0509 00:04:25.350191 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:04:25.360555 kubelet[2467]: I0509 00:04:25.360404 2467 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-56fc77f798-t2fsx" podStartSLOduration=1.345397131 podStartE2EDuration="4.360387384s" podCreationTimestamp="2025-05-09 00:04:21 +0000 UTC" firstStartedPulling="2025-05-09 00:04:21.589598409 +0000 UTC m=+13.400267700" lastFinishedPulling="2025-05-09 00:04:24.604588662 +0000 UTC m=+16.415257953" observedRunningTime="2025-05-09 00:04:25.360101232 +0000 UTC m=+17.170770523" watchObservedRunningTime="2025-05-09 00:04:25.360387384 +0000 UTC m=+17.171056675" May 9 00:04:25.372250 kubelet[2467]: E0509 00:04:25.372140 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:04:25.372250 kubelet[2467]: W0509 00:04:25.372164 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:04:25.372250 kubelet[2467]: E0509 00:04:25.372183 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" [last 3 messages repeated 29 more times, 00:04:25.372388 through 00:04:25.399039; identical repeats elided]
May 9 00:04:25.399450 kubelet[2467]: E0509 00:04:25.399219 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 9 00:04:25.399450 kubelet[2467]: W0509 00:04:25.399233 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 9 00:04:25.399450 kubelet[2467]: E0509 00:04:25.399242 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 00:04:25.399450 kubelet[2467]: E0509 00:04:25.399467 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:04:25.399598 kubelet[2467]: W0509 00:04:25.399477 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:04:25.399598 kubelet[2467]: E0509 00:04:25.399507 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:04:25.400048 kubelet[2467]: E0509 00:04:25.400019 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:04:25.400048 kubelet[2467]: W0509 00:04:25.400036 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:04:25.400121 kubelet[2467]: E0509 00:04:25.400047 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:04:25.585051 containerd[1434]: time="2025-05-09T00:04:25.583036884Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:04:25.589534 containerd[1434]: time="2025-05-09T00:04:25.589483855Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" May 9 00:04:25.591536 containerd[1434]: time="2025-05-09T00:04:25.590479005Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:04:25.592785 containerd[1434]: time="2025-05-09T00:04:25.592727168Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:04:25.593672 containerd[1434]: time="2025-05-09T00:04:25.593621238Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 988.48199ms" May 9 00:04:25.593672 containerd[1434]: time="2025-05-09T00:04:25.593661614Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 9 00:04:25.597806 containerd[1434]: time="2025-05-09T00:04:25.597775749Z" level=info msg="CreateContainer within sandbox \"8393266cd20577e95e0c93dfd839e48d36a9d462f0c3c5d798b9d01a8d22d8bc\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 9 00:04:25.617119 containerd[1434]: time="2025-05-09T00:04:25.617053595Z" level=info msg="CreateContainer within sandbox 
\"8393266cd20577e95e0c93dfd839e48d36a9d462f0c3c5d798b9d01a8d22d8bc\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d6fc07324e459255a176c0e94ca56ddf9c9c20665ab5f933bc8e7577865fcfab\"" May 9 00:04:25.618456 containerd[1434]: time="2025-05-09T00:04:25.617757471Z" level=info msg="StartContainer for \"d6fc07324e459255a176c0e94ca56ddf9c9c20665ab5f933bc8e7577865fcfab\"" May 9 00:04:25.664224 systemd[1]: Started cri-containerd-d6fc07324e459255a176c0e94ca56ddf9c9c20665ab5f933bc8e7577865fcfab.scope - libcontainer container d6fc07324e459255a176c0e94ca56ddf9c9c20665ab5f933bc8e7577865fcfab. May 9 00:04:25.694784 containerd[1434]: time="2025-05-09T00:04:25.694728119Z" level=info msg="StartContainer for \"d6fc07324e459255a176c0e94ca56ddf9c9c20665ab5f933bc8e7577865fcfab\" returns successfully" May 9 00:04:25.720124 systemd[1]: cri-containerd-d6fc07324e459255a176c0e94ca56ddf9c9c20665ab5f933bc8e7577865fcfab.scope: Deactivated successfully. May 9 00:04:25.752092 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d6fc07324e459255a176c0e94ca56ddf9c9c20665ab5f933bc8e7577865fcfab-rootfs.mount: Deactivated successfully. May 9 00:04:25.781441 containerd[1434]: time="2025-05-09T00:04:25.778925082Z" level=info msg="shim disconnected" id=d6fc07324e459255a176c0e94ca56ddf9c9c20665ab5f933bc8e7577865fcfab namespace=k8s.io May 9 00:04:25.781795 containerd[1434]: time="2025-05-09T00:04:25.781605214Z" level=warning msg="cleaning up after shim disconnected" id=d6fc07324e459255a176c0e94ca56ddf9c9c20665ab5f933bc8e7577865fcfab namespace=k8s.io May 9 00:04:25.781795 containerd[1434]: time="2025-05-09T00:04:25.781625262Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:04:26.353817 kubelet[2467]: I0509 00:04:26.353750 2467 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 9 00:04:26.355805 kubelet[2467]: E0509 00:04:26.354506 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:04:26.355805 kubelet[2467]: E0509 00:04:26.354744 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:04:26.356788 containerd[1434]: time="2025-05-09T00:04:26.356383610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 9 00:04:27.282758 kubelet[2467]: E0509 00:04:27.282116 2467 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sc8g8" podUID="cd2cc3f5-b622-427f-8e40-c278d97d553c" May 9 00:04:29.282178 kubelet[2467]: E0509 00:04:29.282130 2467 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sc8g8" podUID="cd2cc3f5-b622-427f-8e40-c278d97d553c" May 9 00:04:29.552186 containerd[1434]: time="2025-05-09T00:04:29.552073202Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:04:29.552635 containerd[1434]: time="2025-05-09T00:04:29.552601094Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" May 9 00:04:29.553890 containerd[1434]: time="2025-05-09T00:04:29.553850424Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:04:29.555844 containerd[1434]: time="2025-05-09T00:04:29.555797221Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:04:29.556549 containerd[1434]: time="2025-05-09T00:04:29.556517417Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 3.200095032s" May 9 00:04:29.556617 containerd[1434]: time="2025-05-09T00:04:29.556550508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" May 9 00:04:29.559946 containerd[1434]: time="2025-05-09T00:04:29.559873156Z" level=info msg="CreateContainer within sandbox \"8393266cd20577e95e0c93dfd839e48d36a9d462f0c3c5d798b9d01a8d22d8bc\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 9 00:04:29.571193 containerd[1434]: time="2025-05-09T00:04:29.571146048Z" level=info msg="CreateContainer within sandbox \"8393266cd20577e95e0c93dfd839e48d36a9d462f0c3c5d798b9d01a8d22d8bc\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"47de799053bd23ee0cf47079471ef1fbb3b1db46a56847750d5ad04640d618e7\"" May 9 00:04:29.571624 containerd[1434]: time="2025-05-09T00:04:29.571585272Z" level=info msg="StartContainer for \"47de799053bd23ee0cf47079471ef1fbb3b1db46a56847750d5ad04640d618e7\"" May 9 00:04:29.609179 systemd[1]: Started cri-containerd-47de799053bd23ee0cf47079471ef1fbb3b1db46a56847750d5ad04640d618e7.scope - libcontainer container 47de799053bd23ee0cf47079471ef1fbb3b1db46a56847750d5ad04640d618e7. May 9 00:04:29.635107 containerd[1434]: time="2025-05-09T00:04:29.635061901Z" level=info msg="StartContainer for \"47de799053bd23ee0cf47079471ef1fbb3b1db46a56847750d5ad04640d618e7\" returns successfully" May 9 00:04:30.320614 systemd[1]: cri-containerd-47de799053bd23ee0cf47079471ef1fbb3b1db46a56847750d5ad04640d618e7.scope: Deactivated successfully. May 9 00:04:30.339706 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47de799053bd23ee0cf47079471ef1fbb3b1db46a56847750d5ad04640d618e7-rootfs.mount: Deactivated successfully. 
May 9 00:04:30.347889 kubelet[2467]: I0509 00:04:30.347860 2467 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 9 00:04:30.357851 containerd[1434]: time="2025-05-09T00:04:30.357498275Z" level=info msg="shim disconnected" id=47de799053bd23ee0cf47079471ef1fbb3b1db46a56847750d5ad04640d618e7 namespace=k8s.io May 9 00:04:30.357851 containerd[1434]: time="2025-05-09T00:04:30.357848945Z" level=warning msg="cleaning up after shim disconnected" id=47de799053bd23ee0cf47079471ef1fbb3b1db46a56847750d5ad04640d618e7 namespace=k8s.io May 9 00:04:30.358055 containerd[1434]: time="2025-05-09T00:04:30.357863709Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:04:30.365851 kubelet[2467]: E0509 00:04:30.363085 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:04:30.380339 containerd[1434]: time="2025-05-09T00:04:30.380283223Z" level=warning msg="cleanup warnings time=\"2025-05-09T00:04:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 9 00:04:30.390698 systemd[1]: Created slice kubepods-burstable-pod61e5788f_c1c1_468b_a0b1_b88de0f0fb58.slice - libcontainer container kubepods-burstable-pod61e5788f_c1c1_468b_a0b1_b88de0f0fb58.slice. May 9 00:04:30.398010 systemd[1]: Created slice kubepods-burstable-pod449f9335_cf62_43ca_b5aa_48636c5af7c8.slice - libcontainer container kubepods-burstable-pod449f9335_cf62_43ca_b5aa_48636c5af7c8.slice. May 9 00:04:30.404139 systemd[1]: Created slice kubepods-besteffort-podf38b28a9_35a2_49f2_8623_f72cdb69aaa0.slice - libcontainer container kubepods-besteffort-podf38b28a9_35a2_49f2_8623_f72cdb69aaa0.slice. May 9 00:04:30.410005 systemd[1]: Created slice kubepods-besteffort-pod1dfd2307_f708_4433_a5bd_771cc97fedba.slice - libcontainer container kubepods-besteffort-pod1dfd2307_f708_4433_a5bd_771cc97fedba.slice. May 9 00:04:30.416417 systemd[1]: Created slice kubepods-besteffort-pod4be101ac_b076_41a6_bc24_cc2df8624f75.slice - libcontainer container kubepods-besteffort-pod4be101ac_b076_41a6_bc24_cc2df8624f75.slice. 
May 9 00:04:30.535342 kubelet[2467]: I0509 00:04:30.535179 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdvlx\" (UniqueName: \"kubernetes.io/projected/4be101ac-b076-41a6-bc24-cc2df8624f75-kube-api-access-xdvlx\") pod \"calico-apiserver-6f76859ff6-dh6dh\" (UID: \"4be101ac-b076-41a6-bc24-cc2df8624f75\") " pod="calico-apiserver/calico-apiserver-6f76859ff6-dh6dh" May 9 00:04:30.535342 kubelet[2467]: I0509 00:04:30.535225 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8plrp\" (UniqueName: \"kubernetes.io/projected/449f9335-cf62-43ca-b5aa-48636c5af7c8-kube-api-access-8plrp\") pod \"coredns-6f6b679f8f-hhhps\" (UID: \"449f9335-cf62-43ca-b5aa-48636c5af7c8\") " pod="kube-system/coredns-6f6b679f8f-hhhps" May 9 00:04:30.535342 kubelet[2467]: I0509 00:04:30.535246 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/61e5788f-c1c1-468b-a0b1-b88de0f0fb58-config-volume\") pod \"coredns-6f6b679f8f-qhkqc\" (UID: \"61e5788f-c1c1-468b-a0b1-b88de0f0fb58\") " pod="kube-system/coredns-6f6b679f8f-qhkqc" May 9 00:04:30.535342 kubelet[2467]: I0509 00:04:30.535281 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f38b28a9-35a2-49f2-8623-f72cdb69aaa0-tigera-ca-bundle\") pod \"calico-kube-controllers-cb8c949c8-7dmfb\" (UID: \"f38b28a9-35a2-49f2-8623-f72cdb69aaa0\") " pod="calico-system/calico-kube-controllers-cb8c949c8-7dmfb" May 9 00:04:30.535342 kubelet[2467]: I0509 00:04:30.535302 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/449f9335-cf62-43ca-b5aa-48636c5af7c8-config-volume\") pod \"coredns-6f6b679f8f-hhhps\" (UID: \"449f9335-cf62-43ca-b5aa-48636c5af7c8\") " pod="kube-system/coredns-6f6b679f8f-hhhps" May 9 00:04:30.535568 kubelet[2467]: I0509 00:04:30.535364 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4be101ac-b076-41a6-bc24-cc2df8624f75-calico-apiserver-certs\") pod \"calico-apiserver-6f76859ff6-dh6dh\" (UID: \"4be101ac-b076-41a6-bc24-cc2df8624f75\") " pod="calico-apiserver/calico-apiserver-6f76859ff6-dh6dh" May 9 00:04:30.535568 kubelet[2467]: I0509 00:04:30.535400 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1dfd2307-f708-4433-a5bd-771cc97fedba-calico-apiserver-certs\") pod \"calico-apiserver-6f76859ff6-z8gcd\" (UID: \"1dfd2307-f708-4433-a5bd-771cc97fedba\") " pod="calico-apiserver/calico-apiserver-6f76859ff6-z8gcd" May 9 00:04:30.535568 kubelet[2467]: I0509 00:04:30.535438 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzn2f\" (UniqueName: \"kubernetes.io/projected/61e5788f-c1c1-468b-a0b1-b88de0f0fb58-kube-api-access-gzn2f\") pod \"coredns-6f6b679f8f-qhkqc\" (UID: \"61e5788f-c1c1-468b-a0b1-b88de0f0fb58\") " pod="kube-system/coredns-6f6b679f8f-qhkqc" May 9 00:04:30.535761 kubelet[2467]: I0509 00:04:30.535734 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfpl7\" (UniqueName: 
\"kubernetes.io/projected/f38b28a9-35a2-49f2-8623-f72cdb69aaa0-kube-api-access-tfpl7\") pod \"calico-kube-controllers-cb8c949c8-7dmfb\" (UID: \"f38b28a9-35a2-49f2-8623-f72cdb69aaa0\") " pod="calico-system/calico-kube-controllers-cb8c949c8-7dmfb" May 9 00:04:30.536537 kubelet[2467]: I0509 00:04:30.536121 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m62nf\" (UniqueName: \"kubernetes.io/projected/1dfd2307-f708-4433-a5bd-771cc97fedba-kube-api-access-m62nf\") pod \"calico-apiserver-6f76859ff6-z8gcd\" (UID: \"1dfd2307-f708-4433-a5bd-771cc97fedba\") " pod="calico-apiserver/calico-apiserver-6f76859ff6-z8gcd" May 9 00:04:30.695720 kubelet[2467]: E0509 00:04:30.695269 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:04:30.696067 containerd[1434]: time="2025-05-09T00:04:30.695980427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qhkqc,Uid:61e5788f-c1c1-468b-a0b1-b88de0f0fb58,Namespace:kube-system,Attempt:0,}" May 9 00:04:30.701972 kubelet[2467]: E0509 00:04:30.701935 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:04:30.703230 containerd[1434]: time="2025-05-09T00:04:30.703179925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hhhps,Uid:449f9335-cf62-43ca-b5aa-48636c5af7c8,Namespace:kube-system,Attempt:0,}" May 9 00:04:30.708128 containerd[1434]: time="2025-05-09T00:04:30.708087905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cb8c949c8-7dmfb,Uid:f38b28a9-35a2-49f2-8623-f72cdb69aaa0,Namespace:calico-system,Attempt:0,}" May 9 00:04:30.715094 containerd[1434]: time="2025-05-09T00:04:30.715023041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f76859ff6-z8gcd,Uid:1dfd2307-f708-4433-a5bd-771cc97fedba,Namespace:calico-apiserver,Attempt:0,}" May 9 00:04:30.718806 containerd[1434]: time="2025-05-09T00:04:30.718769056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f76859ff6-dh6dh,Uid:4be101ac-b076-41a6-bc24-cc2df8624f75,Namespace:calico-apiserver,Attempt:0,}" May 9 00:04:31.135496 containerd[1434]: time="2025-05-09T00:04:31.135427015Z" level=error msg="Failed to destroy network for sandbox \"25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:04:31.135813 containerd[1434]: time="2025-05-09T00:04:31.135782082Z" level=error msg="encountered an error cleaning up failed sandbox \"25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:04:31.135875 containerd[1434]: time="2025-05-09T00:04:31.135843260Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hhhps,Uid:449f9335-cf62-43ca-b5aa-48636c5af7c8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:04:31.136151 containerd[1434]: time="2025-05-09T00:04:31.136112781Z" level=error msg="Failed to destroy network for sandbox \"fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:04:31.137228 containerd[1434]: time="2025-05-09T00:04:31.136405149Z" level=error msg="encountered an error cleaning up failed sandbox \"fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:04:31.137228 containerd[1434]: time="2025-05-09T00:04:31.136464327Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f76859ff6-z8gcd,Uid:1dfd2307-f708-4433-a5bd-771cc97fedba,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:04:31.138773 kubelet[2467]: E0509 00:04:31.138707 2467 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:04:31.138891 kubelet[2467]: E0509 00:04:31.138819 2467 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hhhps" May 9 00:04:31.138891 kubelet[2467]: E0509 00:04:31.138843 2467 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hhhps" May 9 00:04:31.138953 kubelet[2467]: E0509 00:04:31.138908 2467 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-hhhps_kube-system(449f9335-cf62-43ca-b5aa-48636c5af7c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-hhhps_kube-system(449f9335-cf62-43ca-b5aa-48636c5af7c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-hhhps" podUID="449f9335-cf62-43ca-b5aa-48636c5af7c8" May 9 00:04:31.139091 kubelet[2467]: E0509 00:04:31.138911 2467 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:04:31.139091 kubelet[2467]: E0509 00:04:31.138972 2467 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f76859ff6-z8gcd" May 9 00:04:31.139091 kubelet[2467]: E0509 00:04:31.139010 2467 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f76859ff6-z8gcd" May 9 00:04:31.139180 kubelet[2467]: E0509 00:04:31.139043 2467 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f76859ff6-z8gcd_calico-apiserver(1dfd2307-f708-4433-a5bd-771cc97fedba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f76859ff6-z8gcd_calico-apiserver(1dfd2307-f708-4433-a5bd-771cc97fedba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f76859ff6-z8gcd" podUID="1dfd2307-f708-4433-a5bd-771cc97fedba" May 9 00:04:31.142592 containerd[1434]: time="2025-05-09T00:04:31.142382787Z" level=error msg="Failed to destroy network for sandbox \"2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:04:31.142927 containerd[1434]: time="2025-05-09T00:04:31.142854049Z" level=error msg="encountered an error cleaning up failed sandbox \"2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:04:31.142927 containerd[1434]: time="2025-05-09T00:04:31.142900263Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cb8c949c8-7dmfb,Uid:f38b28a9-35a2-49f2-8623-f72cdb69aaa0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:04:31.143299 kubelet[2467]: E0509 00:04:31.143225 2467 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:04:31.143544 kubelet[2467]: E0509 00:04:31.143298 2467 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cb8c949c8-7dmfb" May 9 00:04:31.143544 kubelet[2467]: E0509 00:04:31.143318 2467 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cb8c949c8-7dmfb" May 9 00:04:31.143544 kubelet[2467]: E0509 00:04:31.143352 2467 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-cb8c949c8-7dmfb_calico-system(f38b28a9-35a2-49f2-8623-f72cdb69aaa0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-cb8c949c8-7dmfb_calico-system(f38b28a9-35a2-49f2-8623-f72cdb69aaa0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-cb8c949c8-7dmfb" podUID="f38b28a9-35a2-49f2-8623-f72cdb69aaa0" May 9 00:04:31.144585 containerd[1434]: time="2025-05-09T00:04:31.144314368Z" level=error msg="Failed to destroy network for sandbox \"f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:04:31.144661 containerd[1434]: time="2025-05-09T00:04:31.144619980Z" level=error msg="encountered an error cleaning up failed sandbox \"f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" May 9 00:04:31.144693 containerd[1434]: time="2025-05-09T00:04:31.144659912Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f76859ff6-dh6dh,Uid:4be101ac-b076-41a6-bc24-cc2df8624f75,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:04:31.145309 kubelet[2467]: E0509 00:04:31.145159 2467 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:04:31.145309 kubelet[2467]: E0509 00:04:31.145213 2467 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f76859ff6-dh6dh" May 9 00:04:31.145309 kubelet[2467]: E0509 00:04:31.145235 2467 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f76859ff6-dh6dh" May 9 00:04:31.145425 kubelet[2467]: E0509 00:04:31.145269 2467 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f76859ff6-dh6dh_calico-apiserver(4be101ac-b076-41a6-bc24-cc2df8624f75)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f76859ff6-dh6dh_calico-apiserver(4be101ac-b076-41a6-bc24-cc2df8624f75)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f76859ff6-dh6dh" podUID="4be101ac-b076-41a6-bc24-cc2df8624f75" May 9 00:04:31.147568 containerd[1434]: time="2025-05-09T00:04:31.147535337Z" level=error msg="Failed to destroy network for sandbox \"f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:04:31.147823 containerd[1434]: time="2025-05-09T00:04:31.147799057Z" level=error msg="encountered an error cleaning up failed sandbox \"f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1\", marking sandbox state as SANDBOX_UNKNOWN" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:04:31.147863 containerd[1434]: time="2025-05-09T00:04:31.147840749Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qhkqc,Uid:61e5788f-c1c1-468b-a0b1-b88de0f0fb58,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:04:31.148055 kubelet[2467]: E0509 00:04:31.148011 2467 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:04:31.148111 kubelet[2467]: E0509 00:04:31.148055 2467 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-qhkqc" May 9 00:04:31.148111 kubelet[2467]: E0509 00:04:31.148071 2467 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-qhkqc" May 9 00:04:31.148111 kubelet[2467]: E0509 00:04:31.148097 2467 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-qhkqc_kube-system(61e5788f-c1c1-468b-a0b1-b88de0f0fb58)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-qhkqc_kube-system(61e5788f-c1c1-468b-a0b1-b88de0f0fb58)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-qhkqc" podUID="61e5788f-c1c1-468b-a0b1-b88de0f0fb58" May 9 00:04:31.287869 systemd[1]: Created slice kubepods-besteffort-podcd2cc3f5_b622_427f_8e40_c278d97d553c.slice - libcontainer container kubepods-besteffort-podcd2cc3f5_b622_427f_8e40_c278d97d553c.slice. 
May 9 00:04:31.290612 containerd[1434]: time="2025-05-09T00:04:31.290571484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sc8g8,Uid:cd2cc3f5-b622-427f-8e40-c278d97d553c,Namespace:calico-system,Attempt:0,}" May 9 00:04:31.342405 containerd[1434]: time="2025-05-09T00:04:31.342293242Z" level=error msg="Failed to destroy network for sandbox \"137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:04:31.342877 containerd[1434]: time="2025-05-09T00:04:31.342724852Z" level=error msg="encountered an error cleaning up failed sandbox \"137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:04:31.342877 containerd[1434]: time="2025-05-09T00:04:31.342783830Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sc8g8,Uid:cd2cc3f5-b622-427f-8e40-c278d97d553c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:04:31.343070 kubelet[2467]: E0509 00:04:31.343008 2467 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:04:31.343129 kubelet[2467]: E0509 00:04:31.343078 2467 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sc8g8" May 9 00:04:31.343129 kubelet[2467]: E0509 00:04:31.343097 2467 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sc8g8" May 9 00:04:31.343191 kubelet[2467]: E0509 00:04:31.343133 2467 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-sc8g8_calico-system(cd2cc3f5-b622-427f-8e40-c278d97d553c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-sc8g8_calico-system(cd2cc3f5-b622-427f-8e40-c278d97d553c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sc8g8" podUID="cd2cc3f5-b622-427f-8e40-c278d97d553c" May 9 00:04:31.366548 kubelet[2467]: E0509 00:04:31.366505 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:04:31.367671 containerd[1434]: time="2025-05-09T00:04:31.367639306Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 9 00:04:31.368307 kubelet[2467]: I0509 00:04:31.367891 2467 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0" May 9 00:04:31.368468 containerd[1434]: time="2025-05-09T00:04:31.368438387Z" level=info msg="StopPodSandbox for \"137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0\"" May 9 00:04:31.368689 containerd[1434]: time="2025-05-09T00:04:31.368604317Z" level=info msg="Ensure that sandbox 137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0 in task-service has been cleanup successfully" May 9 00:04:31.369979 kubelet[2467]: I0509 00:04:31.369926 2467 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376" May 9 00:04:31.370582 containerd[1434]: time="2025-05-09T00:04:31.370534297Z" level=info msg="StopPodSandbox for \"fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376\"" May 9 00:04:31.370745 containerd[1434]: time="2025-05-09T00:04:31.370710790Z" level=info msg="Ensure that sandbox fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376 in task-service has been cleanup successfully" May 9 00:04:31.371280 kubelet[2467]: I0509 00:04:31.371179 2467 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27" May 9 00:04:31.375644 containerd[1434]: time="2025-05-09T00:04:31.372253935Z" level=info msg="StopPodSandbox for \"2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27\"" May 9 00:04:31.375644 containerd[1434]: time="2025-05-09T00:04:31.372410062Z" level=info msg="Ensure that sandbox 2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27 in task-service has been cleanup successfully" May 9 00:04:31.379232 kubelet[2467]: I0509 00:04:31.378750 2467 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6" May 9 00:04:31.379697 containerd[1434]: time="2025-05-09T00:04:31.379661363Z" level=info msg="StopPodSandbox for \"25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6\"" May 9 00:04:31.380128 containerd[1434]: time="2025-05-09T00:04:31.380086131Z" level=info msg="Ensure that sandbox 25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6 in task-service has been cleanup successfully" May 9 00:04:31.380802 kubelet[2467]: I0509 00:04:31.380781 2467 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227" May 9 00:04:31.384069 containerd[1434]: time="2025-05-09T00:04:31.382447401Z" level=info msg="StopPodSandbox for \"f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227\"" May 9 
00:04:31.384069 containerd[1434]: time="2025-05-09T00:04:31.382905019Z" level=info msg="Ensure that sandbox f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227 in task-service has been cleanup successfully" May 9 00:04:31.387559 kubelet[2467]: I0509 00:04:31.386486 2467 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1" May 9 00:04:31.389398 containerd[1434]: time="2025-05-09T00:04:31.389365482Z" level=info msg="StopPodSandbox for \"f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1\"" May 9 00:04:31.389564 containerd[1434]: time="2025-05-09T00:04:31.389526930Z" level=info msg="Ensure that sandbox f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1 in task-service has been cleanup successfully" May 9 00:04:31.421259 containerd[1434]: time="2025-05-09T00:04:31.421199538Z" level=error msg="StopPodSandbox for \"137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0\" failed" error="failed to destroy network for sandbox \"137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:04:31.421718 kubelet[2467]: E0509 00:04:31.421531 2467 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0" May 9 00:04:31.421718 kubelet[2467]: E0509 00:04:31.421597 2467 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0"} May 9 00:04:31.421718 kubelet[2467]: E0509 00:04:31.421655 2467 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cd2cc3f5-b622-427f-8e40-c278d97d553c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 9 00:04:31.421718 kubelet[2467]: E0509 00:04:31.421683 2467 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cd2cc3f5-b622-427f-8e40-c278d97d553c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sc8g8" podUID="cd2cc3f5-b622-427f-8e40-c278d97d553c" May 9 00:04:31.432564 containerd[1434]: time="2025-05-09T00:04:31.432500977Z" level=error msg="StopPodSandbox for \"2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27\" failed" error="failed to destroy network for sandbox 
\"2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:04:31.433095 containerd[1434]: time="2025-05-09T00:04:31.432554193Z" level=error msg="StopPodSandbox for \"fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376\" failed" error="failed to destroy network for sandbox \"fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:04:31.433095 containerd[1434]: time="2025-05-09T00:04:31.432500697Z" level=error msg="StopPodSandbox for \"f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227\" failed" error="failed to destroy network for sandbox \"f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:04:31.433149 kubelet[2467]: E0509 00:04:31.432782 2467 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27" May 9 00:04:31.433149 kubelet[2467]: E0509 00:04:31.432840 2467 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27"} May 9 00:04:31.433149 kubelet[2467]: E0509 00:04:31.432873 2467 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f38b28a9-35a2-49f2-8623-f72cdb69aaa0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 9 00:04:31.433149 kubelet[2467]: E0509 00:04:31.432874 2467 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227" May 9 00:04:31.433311 kubelet[2467]: E0509 00:04:31.432893 2467 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f38b28a9-35a2-49f2-8623-f72cdb69aaa0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-cb8c949c8-7dmfb" podUID="f38b28a9-35a2-49f2-8623-f72cdb69aaa0" May 9 00:04:31.433311 kubelet[2467]: E0509 00:04:31.432908 2467 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227"} May 9 00:04:31.433311 kubelet[2467]: E0509 00:04:31.432924 2467 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376" May 9 00:04:31.433311 kubelet[2467]: E0509 00:04:31.432941 2467 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376"} May 9 00:04:31.433311 kubelet[2467]: E0509 00:04:31.432941 2467 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4be101ac-b076-41a6-bc24-cc2df8624f75\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 9 00:04:31.433455 kubelet[2467]: E0509 00:04:31.432959 2467 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1dfd2307-f708-4433-a5bd-771cc97fedba\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 9 00:04:31.433455 kubelet[2467]: E0509 00:04:31.432964 2467 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4be101ac-b076-41a6-bc24-cc2df8624f75\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f76859ff6-dh6dh" podUID="4be101ac-b076-41a6-bc24-cc2df8624f75" May 9 00:04:31.433455 kubelet[2467]: E0509 00:04:31.432975 2467 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1dfd2307-f708-4433-a5bd-771cc97fedba\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f76859ff6-z8gcd" 
podUID="1dfd2307-f708-4433-a5bd-771cc97fedba" May 9 00:04:31.440342 containerd[1434]: time="2025-05-09T00:04:31.440289720Z" level=error msg="StopPodSandbox for \"25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6\" failed" error="failed to destroy network for sandbox \"25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:04:31.440525 kubelet[2467]: E0509 00:04:31.440492 2467 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6" May 9 00:04:31.440577 kubelet[2467]: E0509 00:04:31.440538 2467 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6"} May 9 00:04:31.440615 kubelet[2467]: E0509 00:04:31.440583 2467 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"449f9335-cf62-43ca-b5aa-48636c5af7c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 9 00:04:31.440663 kubelet[2467]: E0509 00:04:31.440611 2467 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"449f9335-cf62-43ca-b5aa-48636c5af7c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-hhhps" podUID="449f9335-cf62-43ca-b5aa-48636c5af7c8" May 9 00:04:31.443519 containerd[1434]: time="2025-05-09T00:04:31.443473118Z" level=error msg="StopPodSandbox for \"f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1\" failed" error="failed to destroy network for sandbox \"f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:04:31.443724 kubelet[2467]: E0509 00:04:31.443682 2467 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1" May 9 00:04:31.443761 kubelet[2467]: E0509 
00:04:31.443727 2467 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1"} May 9 00:04:31.443805 kubelet[2467]: E0509 00:04:31.443761 2467 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"61e5788f-c1c1-468b-a0b1-b88de0f0fb58\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 9 00:04:31.443805 kubelet[2467]: E0509 00:04:31.443781 2467 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"61e5788f-c1c1-468b-a0b1-b88de0f0fb58\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-qhkqc" podUID="61e5788f-c1c1-468b-a0b1-b88de0f0fb58" May 9 00:04:32.301292 kubelet[2467]: I0509 00:04:32.301255 2467 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 9 00:04:32.302052 kubelet[2467]: E0509 00:04:32.301586 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:04:32.388953 kubelet[2467]: E0509 00:04:32.388912 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:04:34.301824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1379999332.mount: Deactivated successfully. 
May 9 00:04:34.539962 containerd[1434]: time="2025-05-09T00:04:34.539899573Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:04:34.540809 containerd[1434]: time="2025-05-09T00:04:34.540773966Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" May 9 00:04:34.541631 containerd[1434]: time="2025-05-09T00:04:34.541577741Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:04:34.543209 containerd[1434]: time="2025-05-09T00:04:34.543157202Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:04:34.543903 containerd[1434]: time="2025-05-09T00:04:34.543869552Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 3.176186712s" May 9 00:04:34.543966 containerd[1434]: time="2025-05-09T00:04:34.543906922Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" May 9 00:04:34.553106 containerd[1434]: time="2025-05-09T00:04:34.551249160Z" level=info msg="CreateContainer within sandbox \"8393266cd20577e95e0c93dfd839e48d36a9d462f0c3c5d798b9d01a8d22d8bc\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 9 00:04:34.567874 containerd[1434]: time="2025-05-09T00:04:34.567823300Z" level=info msg="CreateContainer within sandbox \"8393266cd20577e95e0c93dfd839e48d36a9d462f0c3c5d798b9d01a8d22d8bc\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3df1070b91d5b5e806b47fb0fa18ff9a6eac4b20ce7fad152608e8befb384c54\"" May 9 00:04:34.568867 containerd[1434]: time="2025-05-09T00:04:34.568304068Z" level=info msg="StartContainer for \"3df1070b91d5b5e806b47fb0fa18ff9a6eac4b20ce7fad152608e8befb384c54\"" May 9 00:04:34.621153 systemd[1]: Started cri-containerd-3df1070b91d5b5e806b47fb0fa18ff9a6eac4b20ce7fad152608e8befb384c54.scope - libcontainer container 3df1070b91d5b5e806b47fb0fa18ff9a6eac4b20ce7fad152608e8befb384c54. May 9 00:04:34.651758 containerd[1434]: time="2025-05-09T00:04:34.651703270Z" level=info msg="StartContainer for \"3df1070b91d5b5e806b47fb0fa18ff9a6eac4b20ce7fad152608e8befb384c54\" returns successfully" May 9 00:04:34.866473 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 9 00:04:34.866754 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
May 9 00:04:35.396301 kubelet[2467]: E0509 00:04:35.396250 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:04:35.412890 kubelet[2467]: I0509 00:04:35.412580 2467 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-wwn4r" podStartSLOduration=1.517409751 podStartE2EDuration="14.412562155s" podCreationTimestamp="2025-05-09 00:04:21 +0000 UTC" firstStartedPulling="2025-05-09 00:04:21.649710013 +0000 UTC m=+13.460379304" lastFinishedPulling="2025-05-09 00:04:34.544862457 +0000 UTC m=+26.355531708" observedRunningTime="2025-05-09 00:04:35.411660004 +0000 UTC m=+27.222329295" watchObservedRunningTime="2025-05-09 00:04:35.412562155 +0000 UTC m=+27.223231446" May 9 00:04:36.352050 kernel: bpftool[3800]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 9 00:04:36.399217 kubelet[2467]: E0509 00:04:36.398155 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:04:36.534388 systemd-networkd[1372]: vxlan.calico: Link UP May 9 00:04:36.534394 systemd-networkd[1372]: vxlan.calico: Gained carrier May 9 00:04:37.908643 systemd-networkd[1372]: vxlan.calico: Gained IPv6LL May 9 00:04:38.449502 systemd[1]: Started sshd@7-10.0.0.43:22-10.0.0.1:35630.service - OpenSSH per-connection server daemon (10.0.0.1:35630). May 9 00:04:38.492806 sshd[3901]: Accepted publickey for core from 10.0.0.1 port 35630 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 9 00:04:38.494443 sshd[3901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:04:38.498360 systemd-logind[1421]: New session 8 of user core. May 9 00:04:38.508140 systemd[1]: Started session-8.scope - Session 8 of User core. May 9 00:04:38.729294 sshd[3901]: pam_unix(sshd:session): session closed for user core May 9 00:04:38.732803 systemd[1]: sshd@7-10.0.0.43:22-10.0.0.1:35630.service: Deactivated successfully. May 9 00:04:38.734526 systemd[1]: session-8.scope: Deactivated successfully. May 9 00:04:38.735336 systemd-logind[1421]: Session 8 logged out. Waiting for processes to exit. May 9 00:04:38.736174 systemd-logind[1421]: Removed session 8. May 9 00:04:43.283202 containerd[1434]: time="2025-05-09T00:04:43.283153719Z" level=info msg="StopPodSandbox for \"2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27\"" May 9 00:04:43.283691 containerd[1434]: time="2025-05-09T00:04:43.283619931Z" level=info msg="StopPodSandbox for \"f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227\"" May 9 00:04:43.578092 containerd[1434]: 2025-05-09 00:04:43.399 [INFO][3959] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27" May 9 00:04:43.578092 containerd[1434]: 2025-05-09 00:04:43.399 [INFO][3959] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27" iface="eth0" netns="/var/run/netns/cni-0b8672e0-102f-c0a6-3033-92d8b4bb2226" May 9 00:04:43.578092 containerd[1434]: 2025-05-09 00:04:43.400 [INFO][3959] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27" iface="eth0" netns="/var/run/netns/cni-0b8672e0-102f-c0a6-3033-92d8b4bb2226" May 9 00:04:43.578092 containerd[1434]: 2025-05-09 00:04:43.404 [INFO][3959] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27" iface="eth0" netns="/var/run/netns/cni-0b8672e0-102f-c0a6-3033-92d8b4bb2226" May 9 00:04:43.578092 containerd[1434]: 2025-05-09 00:04:43.404 [INFO][3959] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27" May 9 00:04:43.578092 containerd[1434]: 2025-05-09 00:04:43.404 [INFO][3959] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27" May 9 00:04:43.578092 containerd[1434]: 2025-05-09 00:04:43.553 [INFO][3970] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27" HandleID="k8s-pod-network.2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27" Workload="localhost-k8s-calico--kube--controllers--cb8c949c8--7dmfb-eth0" May 9 00:04:43.578092 containerd[1434]: 2025-05-09 00:04:43.553 [INFO][3970] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:04:43.578092 containerd[1434]: 2025-05-09 00:04:43.553 [INFO][3970] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:04:43.578092 containerd[1434]: 2025-05-09 00:04:43.569 [WARNING][3970] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27" HandleID="k8s-pod-network.2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27" Workload="localhost-k8s-calico--kube--controllers--cb8c949c8--7dmfb-eth0" May 9 00:04:43.578092 containerd[1434]: 2025-05-09 00:04:43.569 [INFO][3970] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27" HandleID="k8s-pod-network.2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27" Workload="localhost-k8s-calico--kube--controllers--cb8c949c8--7dmfb-eth0" May 9 00:04:43.578092 containerd[1434]: 2025-05-09 00:04:43.571 [INFO][3970] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:04:43.578092 containerd[1434]: 2025-05-09 00:04:43.573 [INFO][3959] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27" May 9 00:04:43.578092 containerd[1434]: time="2025-05-09T00:04:43.576360881Z" level=info msg="TearDown network for sandbox \"2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27\" successfully" May 9 00:04:43.578092 containerd[1434]: time="2025-05-09T00:04:43.576387927Z" level=info msg="StopPodSandbox for \"2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27\" returns successfully" May 9 00:04:43.578812 systemd[1]: run-netns-cni\x2d0b8672e0\x2d102f\x2dc0a6\x2d3033\x2d92d8b4bb2226.mount: Deactivated successfully. 
May 9 00:04:43.580199 containerd[1434]: time="2025-05-09T00:04:43.579918900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cb8c949c8-7dmfb,Uid:f38b28a9-35a2-49f2-8623-f72cdb69aaa0,Namespace:calico-system,Attempt:1,}" May 9 00:04:43.586526 containerd[1434]: 2025-05-09 00:04:43.398 [INFO][3943] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227" May 9 00:04:43.586526 containerd[1434]: 2025-05-09 00:04:43.399 [INFO][3943] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227" iface="eth0" netns="/var/run/netns/cni-4aee3963-1982-876f-cd20-fe5da30e93dc" May 9 00:04:43.586526 containerd[1434]: 2025-05-09 00:04:43.400 [INFO][3943] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227" iface="eth0" netns="/var/run/netns/cni-4aee3963-1982-876f-cd20-fe5da30e93dc" May 9 00:04:43.586526 containerd[1434]: 2025-05-09 00:04:43.404 [INFO][3943] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227" iface="eth0" netns="/var/run/netns/cni-4aee3963-1982-876f-cd20-fe5da30e93dc" May 9 00:04:43.586526 containerd[1434]: 2025-05-09 00:04:43.404 [INFO][3943] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227" May 9 00:04:43.586526 containerd[1434]: 2025-05-09 00:04:43.404 [INFO][3943] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227" May 9 00:04:43.586526 containerd[1434]: 2025-05-09 00:04:43.553 [INFO][3969] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227" HandleID="k8s-pod-network.f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227" Workload="localhost-k8s-calico--apiserver--6f76859ff6--dh6dh-eth0" May 9 00:04:43.586526 containerd[1434]: 2025-05-09 00:04:43.553 [INFO][3969] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:04:43.586526 containerd[1434]: 2025-05-09 00:04:43.571 [INFO][3969] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:04:43.586526 containerd[1434]: 2025-05-09 00:04:43.581 [WARNING][3969] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227" HandleID="k8s-pod-network.f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227" Workload="localhost-k8s-calico--apiserver--6f76859ff6--dh6dh-eth0" May 9 00:04:43.586526 containerd[1434]: 2025-05-09 00:04:43.581 [INFO][3969] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227" HandleID="k8s-pod-network.f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227" Workload="localhost-k8s-calico--apiserver--6f76859ff6--dh6dh-eth0" May 9 00:04:43.586526 containerd[1434]: 2025-05-09 00:04:43.582 [INFO][3969] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:04:43.586526 containerd[1434]: 2025-05-09 00:04:43.584 [INFO][3943] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227" May 9 00:04:43.586853 containerd[1434]: time="2025-05-09T00:04:43.586618135Z" level=info msg="TearDown network for sandbox \"f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227\" successfully" May 9 00:04:43.586853 containerd[1434]: time="2025-05-09T00:04:43.586642500Z" level=info msg="StopPodSandbox for \"f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227\" returns successfully" May 9 00:04:43.587587 containerd[1434]: time="2025-05-09T00:04:43.587544477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f76859ff6-dh6dh,Uid:4be101ac-b076-41a6-bc24-cc2df8624f75,Namespace:calico-apiserver,Attempt:1,}" May 9 00:04:43.588875 systemd[1]: run-netns-cni\x2d4aee3963\x2d1982\x2d876f\x2dcd20\x2dfe5da30e93dc.mount: Deactivated successfully. May 9 00:04:43.747554 systemd[1]: Started sshd@8-10.0.0.43:22-10.0.0.1:44114.service - OpenSSH per-connection server daemon (10.0.0.1:44114). May 9 00:04:43.767243 systemd-networkd[1372]: cali74d3ef5aec7: Link UP May 9 00:04:43.767519 systemd-networkd[1372]: cali74d3ef5aec7: Gained carrier May 9 00:04:43.783343 containerd[1434]: 2025-05-09 00:04:43.666 [INFO][3998] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6f76859ff6--dh6dh-eth0 calico-apiserver-6f76859ff6- calico-apiserver 4be101ac-b076-41a6-bc24-cc2df8624f75 851 0 2025-05-09 00:04:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f76859ff6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6f76859ff6-dh6dh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali74d3ef5aec7 [] []}} ContainerID="4ae2b2b57d6c209f68b5e8230034d96450fd1a6762e3e0b6e32d92d37f03bb84" Namespace="calico-apiserver" Pod="calico-apiserver-6f76859ff6-dh6dh" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f76859ff6--dh6dh-" May 9 00:04:43.783343 containerd[1434]: 2025-05-09 00:04:43.666 [INFO][3998] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4ae2b2b57d6c209f68b5e8230034d96450fd1a6762e3e0b6e32d92d37f03bb84" Namespace="calico-apiserver" Pod="calico-apiserver-6f76859ff6-dh6dh" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f76859ff6--dh6dh-eth0" May 9 00:04:43.783343 containerd[1434]: 2025-05-09 00:04:43.699 [INFO][4020] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4ae2b2b57d6c209f68b5e8230034d96450fd1a6762e3e0b6e32d92d37f03bb84" HandleID="k8s-pod-network.4ae2b2b57d6c209f68b5e8230034d96450fd1a6762e3e0b6e32d92d37f03bb84" Workload="localhost-k8s-calico--apiserver--6f76859ff6--dh6dh-eth0" May 9 00:04:43.783343 containerd[1434]: 2025-05-09 00:04:43.713 [INFO][4020] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4ae2b2b57d6c209f68b5e8230034d96450fd1a6762e3e0b6e32d92d37f03bb84" HandleID="k8s-pod-network.4ae2b2b57d6c209f68b5e8230034d96450fd1a6762e3e0b6e32d92d37f03bb84" Workload="localhost-k8s-calico--apiserver--6f76859ff6--dh6dh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005030d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6f76859ff6-dh6dh", "timestamp":"2025-05-09 00:04:43.699146667 +0000 UTC"}, Hostname:"localhost", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 9 00:04:43.783343 containerd[1434]: 2025-05-09 00:04:43.714 [INFO][4020] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:04:43.783343 containerd[1434]: 2025-05-09 00:04:43.714 [INFO][4020] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:04:43.783343 containerd[1434]: 2025-05-09 00:04:43.714 [INFO][4020] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 9 00:04:43.783343 containerd[1434]: 2025-05-09 00:04:43.717 [INFO][4020] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4ae2b2b57d6c209f68b5e8230034d96450fd1a6762e3e0b6e32d92d37f03bb84" host="localhost" May 9 00:04:43.783343 containerd[1434]: 2025-05-09 00:04:43.729 [INFO][4020] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 9 00:04:43.783343 containerd[1434]: 2025-05-09 00:04:43.738 [INFO][4020] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 9 00:04:43.783343 containerd[1434]: 2025-05-09 00:04:43.740 [INFO][4020] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 9 00:04:43.783343 containerd[1434]: 2025-05-09 00:04:43.743 [INFO][4020] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 9 00:04:43.783343 containerd[1434]: 2025-05-09 00:04:43.743 [INFO][4020] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4ae2b2b57d6c209f68b5e8230034d96450fd1a6762e3e0b6e32d92d37f03bb84" host="localhost" May 9 00:04:43.783343 containerd[1434]: 2025-05-09 00:04:43.746 [INFO][4020] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4ae2b2b57d6c209f68b5e8230034d96450fd1a6762e3e0b6e32d92d37f03bb84 May 9 00:04:43.783343 containerd[1434]: 2025-05-09 00:04:43.753 [INFO][4020] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4ae2b2b57d6c209f68b5e8230034d96450fd1a6762e3e0b6e32d92d37f03bb84" host="localhost" May 9 00:04:43.783343 containerd[1434]: 2025-05-09 00:04:43.759 [INFO][4020] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.4ae2b2b57d6c209f68b5e8230034d96450fd1a6762e3e0b6e32d92d37f03bb84" host="localhost" May 9 00:04:43.783343 containerd[1434]: 2025-05-09 00:04:43.759 [INFO][4020] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.4ae2b2b57d6c209f68b5e8230034d96450fd1a6762e3e0b6e32d92d37f03bb84" host="localhost" May 9 00:04:43.783343 containerd[1434]: 2025-05-09 00:04:43.759 [INFO][4020] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
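The assignment above comes out of the node's affine block 192.168.88.128/26: Calico IPAM carves the pool into /26 blocks of 64 addresses and pins each block to a host, which is why the lookup sequence reads affinity → load block → claim. The first pod IP claimed is .129; the block's zeroth address, .128, is commonly taken by the node's vxlan.calico tunnel device (an assumption here — the tunnel address itself is not logged). A small net/netip sketch of the block bounds:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Affine block from the IPAM entries above.
	block := netip.MustParsePrefix("192.168.88.128/26")
	first := block.Addr() // ordinal 0: 192.168.88.128
	last := first
	for i := 0; i < (1<<(32-block.Bits()))-1; i++ { // 63 steps for a /26
		last = last.Next()
	}
	fmt.Printf("block %s spans %s..%s (64 addresses)\n", block, first, last)
	fmt.Println(block.Contains(netip.MustParseAddr("192.168.88.129"))) // true: first pod IP
}
```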
May 9 00:04:43.783343 containerd[1434]: 2025-05-09 00:04:43.759 [INFO][4020] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="4ae2b2b57d6c209f68b5e8230034d96450fd1a6762e3e0b6e32d92d37f03bb84" HandleID="k8s-pod-network.4ae2b2b57d6c209f68b5e8230034d96450fd1a6762e3e0b6e32d92d37f03bb84" Workload="localhost-k8s-calico--apiserver--6f76859ff6--dh6dh-eth0" May 9 00:04:43.783926 containerd[1434]: 2025-05-09 00:04:43.762 [INFO][3998] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4ae2b2b57d6c209f68b5e8230034d96450fd1a6762e3e0b6e32d92d37f03bb84" Namespace="calico-apiserver" Pod="calico-apiserver-6f76859ff6-dh6dh" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f76859ff6--dh6dh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f76859ff6--dh6dh-eth0", GenerateName:"calico-apiserver-6f76859ff6-", Namespace:"calico-apiserver", SelfLink:"", UID:"4be101ac-b076-41a6-bc24-cc2df8624f75", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 4, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f76859ff6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6f76859ff6-dh6dh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali74d3ef5aec7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:04:43.783926 containerd[1434]: 2025-05-09 00:04:43.762 [INFO][3998] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="4ae2b2b57d6c209f68b5e8230034d96450fd1a6762e3e0b6e32d92d37f03bb84" Namespace="calico-apiserver" Pod="calico-apiserver-6f76859ff6-dh6dh" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f76859ff6--dh6dh-eth0" May 9 00:04:43.783926 containerd[1434]: 2025-05-09 00:04:43.762 [INFO][3998] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali74d3ef5aec7 ContainerID="4ae2b2b57d6c209f68b5e8230034d96450fd1a6762e3e0b6e32d92d37f03bb84" Namespace="calico-apiserver" Pod="calico-apiserver-6f76859ff6-dh6dh" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f76859ff6--dh6dh-eth0" May 9 00:04:43.783926 containerd[1434]: 2025-05-09 00:04:43.768 [INFO][3998] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4ae2b2b57d6c209f68b5e8230034d96450fd1a6762e3e0b6e32d92d37f03bb84" Namespace="calico-apiserver" Pod="calico-apiserver-6f76859ff6-dh6dh" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f76859ff6--dh6dh-eth0" May 9 00:04:43.783926 containerd[1434]: 2025-05-09 00:04:43.769 [INFO][3998] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4ae2b2b57d6c209f68b5e8230034d96450fd1a6762e3e0b6e32d92d37f03bb84" 
Namespace="calico-apiserver" Pod="calico-apiserver-6f76859ff6-dh6dh" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f76859ff6--dh6dh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f76859ff6--dh6dh-eth0", GenerateName:"calico-apiserver-6f76859ff6-", Namespace:"calico-apiserver", SelfLink:"", UID:"4be101ac-b076-41a6-bc24-cc2df8624f75", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 4, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f76859ff6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4ae2b2b57d6c209f68b5e8230034d96450fd1a6762e3e0b6e32d92d37f03bb84", Pod:"calico-apiserver-6f76859ff6-dh6dh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali74d3ef5aec7", MAC:"a6:11:9b:22:3f:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:04:43.783926 containerd[1434]: 2025-05-09 00:04:43.779 [INFO][3998] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4ae2b2b57d6c209f68b5e8230034d96450fd1a6762e3e0b6e32d92d37f03bb84" Namespace="calico-apiserver" Pod="calico-apiserver-6f76859ff6-dh6dh" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f76859ff6--dh6dh-eth0" May 9 00:04:43.800818 sshd[4032]: Accepted publickey for core from 10.0.0.1 port 44114 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 9 00:04:43.802588 sshd[4032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:04:43.808222 containerd[1434]: time="2025-05-09T00:04:43.806935748Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:04:43.808222 containerd[1434]: time="2025-05-09T00:04:43.806984037Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:04:43.808222 containerd[1434]: time="2025-05-09T00:04:43.807011723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:04:43.808222 containerd[1434]: time="2025-05-09T00:04:43.807089218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:04:43.808216 systemd-logind[1421]: New session 9 of user core. May 9 00:04:43.815160 systemd[1]: Started session-9.scope - Session 9 of User core. May 9 00:04:43.820816 systemd[1]: Started cri-containerd-4ae2b2b57d6c209f68b5e8230034d96450fd1a6762e3e0b6e32d92d37f03bb84.scope - libcontainer container 4ae2b2b57d6c209f68b5e8230034d96450fd1a6762e3e0b6e32d92d37f03bb84. 
May 9 00:04:43.839871 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 9 00:04:43.857041 systemd-networkd[1372]: calif283ba0c735: Link UP May 9 00:04:43.858027 systemd-networkd[1372]: calif283ba0c735: Gained carrier May 9 00:04:43.874860 containerd[1434]: 2025-05-09 00:04:43.660 [INFO][3985] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--cb8c949c8--7dmfb-eth0 calico-kube-controllers-cb8c949c8- calico-system f38b28a9-35a2-49f2-8623-f72cdb69aaa0 852 0 2025-05-09 00:04:21 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:cb8c949c8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-cb8c949c8-7dmfb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif283ba0c735 [] []}} ContainerID="5d7fc8cd15b235ea16344681d60e00240db1bbc2b40cafb69eace32f0ac019d9" Namespace="calico-system" Pod="calico-kube-controllers-cb8c949c8-7dmfb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cb8c949c8--7dmfb-" May 9 00:04:43.874860 containerd[1434]: 2025-05-09 00:04:43.660 [INFO][3985] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5d7fc8cd15b235ea16344681d60e00240db1bbc2b40cafb69eace32f0ac019d9" Namespace="calico-system" Pod="calico-kube-controllers-cb8c949c8-7dmfb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cb8c949c8--7dmfb-eth0" May 9 00:04:43.874860 containerd[1434]: 2025-05-09 00:04:43.695 [INFO][4014] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5d7fc8cd15b235ea16344681d60e00240db1bbc2b40cafb69eace32f0ac019d9" HandleID="k8s-pod-network.5d7fc8cd15b235ea16344681d60e00240db1bbc2b40cafb69eace32f0ac019d9" Workload="localhost-k8s-calico--kube--controllers--cb8c949c8--7dmfb-eth0" May 9 00:04:43.874860 containerd[1434]: 2025-05-09 00:04:43.717 [INFO][4014] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5d7fc8cd15b235ea16344681d60e00240db1bbc2b40cafb69eace32f0ac019d9" HandleID="k8s-pod-network.5d7fc8cd15b235ea16344681d60e00240db1bbc2b40cafb69eace32f0ac019d9" Workload="localhost-k8s-calico--kube--controllers--cb8c949c8--7dmfb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000361720), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-cb8c949c8-7dmfb", "timestamp":"2025-05-09 00:04:43.695523075 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 9 00:04:43.874860 containerd[1434]: 2025-05-09 00:04:43.717 [INFO][4014] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:04:43.874860 containerd[1434]: 2025-05-09 00:04:43.759 [INFO][4014] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 9 00:04:43.874860 containerd[1434]: 2025-05-09 00:04:43.759 [INFO][4014] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 9 00:04:43.874860 containerd[1434]: 2025-05-09 00:04:43.823 [INFO][4014] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5d7fc8cd15b235ea16344681d60e00240db1bbc2b40cafb69eace32f0ac019d9" host="localhost" May 9 00:04:43.874860 containerd[1434]: 2025-05-09 00:04:43.832 [INFO][4014] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 9 00:04:43.874860 containerd[1434]: 2025-05-09 00:04:43.836 [INFO][4014] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 9 00:04:43.874860 containerd[1434]: 2025-05-09 00:04:43.838 [INFO][4014] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 9 00:04:43.874860 containerd[1434]: 2025-05-09 00:04:43.842 [INFO][4014] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 9 00:04:43.874860 containerd[1434]: 2025-05-09 00:04:43.842 [INFO][4014] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5d7fc8cd15b235ea16344681d60e00240db1bbc2b40cafb69eace32f0ac019d9" host="localhost" May 9 00:04:43.874860 containerd[1434]: 2025-05-09 00:04:43.843 [INFO][4014] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5d7fc8cd15b235ea16344681d60e00240db1bbc2b40cafb69eace32f0ac019d9 May 9 00:04:43.874860 containerd[1434]: 2025-05-09 00:04:43.847 [INFO][4014] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5d7fc8cd15b235ea16344681d60e00240db1bbc2b40cafb69eace32f0ac019d9" host="localhost" May 9 00:04:43.874860 containerd[1434]: 2025-05-09 00:04:43.852 [INFO][4014] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.5d7fc8cd15b235ea16344681d60e00240db1bbc2b40cafb69eace32f0ac019d9" host="localhost" May 9 00:04:43.874860 containerd[1434]: 2025-05-09 00:04:43.852 [INFO][4014] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.5d7fc8cd15b235ea16344681d60e00240db1bbc2b40cafb69eace32f0ac019d9" host="localhost" May 9 00:04:43.874860 containerd[1434]: 2025-05-09 00:04:43.852 [INFO][4014] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
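Note the interleaving in the two IPAM transactions: request [4020] (the apiserver pod) acquired the host-wide lock at 43.714 and released it at 43.759, while request [4014] (the kube-controllers pod) had been waiting since 43.717 and acquired it only at 43.759. That serialization is why claims leave the block one ordinal at a time: .129, then .130. A toy model of the same pattern (illustrative only, not Calico's internals):

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var (
		mu   sync.Mutex
		next = 129 // first free ordinal in 192.168.88.128/26
		wg   sync.WaitGroup
	)
	claim := func(pod string) { // one concurrent CNI ADD per pod
		defer wg.Done()
		mu.Lock() // "About to acquire host-wide IPAM lock."
		ip := fmt.Sprintf("192.168.88.%d/32", next)
		next++
		mu.Unlock() // "Released host-wide IPAM lock."
		fmt.Println(pod, "->", ip)
	}
	wg.Add(2)
	go claim("calico-apiserver-6f76859ff6-dh6dh")
	go claim("calico-kube-controllers-cb8c949c8-7dmfb")
	wg.Wait() // whichever ADD wins the lock gets the lower ordinal
}
```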
May 9 00:04:43.874860 containerd[1434]: 2025-05-09 00:04:43.852 [INFO][4014] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="5d7fc8cd15b235ea16344681d60e00240db1bbc2b40cafb69eace32f0ac019d9" HandleID="k8s-pod-network.5d7fc8cd15b235ea16344681d60e00240db1bbc2b40cafb69eace32f0ac019d9" Workload="localhost-k8s-calico--kube--controllers--cb8c949c8--7dmfb-eth0" May 9 00:04:43.875584 containerd[1434]: 2025-05-09 00:04:43.854 [INFO][3985] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5d7fc8cd15b235ea16344681d60e00240db1bbc2b40cafb69eace32f0ac019d9" Namespace="calico-system" Pod="calico-kube-controllers-cb8c949c8-7dmfb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cb8c949c8--7dmfb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--cb8c949c8--7dmfb-eth0", GenerateName:"calico-kube-controllers-cb8c949c8-", Namespace:"calico-system", SelfLink:"", UID:"f38b28a9-35a2-49f2-8623-f72cdb69aaa0", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 4, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cb8c949c8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-cb8c949c8-7dmfb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif283ba0c735", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:04:43.875584 containerd[1434]: 2025-05-09 00:04:43.854 [INFO][3985] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="5d7fc8cd15b235ea16344681d60e00240db1bbc2b40cafb69eace32f0ac019d9" Namespace="calico-system" Pod="calico-kube-controllers-cb8c949c8-7dmfb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cb8c949c8--7dmfb-eth0" May 9 00:04:43.875584 containerd[1434]: 2025-05-09 00:04:43.854 [INFO][3985] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif283ba0c735 ContainerID="5d7fc8cd15b235ea16344681d60e00240db1bbc2b40cafb69eace32f0ac019d9" Namespace="calico-system" Pod="calico-kube-controllers-cb8c949c8-7dmfb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cb8c949c8--7dmfb-eth0" May 9 00:04:43.875584 containerd[1434]: 2025-05-09 00:04:43.857 [INFO][3985] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5d7fc8cd15b235ea16344681d60e00240db1bbc2b40cafb69eace32f0ac019d9" Namespace="calico-system" Pod="calico-kube-controllers-cb8c949c8-7dmfb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cb8c949c8--7dmfb-eth0" May 9 00:04:43.875584 containerd[1434]: 2025-05-09 00:04:43.857 [INFO][3985] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="5d7fc8cd15b235ea16344681d60e00240db1bbc2b40cafb69eace32f0ac019d9" Namespace="calico-system" Pod="calico-kube-controllers-cb8c949c8-7dmfb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cb8c949c8--7dmfb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--cb8c949c8--7dmfb-eth0", GenerateName:"calico-kube-controllers-cb8c949c8-", Namespace:"calico-system", SelfLink:"", UID:"f38b28a9-35a2-49f2-8623-f72cdb69aaa0", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 4, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cb8c949c8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5d7fc8cd15b235ea16344681d60e00240db1bbc2b40cafb69eace32f0ac019d9", Pod:"calico-kube-controllers-cb8c949c8-7dmfb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif283ba0c735", MAC:"22:be:13:62:c2:4d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:04:43.875584 containerd[1434]: 2025-05-09 00:04:43.870 [INFO][3985] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5d7fc8cd15b235ea16344681d60e00240db1bbc2b40cafb69eace32f0ac019d9" Namespace="calico-system" Pod="calico-kube-controllers-cb8c949c8-7dmfb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cb8c949c8--7dmfb-eth0" May 9 00:04:43.879089 containerd[1434]: time="2025-05-09T00:04:43.879019819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f76859ff6-dh6dh,Uid:4be101ac-b076-41a6-bc24-cc2df8624f75,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"4ae2b2b57d6c209f68b5e8230034d96450fd1a6762e3e0b6e32d92d37f03bb84\"" May 9 00:04:43.881428 containerd[1434]: time="2025-05-09T00:04:43.881394085Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 9 00:04:43.894469 containerd[1434]: time="2025-05-09T00:04:43.894098620Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:04:43.894469 containerd[1434]: time="2025-05-09T00:04:43.894156111Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:04:43.894469 containerd[1434]: time="2025-05-09T00:04:43.894179675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:04:43.894469 containerd[1434]: time="2025-05-09T00:04:43.894268413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:04:43.922198 systemd[1]: Started cri-containerd-5d7fc8cd15b235ea16344681d60e00240db1bbc2b40cafb69eace32f0ac019d9.scope - libcontainer container 5d7fc8cd15b235ea16344681d60e00240db1bbc2b40cafb69eace32f0ac019d9. May 9 00:04:43.933887 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 9 00:04:43.954252 containerd[1434]: time="2025-05-09T00:04:43.954211181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cb8c949c8-7dmfb,Uid:f38b28a9-35a2-49f2-8623-f72cdb69aaa0,Namespace:calico-system,Attempt:1,} returns sandbox id \"5d7fc8cd15b235ea16344681d60e00240db1bbc2b40cafb69eace32f0ac019d9\"" May 9 00:04:44.022482 sshd[4032]: pam_unix(sshd:session): session closed for user core May 9 00:04:44.026616 systemd[1]: sshd@8-10.0.0.43:22-10.0.0.1:44114.service: Deactivated successfully. May 9 00:04:44.030221 systemd[1]: session-9.scope: Deactivated successfully. May 9 00:04:44.030931 systemd-logind[1421]: Session 9 logged out. Waiting for processes to exit. May 9 00:04:44.031752 systemd-logind[1421]: Removed session 9. May 9 00:04:45.283806 containerd[1434]: time="2025-05-09T00:04:45.283753911Z" level=info msg="StopPodSandbox for \"25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6\"" May 9 00:04:45.320557 containerd[1434]: time="2025-05-09T00:04:45.320505968Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:04:45.320803 containerd[1434]: time="2025-05-09T00:04:45.320709446Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" May 9 00:04:45.323189 containerd[1434]: time="2025-05-09T00:04:45.323133056Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:04:45.326217 containerd[1434]: time="2025-05-09T00:04:45.326171059Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:04:45.326966 containerd[1434]: time="2025-05-09T00:04:45.326925319Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 1.445496627s" May 9 00:04:45.326966 containerd[1434]: time="2025-05-09T00:04:45.326963966Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 9 00:04:45.328206 containerd[1434]: time="2025-05-09T00:04:45.328176351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 9 00:04:45.330349 containerd[1434]: time="2025-05-09T00:04:45.330114911Z" level=info msg="CreateContainer within sandbox \"4ae2b2b57d6c209f68b5e8230034d96450fd1a6762e3e0b6e32d92d37f03bb84\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 9 00:04:45.368564 containerd[1434]: 2025-05-09 00:04:45.330 [INFO][4174] cni-plugin/k8s.go 608: 
Cleaning up netns ContainerID="25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6" May 9 00:04:45.368564 containerd[1434]: 2025-05-09 00:04:45.330 [INFO][4174] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6" iface="eth0" netns="/var/run/netns/cni-d91826c9-7e47-e1b0-2e50-0283ee271623" May 9 00:04:45.368564 containerd[1434]: 2025-05-09 00:04:45.331 [INFO][4174] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6" iface="eth0" netns="/var/run/netns/cni-d91826c9-7e47-e1b0-2e50-0283ee271623" May 9 00:04:45.368564 containerd[1434]: 2025-05-09 00:04:45.331 [INFO][4174] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6" iface="eth0" netns="/var/run/netns/cni-d91826c9-7e47-e1b0-2e50-0283ee271623" May 9 00:04:45.368564 containerd[1434]: 2025-05-09 00:04:45.331 [INFO][4174] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6" May 9 00:04:45.368564 containerd[1434]: 2025-05-09 00:04:45.331 [INFO][4174] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6" May 9 00:04:45.368564 containerd[1434]: 2025-05-09 00:04:45.349 [INFO][4187] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6" HandleID="k8s-pod-network.25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6" Workload="localhost-k8s-coredns--6f6b679f8f--hhhps-eth0" May 9 00:04:45.368564 containerd[1434]: 2025-05-09 00:04:45.350 [INFO][4187] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:04:45.368564 containerd[1434]: 2025-05-09 00:04:45.350 [INFO][4187] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:04:45.368564 containerd[1434]: 2025-05-09 00:04:45.363 [WARNING][4187] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6" HandleID="k8s-pod-network.25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6" Workload="localhost-k8s-coredns--6f6b679f8f--hhhps-eth0" May 9 00:04:45.368564 containerd[1434]: 2025-05-09 00:04:45.363 [INFO][4187] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6" HandleID="k8s-pod-network.25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6" Workload="localhost-k8s-coredns--6f6b679f8f--hhhps-eth0" May 9 00:04:45.368564 containerd[1434]: 2025-05-09 00:04:45.364 [INFO][4187] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:04:45.368564 containerd[1434]: 2025-05-09 00:04:45.367 [INFO][4174] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6" May 9 00:04:45.370659 systemd[1]: run-netns-cni\x2dd91826c9\x2d7e47\x2de1b0\x2d2e50\x2d0283ee271623.mount: Deactivated successfully. 
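The WARNING in the release path above is benign by design: these sandboxes are being torn down and re-created (Attempt:1), and a CNI DEL must be safe to repeat, so "asked to release address but it doesn't exist" is logged and ignored rather than returned as an error. A sketch of that idempotent-release shape (the store and handle below are hypothetical, for illustration only):

```go
package main

import "fmt"

// ipam is a stand-in allocation store keyed by handle ID.
type ipam struct{ byHandle map[string]string }

// release never fails on a missing handle: CNI DEL may run more than once.
func (m *ipam) release(handle string) {
	if _, ok := m.byHandle[handle]; !ok {
		fmt.Printf("WARNING: asked to release %s but it doesn't exist; ignoring\n", handle)
		return
	}
	delete(m.byHandle, handle)
}

func main() {
	m := &ipam{byHandle: map[string]string{"k8s-pod-network.example": "192.168.88.131/26"}}
	m.release("k8s-pod-network.example") // normal release
	m.release("k8s-pod-network.example") // repeated DEL: warned, not failed
}
```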
May 9 00:04:45.371111 containerd[1434]: time="2025-05-09T00:04:45.371066787Z" level=info msg="TearDown network for sandbox \"25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6\" successfully" May 9 00:04:45.371111 containerd[1434]: time="2025-05-09T00:04:45.371105354Z" level=info msg="StopPodSandbox for \"25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6\" returns successfully" May 9 00:04:45.371436 kubelet[2467]: E0509 00:04:45.371400 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:04:45.372936 containerd[1434]: time="2025-05-09T00:04:45.372904848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hhhps,Uid:449f9335-cf62-43ca-b5aa-48636c5af7c8,Namespace:kube-system,Attempt:1,}" May 9 00:04:45.406249 containerd[1434]: time="2025-05-09T00:04:45.406199104Z" level=info msg="CreateContainer within sandbox \"4ae2b2b57d6c209f68b5e8230034d96450fd1a6762e3e0b6e32d92d37f03bb84\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9851da70adaa94b729e7f9fd68d1a43f23c4855289574164c50cc78e10d104c5\"" May 9 00:04:45.408925 containerd[1434]: time="2025-05-09T00:04:45.406840103Z" level=info msg="StartContainer for \"9851da70adaa94b729e7f9fd68d1a43f23c4855289574164c50cc78e10d104c5\"" May 9 00:04:45.458183 systemd[1]: Started cri-containerd-9851da70adaa94b729e7f9fd68d1a43f23c4855289574164c50cc78e10d104c5.scope - libcontainer container 9851da70adaa94b729e7f9fd68d1a43f23c4855289574164c50cc78e10d104c5. May 9 00:04:45.459245 systemd-networkd[1372]: cali74d3ef5aec7: Gained IPv6LL May 9 00:04:45.506868 containerd[1434]: time="2025-05-09T00:04:45.506788923Z" level=info msg="StartContainer for \"9851da70adaa94b729e7f9fd68d1a43f23c4855289574164c50cc78e10d104c5\" returns successfully" May 9 00:04:45.585253 systemd-networkd[1372]: calif283ba0c735: Gained IPv6LL May 9 00:04:45.673179 systemd-networkd[1372]: cali28a8284eb67: Link UP May 9 00:04:45.673401 systemd-networkd[1372]: cali28a8284eb67: Gained carrier May 9 00:04:45.689739 containerd[1434]: 2025-05-09 00:04:45.465 [INFO][4205] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--hhhps-eth0 coredns-6f6b679f8f- kube-system 449f9335-cf62-43ca-b5aa-48636c5af7c8 873 0 2025-05-09 00:04:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-hhhps eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali28a8284eb67 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="6449291265cf697b6c7ebeef1241a5af82e18950aca25e855f4674262242820b" Namespace="kube-system" Pod="coredns-6f6b679f8f-hhhps" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hhhps-" May 9 00:04:45.689739 containerd[1434]: 2025-05-09 00:04:45.465 [INFO][4205] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6449291265cf697b6c7ebeef1241a5af82e18950aca25e855f4674262242820b" Namespace="kube-system" Pod="coredns-6f6b679f8f-hhhps" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hhhps-eth0" May 9 00:04:45.689739 containerd[1434]: 2025-05-09 00:04:45.495 [INFO][4237] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="6449291265cf697b6c7ebeef1241a5af82e18950aca25e855f4674262242820b" HandleID="k8s-pod-network.6449291265cf697b6c7ebeef1241a5af82e18950aca25e855f4674262242820b" Workload="localhost-k8s-coredns--6f6b679f8f--hhhps-eth0" May 9 00:04:45.689739 containerd[1434]: 2025-05-09 00:04:45.611 [INFO][4237] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6449291265cf697b6c7ebeef1241a5af82e18950aca25e855f4674262242820b" HandleID="k8s-pod-network.6449291265cf697b6c7ebeef1241a5af82e18950aca25e855f4674262242820b" Workload="localhost-k8s-coredns--6f6b679f8f--hhhps-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003612f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-hhhps", "timestamp":"2025-05-09 00:04:45.495844773 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 9 00:04:45.689739 containerd[1434]: 2025-05-09 00:04:45.611 [INFO][4237] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:04:45.689739 containerd[1434]: 2025-05-09 00:04:45.611 [INFO][4237] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:04:45.689739 containerd[1434]: 2025-05-09 00:04:45.611 [INFO][4237] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 9 00:04:45.689739 containerd[1434]: 2025-05-09 00:04:45.614 [INFO][4237] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6449291265cf697b6c7ebeef1241a5af82e18950aca25e855f4674262242820b" host="localhost" May 9 00:04:45.689739 containerd[1434]: 2025-05-09 00:04:45.619 [INFO][4237] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 9 00:04:45.689739 containerd[1434]: 2025-05-09 00:04:45.628 [INFO][4237] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 9 00:04:45.689739 containerd[1434]: 2025-05-09 00:04:45.630 [INFO][4237] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 9 00:04:45.689739 containerd[1434]: 2025-05-09 00:04:45.632 [INFO][4237] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 9 00:04:45.689739 containerd[1434]: 2025-05-09 00:04:45.632 [INFO][4237] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6449291265cf697b6c7ebeef1241a5af82e18950aca25e855f4674262242820b" host="localhost" May 9 00:04:45.689739 containerd[1434]: 2025-05-09 00:04:45.634 [INFO][4237] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6449291265cf697b6c7ebeef1241a5af82e18950aca25e855f4674262242820b May 9 00:04:45.689739 containerd[1434]: 2025-05-09 00:04:45.643 [INFO][4237] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6449291265cf697b6c7ebeef1241a5af82e18950aca25e855f4674262242820b" host="localhost" May 9 00:04:45.689739 containerd[1434]: 2025-05-09 00:04:45.668 [INFO][4237] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.6449291265cf697b6c7ebeef1241a5af82e18950aca25e855f4674262242820b" host="localhost" May 9 00:04:45.689739 containerd[1434]: 2025-05-09 00:04:45.668 [INFO][4237] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] 
handle="k8s-pod-network.6449291265cf697b6c7ebeef1241a5af82e18950aca25e855f4674262242820b" host="localhost" May 9 00:04:45.689739 containerd[1434]: 2025-05-09 00:04:45.668 [INFO][4237] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:04:45.689739 containerd[1434]: 2025-05-09 00:04:45.668 [INFO][4237] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="6449291265cf697b6c7ebeef1241a5af82e18950aca25e855f4674262242820b" HandleID="k8s-pod-network.6449291265cf697b6c7ebeef1241a5af82e18950aca25e855f4674262242820b" Workload="localhost-k8s-coredns--6f6b679f8f--hhhps-eth0" May 9 00:04:45.690640 containerd[1434]: 2025-05-09 00:04:45.670 [INFO][4205] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6449291265cf697b6c7ebeef1241a5af82e18950aca25e855f4674262242820b" Namespace="kube-system" Pod="coredns-6f6b679f8f-hhhps" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hhhps-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--hhhps-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"449f9335-cf62-43ca-b5aa-48636c5af7c8", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 4, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-hhhps", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali28a8284eb67", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:04:45.690640 containerd[1434]: 2025-05-09 00:04:45.670 [INFO][4205] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="6449291265cf697b6c7ebeef1241a5af82e18950aca25e855f4674262242820b" Namespace="kube-system" Pod="coredns-6f6b679f8f-hhhps" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hhhps-eth0" May 9 00:04:45.690640 containerd[1434]: 2025-05-09 00:04:45.671 [INFO][4205] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali28a8284eb67 ContainerID="6449291265cf697b6c7ebeef1241a5af82e18950aca25e855f4674262242820b" Namespace="kube-system" Pod="coredns-6f6b679f8f-hhhps" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hhhps-eth0" May 9 00:04:45.690640 containerd[1434]: 2025-05-09 00:04:45.673 [INFO][4205] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="6449291265cf697b6c7ebeef1241a5af82e18950aca25e855f4674262242820b" Namespace="kube-system" Pod="coredns-6f6b679f8f-hhhps" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hhhps-eth0" May 9 00:04:45.690640 containerd[1434]: 2025-05-09 00:04:45.673 [INFO][4205] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6449291265cf697b6c7ebeef1241a5af82e18950aca25e855f4674262242820b" Namespace="kube-system" Pod="coredns-6f6b679f8f-hhhps" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hhhps-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--hhhps-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"449f9335-cf62-43ca-b5aa-48636c5af7c8", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 4, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6449291265cf697b6c7ebeef1241a5af82e18950aca25e855f4674262242820b", Pod:"coredns-6f6b679f8f-hhhps", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali28a8284eb67", MAC:"22:5d:24:ce:c9:49", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:04:45.690640 containerd[1434]: 2025-05-09 00:04:45.687 [INFO][4205] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6449291265cf697b6c7ebeef1241a5af82e18950aca25e855f4674262242820b" Namespace="kube-system" Pod="coredns-6f6b679f8f-hhhps" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hhhps-eth0" May 9 00:04:45.712394 containerd[1434]: time="2025-05-09T00:04:45.711912254Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:04:45.712394 containerd[1434]: time="2025-05-09T00:04:45.712348054Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:04:45.712394 containerd[1434]: time="2025-05-09T00:04:45.712360777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:04:45.712558 containerd[1434]: time="2025-05-09T00:04:45.712440471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:04:45.737178 systemd[1]: Started cri-containerd-6449291265cf697b6c7ebeef1241a5af82e18950aca25e855f4674262242820b.scope - libcontainer container 6449291265cf697b6c7ebeef1241a5af82e18950aca25e855f4674262242820b. May 9 00:04:45.746921 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 9 00:04:45.763424 containerd[1434]: time="2025-05-09T00:04:45.763381881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hhhps,Uid:449f9335-cf62-43ca-b5aa-48636c5af7c8,Namespace:kube-system,Attempt:1,} returns sandbox id \"6449291265cf697b6c7ebeef1241a5af82e18950aca25e855f4674262242820b\"" May 9 00:04:45.764082 kubelet[2467]: E0509 00:04:45.764053 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:04:45.765995 containerd[1434]: time="2025-05-09T00:04:45.765959159Z" level=info msg="CreateContainer within sandbox \"6449291265cf697b6c7ebeef1241a5af82e18950aca25e855f4674262242820b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 9 00:04:45.783325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount405613933.mount: Deactivated successfully. May 9 00:04:45.785286 containerd[1434]: time="2025-05-09T00:04:45.785246017Z" level=info msg="CreateContainer within sandbox \"6449291265cf697b6c7ebeef1241a5af82e18950aca25e855f4674262242820b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6da3d9171788a9734a3146215c162e71059167b92c1c8d40cb89917e1e4168c6\"" May 9 00:04:45.786183 containerd[1434]: time="2025-05-09T00:04:45.786152225Z" level=info msg="StartContainer for \"6da3d9171788a9734a3146215c162e71059167b92c1c8d40cb89917e1e4168c6\"" May 9 00:04:45.814175 systemd[1]: Started cri-containerd-6da3d9171788a9734a3146215c162e71059167b92c1c8d40cb89917e1e4168c6.scope - libcontainer container 6da3d9171788a9734a3146215c162e71059167b92c1c8d40cb89917e1e4168c6. May 9 00:04:45.839224 containerd[1434]: time="2025-05-09T00:04:45.839110449Z" level=info msg="StartContainer for \"6da3d9171788a9734a3146215c162e71059167b92c1c8d40cb89917e1e4168c6\" returns successfully" May 9 00:04:46.289111 containerd[1434]: time="2025-05-09T00:04:46.288513881Z" level=info msg="StopPodSandbox for \"f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1\"" May 9 00:04:46.289111 containerd[1434]: time="2025-05-09T00:04:46.288515121Z" level=info msg="StopPodSandbox for \"fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376\"" May 9 00:04:46.289111 containerd[1434]: time="2025-05-09T00:04:46.288535205Z" level=info msg="StopPodSandbox for \"137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0\"" May 9 00:04:46.466680 containerd[1434]: 2025-05-09 00:04:46.394 [INFO][4408] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0" May 9 00:04:46.466680 containerd[1434]: 2025-05-09 00:04:46.394 [INFO][4408] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0" iface="eth0" netns="/var/run/netns/cni-bbf99594-11a3-2245-5130-98eff61239e1" May 9 00:04:46.466680 containerd[1434]: 2025-05-09 00:04:46.395 [INFO][4408] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0" iface="eth0" netns="/var/run/netns/cni-bbf99594-11a3-2245-5130-98eff61239e1" May 9 00:04:46.466680 containerd[1434]: 2025-05-09 00:04:46.395 [INFO][4408] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0" iface="eth0" netns="/var/run/netns/cni-bbf99594-11a3-2245-5130-98eff61239e1" May 9 00:04:46.466680 containerd[1434]: 2025-05-09 00:04:46.395 [INFO][4408] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0" May 9 00:04:46.466680 containerd[1434]: 2025-05-09 00:04:46.395 [INFO][4408] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0" May 9 00:04:46.466680 containerd[1434]: 2025-05-09 00:04:46.430 [INFO][4425] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0" HandleID="k8s-pod-network.137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0" Workload="localhost-k8s-csi--node--driver--sc8g8-eth0" May 9 00:04:46.466680 containerd[1434]: 2025-05-09 00:04:46.432 [INFO][4425] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:04:46.466680 containerd[1434]: 2025-05-09 00:04:46.432 [INFO][4425] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:04:46.466680 containerd[1434]: 2025-05-09 00:04:46.449 [WARNING][4425] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0" HandleID="k8s-pod-network.137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0" Workload="localhost-k8s-csi--node--driver--sc8g8-eth0" May 9 00:04:46.466680 containerd[1434]: 2025-05-09 00:04:46.449 [INFO][4425] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0" HandleID="k8s-pod-network.137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0" Workload="localhost-k8s-csi--node--driver--sc8g8-eth0" May 9 00:04:46.466680 containerd[1434]: 2025-05-09 00:04:46.453 [INFO][4425] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:04:46.466680 containerd[1434]: 2025-05-09 00:04:46.461 [INFO][4408] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0" May 9 00:04:46.467570 containerd[1434]: time="2025-05-09T00:04:46.467492842Z" level=info msg="TearDown network for sandbox \"137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0\" successfully" May 9 00:04:46.467705 kubelet[2467]: E0509 00:04:46.467678 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:04:46.468308 containerd[1434]: time="2025-05-09T00:04:46.467888674Z" level=info msg="StopPodSandbox for \"137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0\" returns successfully" May 9 00:04:46.469332 containerd[1434]: time="2025-05-09T00:04:46.469187348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sc8g8,Uid:cd2cc3f5-b622-427f-8e40-c278d97d553c,Namespace:calico-system,Attempt:1,}" May 9 00:04:46.480488 kubelet[2467]: I0509 00:04:46.480399 2467 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6f76859ff6-dh6dh" podStartSLOduration=25.033249714 podStartE2EDuration="26.480373569s" podCreationTimestamp="2025-05-09 00:04:20 +0000 UTC" firstStartedPulling="2025-05-09 00:04:43.880899588 +0000 UTC m=+35.691568879" lastFinishedPulling="2025-05-09 00:04:45.328023363 +0000 UTC m=+37.138692734" observedRunningTime="2025-05-09 00:04:46.480053191 +0000 UTC m=+38.290722482" watchObservedRunningTime="2025-05-09 00:04:46.480373569 +0000 UTC m=+38.291042860" May 9 00:04:46.495149 containerd[1434]: 2025-05-09 00:04:46.399 [INFO][4397] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376" May 9 00:04:46.495149 containerd[1434]: 2025-05-09 00:04:46.399 [INFO][4397] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376" iface="eth0" netns="/var/run/netns/cni-6739347f-d7f0-6425-3073-9cc0796957ac" May 9 00:04:46.495149 containerd[1434]: 2025-05-09 00:04:46.399 [INFO][4397] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376" iface="eth0" netns="/var/run/netns/cni-6739347f-d7f0-6425-3073-9cc0796957ac" May 9 00:04:46.495149 containerd[1434]: 2025-05-09 00:04:46.400 [INFO][4397] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376" iface="eth0" netns="/var/run/netns/cni-6739347f-d7f0-6425-3073-9cc0796957ac" May 9 00:04:46.495149 containerd[1434]: 2025-05-09 00:04:46.400 [INFO][4397] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376" May 9 00:04:46.495149 containerd[1434]: 2025-05-09 00:04:46.400 [INFO][4397] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376" May 9 00:04:46.495149 containerd[1434]: 2025-05-09 00:04:46.446 [INFO][4427] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376" HandleID="k8s-pod-network.fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376" Workload="localhost-k8s-calico--apiserver--6f76859ff6--z8gcd-eth0" May 9 00:04:46.495149 containerd[1434]: 2025-05-09 00:04:46.446 [INFO][4427] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:04:46.495149 containerd[1434]: 2025-05-09 00:04:46.454 [INFO][4427] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:04:46.495149 containerd[1434]: 2025-05-09 00:04:46.481 [WARNING][4427] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376" HandleID="k8s-pod-network.fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376" Workload="localhost-k8s-calico--apiserver--6f76859ff6--z8gcd-eth0" May 9 00:04:46.495149 containerd[1434]: 2025-05-09 00:04:46.481 [INFO][4427] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376" HandleID="k8s-pod-network.fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376" Workload="localhost-k8s-calico--apiserver--6f76859ff6--z8gcd-eth0" May 9 00:04:46.495149 containerd[1434]: 2025-05-09 00:04:46.487 [INFO][4427] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:04:46.495149 containerd[1434]: 2025-05-09 00:04:46.492 [INFO][4397] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376" May 9 00:04:46.496246 containerd[1434]: time="2025-05-09T00:04:46.496205908Z" level=info msg="TearDown network for sandbox \"fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376\" successfully" May 9 00:04:46.496246 containerd[1434]: time="2025-05-09T00:04:46.496237913Z" level=info msg="StopPodSandbox for \"fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376\" returns successfully" May 9 00:04:46.499682 kubelet[2467]: I0509 00:04:46.499604 2467 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-hhhps" podStartSLOduration=31.499575636 podStartE2EDuration="31.499575636s" podCreationTimestamp="2025-05-09 00:04:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:04:46.49953903 +0000 UTC m=+38.310208321" watchObservedRunningTime="2025-05-09 00:04:46.499575636 +0000 UTC m=+38.310244927" May 9 00:04:46.501453 containerd[1434]: time="2025-05-09T00:04:46.501382042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f76859ff6-z8gcd,Uid:1dfd2307-f708-4433-a5bd-771cc97fedba,Namespace:calico-apiserver,Attempt:1,}" May 9 00:04:46.519592 containerd[1434]: 2025-05-09 00:04:46.403 [INFO][4402] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1" May 9 00:04:46.519592 containerd[1434]: 2025-05-09 00:04:46.403 [INFO][4402] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1" iface="eth0" netns="/var/run/netns/cni-6e9c2f33-8b78-a1eb-0f41-3eef82c36709" May 9 00:04:46.519592 containerd[1434]: 2025-05-09 00:04:46.404 [INFO][4402] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1" iface="eth0" netns="/var/run/netns/cni-6e9c2f33-8b78-a1eb-0f41-3eef82c36709" May 9 00:04:46.519592 containerd[1434]: 2025-05-09 00:04:46.404 [INFO][4402] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1" iface="eth0" netns="/var/run/netns/cni-6e9c2f33-8b78-a1eb-0f41-3eef82c36709" May 9 00:04:46.519592 containerd[1434]: 2025-05-09 00:04:46.404 [INFO][4402] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1" May 9 00:04:46.519592 containerd[1434]: 2025-05-09 00:04:46.404 [INFO][4402] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1" May 9 00:04:46.519592 containerd[1434]: 2025-05-09 00:04:46.472 [INFO][4437] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1" HandleID="k8s-pod-network.f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1" Workload="localhost-k8s-coredns--6f6b679f8f--qhkqc-eth0" May 9 00:04:46.519592 containerd[1434]: 2025-05-09 00:04:46.472 [INFO][4437] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:04:46.519592 containerd[1434]: 2025-05-09 00:04:46.488 [INFO][4437] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 9 00:04:46.519592 containerd[1434]: 2025-05-09 00:04:46.504 [WARNING][4437] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1" HandleID="k8s-pod-network.f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1" Workload="localhost-k8s-coredns--6f6b679f8f--qhkqc-eth0" May 9 00:04:46.519592 containerd[1434]: 2025-05-09 00:04:46.504 [INFO][4437] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1" HandleID="k8s-pod-network.f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1" Workload="localhost-k8s-coredns--6f6b679f8f--qhkqc-eth0" May 9 00:04:46.519592 containerd[1434]: 2025-05-09 00:04:46.506 [INFO][4437] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:04:46.519592 containerd[1434]: 2025-05-09 00:04:46.514 [INFO][4402] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1" May 9 00:04:46.520368 containerd[1434]: time="2025-05-09T00:04:46.520186718Z" level=info msg="TearDown network for sandbox \"f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1\" successfully" May 9 00:04:46.520368 containerd[1434]: time="2025-05-09T00:04:46.520224605Z" level=info msg="StopPodSandbox for \"f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1\" returns successfully" May 9 00:04:46.520576 kubelet[2467]: E0509 00:04:46.520543 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:04:46.522507 containerd[1434]: time="2025-05-09T00:04:46.522373713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qhkqc,Uid:61e5788f-c1c1-468b-a0b1-b88de0f0fb58,Namespace:kube-system,Attempt:1,}" May 9 00:04:46.586027 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2977839739.mount: Deactivated successfully. May 9 00:04:46.586146 systemd[1]: run-netns-cni\x2dbbf99594\x2d11a3\x2d2245\x2d5130\x2d98eff61239e1.mount: Deactivated successfully. May 9 00:04:46.586199 systemd[1]: run-netns-cni\x2d6739347f\x2dd7f0\x2d6425\x2d3073\x2d9cc0796957ac.mount: Deactivated successfully. May 9 00:04:46.586249 systemd[1]: run-netns-cni\x2d6e9c2f33\x2d8b78\x2da1eb\x2d0f41\x2d3eef82c36709.mount: Deactivated successfully. 
May 9 00:04:46.860772 systemd-networkd[1372]: cali51d7891b935: Link UP May 9 00:04:46.861954 systemd-networkd[1372]: cali51d7891b935: Gained carrier May 9 00:04:46.889615 containerd[1434]: 2025-05-09 00:04:46.581 [INFO][4457] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--sc8g8-eth0 csi-node-driver- calico-system cd2cc3f5-b622-427f-8e40-c278d97d553c 896 0 2025-05-09 00:04:21 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5bcd8f69 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-sc8g8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali51d7891b935 [] []}} ContainerID="8d26d0239bdbfe5387a1ae525cbbabe780915c8fbb6e2cdbaaca764fd4305126" Namespace="calico-system" Pod="csi-node-driver-sc8g8" WorkloadEndpoint="localhost-k8s-csi--node--driver--sc8g8-" May 9 00:04:46.889615 containerd[1434]: 2025-05-09 00:04:46.581 [INFO][4457] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8d26d0239bdbfe5387a1ae525cbbabe780915c8fbb6e2cdbaaca764fd4305126" Namespace="calico-system" Pod="csi-node-driver-sc8g8" WorkloadEndpoint="localhost-k8s-csi--node--driver--sc8g8-eth0" May 9 00:04:46.889615 containerd[1434]: 2025-05-09 00:04:46.656 [INFO][4504] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8d26d0239bdbfe5387a1ae525cbbabe780915c8fbb6e2cdbaaca764fd4305126" HandleID="k8s-pod-network.8d26d0239bdbfe5387a1ae525cbbabe780915c8fbb6e2cdbaaca764fd4305126" Workload="localhost-k8s-csi--node--driver--sc8g8-eth0" May 9 00:04:46.889615 containerd[1434]: 2025-05-09 00:04:46.674 [INFO][4504] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8d26d0239bdbfe5387a1ae525cbbabe780915c8fbb6e2cdbaaca764fd4305126" HandleID="k8s-pod-network.8d26d0239bdbfe5387a1ae525cbbabe780915c8fbb6e2cdbaaca764fd4305126" Workload="localhost-k8s-csi--node--driver--sc8g8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003e1f00), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-sc8g8", "timestamp":"2025-05-09 00:04:46.656476051 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 9 00:04:46.889615 containerd[1434]: 2025-05-09 00:04:46.674 [INFO][4504] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:04:46.889615 containerd[1434]: 2025-05-09 00:04:46.674 [INFO][4504] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 9 00:04:46.889615 containerd[1434]: 2025-05-09 00:04:46.674 [INFO][4504] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 9 00:04:46.889615 containerd[1434]: 2025-05-09 00:04:46.682 [INFO][4504] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8d26d0239bdbfe5387a1ae525cbbabe780915c8fbb6e2cdbaaca764fd4305126" host="localhost" May 9 00:04:46.889615 containerd[1434]: 2025-05-09 00:04:46.769 [INFO][4504] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 9 00:04:46.889615 containerd[1434]: 2025-05-09 00:04:46.777 [INFO][4504] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 9 00:04:46.889615 containerd[1434]: 2025-05-09 00:04:46.781 [INFO][4504] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 9 00:04:46.889615 containerd[1434]: 2025-05-09 00:04:46.786 [INFO][4504] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 9 00:04:46.889615 containerd[1434]: 2025-05-09 00:04:46.786 [INFO][4504] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8d26d0239bdbfe5387a1ae525cbbabe780915c8fbb6e2cdbaaca764fd4305126" host="localhost" May 9 00:04:46.889615 containerd[1434]: 2025-05-09 00:04:46.791 [INFO][4504] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8d26d0239bdbfe5387a1ae525cbbabe780915c8fbb6e2cdbaaca764fd4305126 May 9 00:04:46.889615 containerd[1434]: 2025-05-09 00:04:46.834 [INFO][4504] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8d26d0239bdbfe5387a1ae525cbbabe780915c8fbb6e2cdbaaca764fd4305126" host="localhost" May 9 00:04:46.889615 containerd[1434]: 2025-05-09 00:04:46.853 [INFO][4504] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.8d26d0239bdbfe5387a1ae525cbbabe780915c8fbb6e2cdbaaca764fd4305126" host="localhost" May 9 00:04:46.889615 containerd[1434]: 2025-05-09 00:04:46.853 [INFO][4504] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.8d26d0239bdbfe5387a1ae525cbbabe780915c8fbb6e2cdbaaca764fd4305126" host="localhost" May 9 00:04:46.889615 containerd[1434]: 2025-05-09 00:04:46.853 [INFO][4504] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 9 00:04:46.889615 containerd[1434]: 2025-05-09 00:04:46.853 [INFO][4504] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="8d26d0239bdbfe5387a1ae525cbbabe780915c8fbb6e2cdbaaca764fd4305126" HandleID="k8s-pod-network.8d26d0239bdbfe5387a1ae525cbbabe780915c8fbb6e2cdbaaca764fd4305126" Workload="localhost-k8s-csi--node--driver--sc8g8-eth0" May 9 00:04:46.890453 containerd[1434]: 2025-05-09 00:04:46.857 [INFO][4457] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8d26d0239bdbfe5387a1ae525cbbabe780915c8fbb6e2cdbaaca764fd4305126" Namespace="calico-system" Pod="csi-node-driver-sc8g8" WorkloadEndpoint="localhost-k8s-csi--node--driver--sc8g8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--sc8g8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cd2cc3f5-b622-427f-8e40-c278d97d553c", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 4, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-sc8g8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali51d7891b935", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:04:46.890453 containerd[1434]: 2025-05-09 00:04:46.858 [INFO][4457] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="8d26d0239bdbfe5387a1ae525cbbabe780915c8fbb6e2cdbaaca764fd4305126" Namespace="calico-system" Pod="csi-node-driver-sc8g8" WorkloadEndpoint="localhost-k8s-csi--node--driver--sc8g8-eth0" May 9 00:04:46.890453 containerd[1434]: 2025-05-09 00:04:46.858 [INFO][4457] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali51d7891b935 ContainerID="8d26d0239bdbfe5387a1ae525cbbabe780915c8fbb6e2cdbaaca764fd4305126" Namespace="calico-system" Pod="csi-node-driver-sc8g8" WorkloadEndpoint="localhost-k8s-csi--node--driver--sc8g8-eth0" May 9 00:04:46.890453 containerd[1434]: 2025-05-09 00:04:46.863 [INFO][4457] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8d26d0239bdbfe5387a1ae525cbbabe780915c8fbb6e2cdbaaca764fd4305126" Namespace="calico-system" Pod="csi-node-driver-sc8g8" WorkloadEndpoint="localhost-k8s-csi--node--driver--sc8g8-eth0" May 9 00:04:46.890453 containerd[1434]: 2025-05-09 00:04:46.866 [INFO][4457] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8d26d0239bdbfe5387a1ae525cbbabe780915c8fbb6e2cdbaaca764fd4305126" Namespace="calico-system" Pod="csi-node-driver-sc8g8" WorkloadEndpoint="localhost-k8s-csi--node--driver--sc8g8-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--sc8g8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cd2cc3f5-b622-427f-8e40-c278d97d553c", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 4, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8d26d0239bdbfe5387a1ae525cbbabe780915c8fbb6e2cdbaaca764fd4305126", Pod:"csi-node-driver-sc8g8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali51d7891b935", MAC:"b2:fa:f0:f5:0e:4b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:04:46.890453 containerd[1434]: 2025-05-09 00:04:46.885 [INFO][4457] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8d26d0239bdbfe5387a1ae525cbbabe780915c8fbb6e2cdbaaca764fd4305126" Namespace="calico-system" Pod="csi-node-driver-sc8g8" WorkloadEndpoint="localhost-k8s-csi--node--driver--sc8g8-eth0" May 9 00:04:46.919067 containerd[1434]: time="2025-05-09T00:04:46.918767298Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:04:46.919067 containerd[1434]: time="2025-05-09T00:04:46.918821908Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:04:46.919067 containerd[1434]: time="2025-05-09T00:04:46.918832590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:04:46.919067 containerd[1434]: time="2025-05-09T00:04:46.918918085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:04:46.944827 systemd-networkd[1372]: cali0ad11ed22e0: Link UP May 9 00:04:46.945302 systemd-networkd[1372]: cali0ad11ed22e0: Gained carrier May 9 00:04:46.974184 systemd[1]: Started cri-containerd-8d26d0239bdbfe5387a1ae525cbbabe780915c8fbb6e2cdbaaca764fd4305126.scope - libcontainer container 8d26d0239bdbfe5387a1ae525cbbabe780915c8fbb6e2cdbaaca764fd4305126. 
May 9 00:04:46.975323 containerd[1434]: 2025-05-09 00:04:46.623 [INFO][4472] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6f76859ff6--z8gcd-eth0 calico-apiserver-6f76859ff6- calico-apiserver 1dfd2307-f708-4433-a5bd-771cc97fedba 897 0 2025-05-09 00:04:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f76859ff6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6f76859ff6-z8gcd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0ad11ed22e0 [] []}} ContainerID="467d58b21a4247d9cb7327422e7a8dd77978420c4e8afc603e980196b42894a0" Namespace="calico-apiserver" Pod="calico-apiserver-6f76859ff6-z8gcd" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f76859ff6--z8gcd-" May 9 00:04:46.975323 containerd[1434]: 2025-05-09 00:04:46.623 [INFO][4472] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="467d58b21a4247d9cb7327422e7a8dd77978420c4e8afc603e980196b42894a0" Namespace="calico-apiserver" Pod="calico-apiserver-6f76859ff6-z8gcd" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f76859ff6--z8gcd-eth0" May 9 00:04:46.975323 containerd[1434]: 2025-05-09 00:04:46.676 [INFO][4512] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="467d58b21a4247d9cb7327422e7a8dd77978420c4e8afc603e980196b42894a0" HandleID="k8s-pod-network.467d58b21a4247d9cb7327422e7a8dd77978420c4e8afc603e980196b42894a0" Workload="localhost-k8s-calico--apiserver--6f76859ff6--z8gcd-eth0" May 9 00:04:46.975323 containerd[1434]: 2025-05-09 00:04:46.772 [INFO][4512] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="467d58b21a4247d9cb7327422e7a8dd77978420c4e8afc603e980196b42894a0" HandleID="k8s-pod-network.467d58b21a4247d9cb7327422e7a8dd77978420c4e8afc603e980196b42894a0" Workload="localhost-k8s-calico--apiserver--6f76859ff6--z8gcd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004dbc0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6f76859ff6-z8gcd", "timestamp":"2025-05-09 00:04:46.676046185 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 9 00:04:46.975323 containerd[1434]: 2025-05-09 00:04:46.772 [INFO][4512] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:04:46.975323 containerd[1434]: 2025-05-09 00:04:46.853 [INFO][4512] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 9 00:04:46.975323 containerd[1434]: 2025-05-09 00:04:46.853 [INFO][4512] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 9 00:04:46.975323 containerd[1434]: 2025-05-09 00:04:46.867 [INFO][4512] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.467d58b21a4247d9cb7327422e7a8dd77978420c4e8afc603e980196b42894a0" host="localhost" May 9 00:04:46.975323 containerd[1434]: 2025-05-09 00:04:46.888 [INFO][4512] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 9 00:04:46.975323 containerd[1434]: 2025-05-09 00:04:46.897 [INFO][4512] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 9 00:04:46.975323 containerd[1434]: 2025-05-09 00:04:46.900 [INFO][4512] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 9 00:04:46.975323 containerd[1434]: 2025-05-09 00:04:46.911 [INFO][4512] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 9 00:04:46.975323 containerd[1434]: 2025-05-09 00:04:46.912 [INFO][4512] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.467d58b21a4247d9cb7327422e7a8dd77978420c4e8afc603e980196b42894a0" host="localhost" May 9 00:04:46.975323 containerd[1434]: 2025-05-09 00:04:46.920 [INFO][4512] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.467d58b21a4247d9cb7327422e7a8dd77978420c4e8afc603e980196b42894a0 May 9 00:04:46.975323 containerd[1434]: 2025-05-09 00:04:46.925 [INFO][4512] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.467d58b21a4247d9cb7327422e7a8dd77978420c4e8afc603e980196b42894a0" host="localhost" May 9 00:04:46.975323 containerd[1434]: 2025-05-09 00:04:46.934 [INFO][4512] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.467d58b21a4247d9cb7327422e7a8dd77978420c4e8afc603e980196b42894a0" host="localhost" May 9 00:04:46.975323 containerd[1434]: 2025-05-09 00:04:46.935 [INFO][4512] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.467d58b21a4247d9cb7327422e7a8dd77978420c4e8afc603e980196b42894a0" host="localhost" May 9 00:04:46.975323 containerd[1434]: 2025-05-09 00:04:46.935 [INFO][4512] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 9 00:04:46.975323 containerd[1434]: 2025-05-09 00:04:46.935 [INFO][4512] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="467d58b21a4247d9cb7327422e7a8dd77978420c4e8afc603e980196b42894a0" HandleID="k8s-pod-network.467d58b21a4247d9cb7327422e7a8dd77978420c4e8afc603e980196b42894a0" Workload="localhost-k8s-calico--apiserver--6f76859ff6--z8gcd-eth0" May 9 00:04:46.975853 containerd[1434]: 2025-05-09 00:04:46.940 [INFO][4472] cni-plugin/k8s.go 386: Populated endpoint ContainerID="467d58b21a4247d9cb7327422e7a8dd77978420c4e8afc603e980196b42894a0" Namespace="calico-apiserver" Pod="calico-apiserver-6f76859ff6-z8gcd" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f76859ff6--z8gcd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f76859ff6--z8gcd-eth0", GenerateName:"calico-apiserver-6f76859ff6-", Namespace:"calico-apiserver", SelfLink:"", UID:"1dfd2307-f708-4433-a5bd-771cc97fedba", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 4, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f76859ff6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6f76859ff6-z8gcd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0ad11ed22e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:04:46.975853 containerd[1434]: 2025-05-09 00:04:46.941 [INFO][4472] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="467d58b21a4247d9cb7327422e7a8dd77978420c4e8afc603e980196b42894a0" Namespace="calico-apiserver" Pod="calico-apiserver-6f76859ff6-z8gcd" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f76859ff6--z8gcd-eth0" May 9 00:04:46.975853 containerd[1434]: 2025-05-09 00:04:46.941 [INFO][4472] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0ad11ed22e0 ContainerID="467d58b21a4247d9cb7327422e7a8dd77978420c4e8afc603e980196b42894a0" Namespace="calico-apiserver" Pod="calico-apiserver-6f76859ff6-z8gcd" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f76859ff6--z8gcd-eth0" May 9 00:04:46.975853 containerd[1434]: 2025-05-09 00:04:46.945 [INFO][4472] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="467d58b21a4247d9cb7327422e7a8dd77978420c4e8afc603e980196b42894a0" Namespace="calico-apiserver" Pod="calico-apiserver-6f76859ff6-z8gcd" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f76859ff6--z8gcd-eth0" May 9 00:04:46.975853 containerd[1434]: 2025-05-09 00:04:46.950 [INFO][4472] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="467d58b21a4247d9cb7327422e7a8dd77978420c4e8afc603e980196b42894a0" 
Namespace="calico-apiserver" Pod="calico-apiserver-6f76859ff6-z8gcd" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f76859ff6--z8gcd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f76859ff6--z8gcd-eth0", GenerateName:"calico-apiserver-6f76859ff6-", Namespace:"calico-apiserver", SelfLink:"", UID:"1dfd2307-f708-4433-a5bd-771cc97fedba", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 4, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f76859ff6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"467d58b21a4247d9cb7327422e7a8dd77978420c4e8afc603e980196b42894a0", Pod:"calico-apiserver-6f76859ff6-z8gcd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0ad11ed22e0", MAC:"c6:d3:8b:41:72:ec", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:04:46.975853 containerd[1434]: 2025-05-09 00:04:46.971 [INFO][4472] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="467d58b21a4247d9cb7327422e7a8dd77978420c4e8afc603e980196b42894a0" Namespace="calico-apiserver" Pod="calico-apiserver-6f76859ff6-z8gcd" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f76859ff6--z8gcd-eth0" May 9 00:04:47.016619 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 9 00:04:47.017090 containerd[1434]: time="2025-05-09T00:04:47.016479752Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:04:47.017090 containerd[1434]: time="2025-05-09T00:04:47.016555606Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:04:47.017090 containerd[1434]: time="2025-05-09T00:04:47.016584011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:04:47.017542 containerd[1434]: time="2025-05-09T00:04:47.017434800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:04:47.045186 systemd[1]: Started cri-containerd-467d58b21a4247d9cb7327422e7a8dd77978420c4e8afc603e980196b42894a0.scope - libcontainer container 467d58b21a4247d9cb7327422e7a8dd77978420c4e8afc603e980196b42894a0. 
May 9 00:04:47.047829 systemd-networkd[1372]: cali0a81fb037e3: Link UP May 9 00:04:47.051477 containerd[1434]: time="2025-05-09T00:04:47.049845784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sc8g8,Uid:cd2cc3f5-b622-427f-8e40-c278d97d553c,Namespace:calico-system,Attempt:1,} returns sandbox id \"8d26d0239bdbfe5387a1ae525cbbabe780915c8fbb6e2cdbaaca764fd4305126\"" May 9 00:04:47.050664 systemd-networkd[1372]: cali0a81fb037e3: Gained carrier May 9 00:04:47.071251 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 9 00:04:47.073141 containerd[1434]: 2025-05-09 00:04:46.665 [INFO][4488] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--qhkqc-eth0 coredns-6f6b679f8f- kube-system 61e5788f-c1c1-468b-a0b1-b88de0f0fb58 898 0 2025-05-09 00:04:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-qhkqc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0a81fb037e3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c87409f7839b6260c9d5fd15897ffbcff89cc01529fde6a709a4fcee351c2d4d" Namespace="kube-system" Pod="coredns-6f6b679f8f-qhkqc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--qhkqc-" May 9 00:04:47.073141 containerd[1434]: 2025-05-09 00:04:46.665 [INFO][4488] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c87409f7839b6260c9d5fd15897ffbcff89cc01529fde6a709a4fcee351c2d4d" Namespace="kube-system" Pod="coredns-6f6b679f8f-qhkqc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--qhkqc-eth0" May 9 00:04:47.073141 containerd[1434]: 2025-05-09 00:04:46.716 [INFO][4523] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c87409f7839b6260c9d5fd15897ffbcff89cc01529fde6a709a4fcee351c2d4d" HandleID="k8s-pod-network.c87409f7839b6260c9d5fd15897ffbcff89cc01529fde6a709a4fcee351c2d4d" Workload="localhost-k8s-coredns--6f6b679f8f--qhkqc-eth0" May 9 00:04:47.073141 containerd[1434]: 2025-05-09 00:04:46.774 [INFO][4523] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c87409f7839b6260c9d5fd15897ffbcff89cc01529fde6a709a4fcee351c2d4d" HandleID="k8s-pod-network.c87409f7839b6260c9d5fd15897ffbcff89cc01529fde6a709a4fcee351c2d4d" Workload="localhost-k8s-coredns--6f6b679f8f--qhkqc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000428f90), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-qhkqc", "timestamp":"2025-05-09 00:04:46.716927928 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 9 00:04:47.073141 containerd[1434]: 2025-05-09 00:04:46.774 [INFO][4523] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:04:47.073141 containerd[1434]: 2025-05-09 00:04:46.935 [INFO][4523] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 9 00:04:47.073141 containerd[1434]: 2025-05-09 00:04:46.935 [INFO][4523] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 9 00:04:47.073141 containerd[1434]: 2025-05-09 00:04:46.966 [INFO][4523] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c87409f7839b6260c9d5fd15897ffbcff89cc01529fde6a709a4fcee351c2d4d" host="localhost" May 9 00:04:47.073141 containerd[1434]: 2025-05-09 00:04:46.986 [INFO][4523] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 9 00:04:47.073141 containerd[1434]: 2025-05-09 00:04:47.004 [INFO][4523] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 9 00:04:47.073141 containerd[1434]: 2025-05-09 00:04:47.006 [INFO][4523] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 9 00:04:47.073141 containerd[1434]: 2025-05-09 00:04:47.010 [INFO][4523] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 9 00:04:47.073141 containerd[1434]: 2025-05-09 00:04:47.010 [INFO][4523] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c87409f7839b6260c9d5fd15897ffbcff89cc01529fde6a709a4fcee351c2d4d" host="localhost" May 9 00:04:47.073141 containerd[1434]: 2025-05-09 00:04:47.015 [INFO][4523] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c87409f7839b6260c9d5fd15897ffbcff89cc01529fde6a709a4fcee351c2d4d May 9 00:04:47.073141 containerd[1434]: 2025-05-09 00:04:47.021 [INFO][4523] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c87409f7839b6260c9d5fd15897ffbcff89cc01529fde6a709a4fcee351c2d4d" host="localhost" May 9 00:04:47.073141 containerd[1434]: 2025-05-09 00:04:47.031 [INFO][4523] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.c87409f7839b6260c9d5fd15897ffbcff89cc01529fde6a709a4fcee351c2d4d" host="localhost" May 9 00:04:47.073141 containerd[1434]: 2025-05-09 00:04:47.031 [INFO][4523] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.c87409f7839b6260c9d5fd15897ffbcff89cc01529fde6a709a4fcee351c2d4d" host="localhost" May 9 00:04:47.073141 containerd[1434]: 2025-05-09 00:04:47.031 [INFO][4523] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 9 00:04:47.073141 containerd[1434]: 2025-05-09 00:04:47.031 [INFO][4523] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="c87409f7839b6260c9d5fd15897ffbcff89cc01529fde6a709a4fcee351c2d4d" HandleID="k8s-pod-network.c87409f7839b6260c9d5fd15897ffbcff89cc01529fde6a709a4fcee351c2d4d" Workload="localhost-k8s-coredns--6f6b679f8f--qhkqc-eth0" May 9 00:04:47.073774 containerd[1434]: 2025-05-09 00:04:47.037 [INFO][4488] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c87409f7839b6260c9d5fd15897ffbcff89cc01529fde6a709a4fcee351c2d4d" Namespace="kube-system" Pod="coredns-6f6b679f8f-qhkqc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--qhkqc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--qhkqc-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"61e5788f-c1c1-468b-a0b1-b88de0f0fb58", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 4, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-qhkqc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0a81fb037e3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:04:47.073774 containerd[1434]: 2025-05-09 00:04:47.038 [INFO][4488] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="c87409f7839b6260c9d5fd15897ffbcff89cc01529fde6a709a4fcee351c2d4d" Namespace="kube-system" Pod="coredns-6f6b679f8f-qhkqc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--qhkqc-eth0" May 9 00:04:47.073774 containerd[1434]: 2025-05-09 00:04:47.038 [INFO][4488] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0a81fb037e3 ContainerID="c87409f7839b6260c9d5fd15897ffbcff89cc01529fde6a709a4fcee351c2d4d" Namespace="kube-system" Pod="coredns-6f6b679f8f-qhkqc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--qhkqc-eth0" May 9 00:04:47.073774 containerd[1434]: 2025-05-09 00:04:47.048 [INFO][4488] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c87409f7839b6260c9d5fd15897ffbcff89cc01529fde6a709a4fcee351c2d4d" Namespace="kube-system" Pod="coredns-6f6b679f8f-qhkqc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--qhkqc-eth0" May 9 00:04:47.073774 containerd[1434]: 2025-05-09 00:04:47.051 [INFO][4488] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c87409f7839b6260c9d5fd15897ffbcff89cc01529fde6a709a4fcee351c2d4d" Namespace="kube-system" Pod="coredns-6f6b679f8f-qhkqc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--qhkqc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--qhkqc-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"61e5788f-c1c1-468b-a0b1-b88de0f0fb58", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 4, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c87409f7839b6260c9d5fd15897ffbcff89cc01529fde6a709a4fcee351c2d4d", Pod:"coredns-6f6b679f8f-qhkqc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0a81fb037e3", MAC:"16:2f:3f:41:0e:e7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:04:47.073774 containerd[1434]: 2025-05-09 00:04:47.067 [INFO][4488] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c87409f7839b6260c9d5fd15897ffbcff89cc01529fde6a709a4fcee351c2d4d" Namespace="kube-system" Pod="coredns-6f6b679f8f-qhkqc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--qhkqc-eth0" May 9 00:04:47.106511 containerd[1434]: time="2025-05-09T00:04:47.106132810Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:04:47.106511 containerd[1434]: time="2025-05-09T00:04:47.106205863Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:04:47.106511 containerd[1434]: time="2025-05-09T00:04:47.106230587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:04:47.106511 containerd[1434]: time="2025-05-09T00:04:47.106350809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:04:47.128416 containerd[1434]: time="2025-05-09T00:04:47.128378685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f76859ff6-z8gcd,Uid:1dfd2307-f708-4433-a5bd-771cc97fedba,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"467d58b21a4247d9cb7327422e7a8dd77978420c4e8afc603e980196b42894a0\"" May 9 00:04:47.133187 systemd[1]: Started cri-containerd-c87409f7839b6260c9d5fd15897ffbcff89cc01529fde6a709a4fcee351c2d4d.scope - libcontainer container c87409f7839b6260c9d5fd15897ffbcff89cc01529fde6a709a4fcee351c2d4d. May 9 00:04:47.138243 containerd[1434]: time="2025-05-09T00:04:47.138126721Z" level=info msg="CreateContainer within sandbox \"467d58b21a4247d9cb7327422e7a8dd77978420c4e8afc603e980196b42894a0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 9 00:04:47.152219 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 9 00:04:47.180063 containerd[1434]: time="2025-05-09T00:04:47.180023694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qhkqc,Uid:61e5788f-c1c1-468b-a0b1-b88de0f0fb58,Namespace:kube-system,Attempt:1,} returns sandbox id \"c87409f7839b6260c9d5fd15897ffbcff89cc01529fde6a709a4fcee351c2d4d\"" May 9 00:04:47.182455 kubelet[2467]: E0509 00:04:47.182307 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:04:47.185499 containerd[1434]: time="2025-05-09T00:04:47.185361993Z" level=info msg="CreateContainer within sandbox \"c87409f7839b6260c9d5fd15897ffbcff89cc01529fde6a709a4fcee351c2d4d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 9 00:04:47.218208 containerd[1434]: time="2025-05-09T00:04:47.218150124Z" level=info msg="CreateContainer within sandbox \"467d58b21a4247d9cb7327422e7a8dd77978420c4e8afc603e980196b42894a0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a2e5bfc9f1c08ea9f17d6f72db95101a373c5d8548fdd8a0aca2b06bec176351\"" May 9 00:04:47.218893 containerd[1434]: time="2025-05-09T00:04:47.218818281Z" level=info msg="StartContainer for \"a2e5bfc9f1c08ea9f17d6f72db95101a373c5d8548fdd8a0aca2b06bec176351\"" May 9 00:04:47.233846 containerd[1434]: time="2025-05-09T00:04:47.233787796Z" level=info msg="CreateContainer within sandbox \"c87409f7839b6260c9d5fd15897ffbcff89cc01529fde6a709a4fcee351c2d4d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d3d1b5deeb7f7fb72e16d98f52aac218b1c967420916b9a1e30a51eaece57b56\"" May 9 00:04:47.235061 containerd[1434]: time="2025-05-09T00:04:47.235022893Z" level=info msg="StartContainer for \"d3d1b5deeb7f7fb72e16d98f52aac218b1c967420916b9a1e30a51eaece57b56\"" May 9 00:04:47.274507 containerd[1434]: time="2025-05-09T00:04:47.274434909Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:04:47.275377 containerd[1434]: time="2025-05-09T00:04:47.275345629Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116" May 9 00:04:47.277234 containerd[1434]: time="2025-05-09T00:04:47.276865217Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" May 9 00:04:47.280169 containerd[1434]: time="2025-05-09T00:04:47.280132272Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:04:47.280452 containerd[1434]: time="2025-05-09T00:04:47.280416642Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 1.952203444s" May 9 00:04:47.280516 containerd[1434]: time="2025-05-09T00:04:47.280452328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" May 9 00:04:47.284410 containerd[1434]: time="2025-05-09T00:04:47.284381100Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 9 00:04:47.293496 systemd[1]: Started cri-containerd-a2e5bfc9f1c08ea9f17d6f72db95101a373c5d8548fdd8a0aca2b06bec176351.scope - libcontainer container a2e5bfc9f1c08ea9f17d6f72db95101a373c5d8548fdd8a0aca2b06bec176351. May 9 00:04:47.295716 containerd[1434]: time="2025-05-09T00:04:47.295595633Z" level=info msg="CreateContainer within sandbox \"5d7fc8cd15b235ea16344681d60e00240db1bbc2b40cafb69eace32f0ac019d9\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 9 00:04:47.306199 systemd[1]: Started cri-containerd-d3d1b5deeb7f7fb72e16d98f52aac218b1c967420916b9a1e30a51eaece57b56.scope - libcontainer container d3d1b5deeb7f7fb72e16d98f52aac218b1c967420916b9a1e30a51eaece57b56. May 9 00:04:47.314228 containerd[1434]: time="2025-05-09T00:04:47.314075645Z" level=info msg="CreateContainer within sandbox \"5d7fc8cd15b235ea16344681d60e00240db1bbc2b40cafb69eace32f0ac019d9\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"2ff8b357e8cdcc976faba7658218ad73dda2405790ff15562d505a88fbccf6c1\"" May 9 00:04:47.315522 containerd[1434]: time="2025-05-09T00:04:47.314961001Z" level=info msg="StartContainer for \"2ff8b357e8cdcc976faba7658218ad73dda2405790ff15562d505a88fbccf6c1\"" May 9 00:04:47.338477 containerd[1434]: time="2025-05-09T00:04:47.338305310Z" level=info msg="StartContainer for \"d3d1b5deeb7f7fb72e16d98f52aac218b1c967420916b9a1e30a51eaece57b56\" returns successfully" May 9 00:04:47.344218 containerd[1434]: time="2025-05-09T00:04:47.343912576Z" level=info msg="StartContainer for \"a2e5bfc9f1c08ea9f17d6f72db95101a373c5d8548fdd8a0aca2b06bec176351\" returns successfully" May 9 00:04:47.368200 systemd[1]: Started cri-containerd-2ff8b357e8cdcc976faba7658218ad73dda2405790ff15562d505a88fbccf6c1.scope - libcontainer container 2ff8b357e8cdcc976faba7658218ad73dda2405790ff15562d505a88fbccf6c1. 
May 9 00:04:47.417945 containerd[1434]: time="2025-05-09T00:04:47.415890844Z" level=info msg="StartContainer for \"2ff8b357e8cdcc976faba7658218ad73dda2405790ff15562d505a88fbccf6c1\" returns successfully"
May 9 00:04:47.480094 kubelet[2467]: E0509 00:04:47.480059 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:04:47.494865 kubelet[2467]: I0509 00:04:47.494320 2467 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 9 00:04:47.494865 kubelet[2467]: E0509 00:04:47.494783 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:04:47.498316 kubelet[2467]: I0509 00:04:47.498259 2467 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-qhkqc" podStartSLOduration=32.498242617 podStartE2EDuration="32.498242617s" podCreationTimestamp="2025-05-09 00:04:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:04:47.496808844 +0000 UTC m=+39.307478135" watchObservedRunningTime="2025-05-09 00:04:47.498242617 +0000 UTC m=+39.308911868"
May 9 00:04:47.528813 kubelet[2467]: I0509 00:04:47.528499 2467 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-cb8c949c8-7dmfb" podStartSLOduration=23.201397673 podStartE2EDuration="26.528480858s" podCreationTimestamp="2025-05-09 00:04:21 +0000 UTC" firstStartedPulling="2025-05-09 00:04:43.955711555 +0000 UTC m=+35.766380846" lastFinishedPulling="2025-05-09 00:04:47.28279474 +0000 UTC m=+39.093464031" observedRunningTime="2025-05-09 00:04:47.526603608 +0000 UTC m=+39.337272899" watchObservedRunningTime="2025-05-09 00:04:47.528480858 +0000 UTC m=+39.339150109"
May 9 00:04:47.543081 kubelet[2467]: I0509 00:04:47.542924 2467 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6f76859ff6-z8gcd" podStartSLOduration=27.542905597 podStartE2EDuration="27.542905597s" podCreationTimestamp="2025-05-09 00:04:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:04:47.5425855 +0000 UTC m=+39.353254791" watchObservedRunningTime="2025-05-09 00:04:47.542905597 +0000 UTC m=+39.353574848"
May 9 00:04:47.569150 systemd-networkd[1372]: cali28a8284eb67: Gained IPv6LL
May 9 00:04:48.081491 systemd-networkd[1372]: cali51d7891b935: Gained IPv6LL
May 9 00:04:48.273483 systemd-networkd[1372]: cali0ad11ed22e0: Gained IPv6LL
May 9 00:04:48.496108 kubelet[2467]: I0509 00:04:48.496070 2467 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 9 00:04:48.497402 kubelet[2467]: E0509 00:04:48.497244 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:04:48.497402 kubelet[2467]: E0509 00:04:48.497302 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:04:48.849389 systemd-networkd[1372]: cali0a81fb037e3: Gained IPv6LL
May 9 00:04:49.037227 systemd[1]: Started sshd@9-10.0.0.43:22-10.0.0.1:44116.service - OpenSSH per-connection server daemon (10.0.0.1:44116).
May 9 00:04:49.091576 sshd[4851]: Accepted publickey for core from 10.0.0.1 port 44116 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA
May 9 00:04:49.093525 sshd[4851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:04:49.100056 systemd-logind[1421]: New session 10 of user core.
May 9 00:04:49.109244 systemd[1]: Started session-10.scope - Session 10 of User core.
May 9 00:04:49.360401 sshd[4851]: pam_unix(sshd:session): session closed for user core
May 9 00:04:49.370965 systemd[1]: sshd@9-10.0.0.43:22-10.0.0.1:44116.service: Deactivated successfully.
May 9 00:04:49.372632 systemd[1]: session-10.scope: Deactivated successfully.
May 9 00:04:49.377086 systemd-logind[1421]: Session 10 logged out. Waiting for processes to exit.
May 9 00:04:49.388312 systemd[1]: Started sshd@10-10.0.0.43:22-10.0.0.1:44124.service - OpenSSH per-connection server daemon (10.0.0.1:44124).
May 9 00:04:49.390427 systemd-logind[1421]: Removed session 10.
May 9 00:04:49.419412 sshd[4866]: Accepted publickey for core from 10.0.0.1 port 44124 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA
May 9 00:04:49.420743 sshd[4866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:04:49.424944 systemd-logind[1421]: New session 11 of user core.
May 9 00:04:49.434158 systemd[1]: Started session-11.scope - Session 11 of User core.
May 9 00:04:49.498250 kubelet[2467]: E0509 00:04:49.498172 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:04:49.709621 sshd[4866]: pam_unix(sshd:session): session closed for user core
May 9 00:04:49.725921 systemd[1]: sshd@10-10.0.0.43:22-10.0.0.1:44124.service: Deactivated successfully.
May 9 00:04:49.727446 systemd[1]: session-11.scope: Deactivated successfully.
May 9 00:04:49.730078 systemd-logind[1421]: Session 11 logged out. Waiting for processes to exit.
May 9 00:04:49.744421 systemd[1]: Started sshd@11-10.0.0.43:22-10.0.0.1:44126.service - OpenSSH per-connection server daemon (10.0.0.1:44126).
May 9 00:04:49.745251 containerd[1434]: time="2025-05-09T00:04:49.744857820Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 00:04:49.748795 systemd-logind[1421]: Removed session 11.
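A consistency check on the pod_startup_latency_tracker entries above, assuming (as the field names suggest) that the SLO figure is the end-to-end time minus the image-pull window: for calico-kube-controllers-cb8c949c8-7dmfb, observedRunningTime minus podCreationTimestamp is 00:04:47.528480858 - 00:04:21 = 26.528480858s, matching podStartE2EDuration; subtracting the pull window, 26.528480858s - (00:04:47.28279474 - 00:04:43.955711555) = 26.528480858s - 3.327083185s = 23.201397673s, matching podStartSLOduration. Pods whose pull timestamps are the zero value (0001-01-01), such as coredns-6f6b679f8f-qhkqc, accordingly report identical SLO and E2E durations.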
May 9 00:04:49.749136 containerd[1434]: time="2025-05-09T00:04:49.748803921Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935"
May 9 00:04:49.749684 containerd[1434]: time="2025-05-09T00:04:49.749622098Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 00:04:49.752772 containerd[1434]: time="2025-05-09T00:04:49.751825828Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 00:04:49.753409 containerd[1434]: time="2025-05-09T00:04:49.753377008Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 2.468852763s"
May 9 00:04:49.753520 containerd[1434]: time="2025-05-09T00:04:49.753502829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\""
May 9 00:04:49.756264 containerd[1434]: time="2025-05-09T00:04:49.756218204Z" level=info msg="CreateContainer within sandbox \"8d26d0239bdbfe5387a1ae525cbbabe780915c8fbb6e2cdbaaca764fd4305126\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
May 9 00:04:49.784130 containerd[1434]: time="2025-05-09T00:04:49.784079994Z" level=info msg="CreateContainer within sandbox \"8d26d0239bdbfe5387a1ae525cbbabe780915c8fbb6e2cdbaaca764fd4305126\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1481a5f1e71e459ac15e074f72e005ab45f10d21a877dcee578ed9380312884a\""
May 9 00:04:49.786805 containerd[1434]: time="2025-05-09T00:04:49.785225906Z" level=info msg="StartContainer for \"1481a5f1e71e459ac15e074f72e005ab45f10d21a877dcee578ed9380312884a\""
May 9 00:04:49.790810 sshd[4883]: Accepted publickey for core from 10.0.0.1 port 44126 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA
May 9 00:04:49.792432 sshd[4883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:04:49.797481 systemd-logind[1421]: New session 12 of user core.
May 9 00:04:49.807188 systemd[1]: Started session-12.scope - Session 12 of User core.
May 9 00:04:49.819168 systemd[1]: Started cri-containerd-1481a5f1e71e459ac15e074f72e005ab45f10d21a877dcee578ed9380312884a.scope - libcontainer container 1481a5f1e71e459ac15e074f72e005ab45f10d21a877dcee578ed9380312884a.
May 9 00:04:49.845207 containerd[1434]: time="2025-05-09T00:04:49.844540449Z" level=info msg="StartContainer for \"1481a5f1e71e459ac15e074f72e005ab45f10d21a877dcee578ed9380312884a\" returns successfully"
May 9 00:04:49.847592 containerd[1434]: time="2025-05-09T00:04:49.846837554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\""
May 9 00:04:50.016194 sshd[4883]: pam_unix(sshd:session): session closed for user core
May 9 00:04:50.025510 systemd[1]: sshd@11-10.0.0.43:22-10.0.0.1:44126.service: Deactivated successfully.
May 9 00:04:50.027391 systemd[1]: session-12.scope: Deactivated successfully.
May 9 00:04:50.028023 systemd-logind[1421]: Session 12 logged out. Waiting for processes to exit.
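The csi pull above also gives a rough transfer rate: if the "bytes read=7474935" figure reflects what actually came over the network (an assumption; the "size" field of 8844117 appears to be the image's registry size rather than the transfer count), then 7474935 B / 2.468852763 s is roughly 3.0 MB/s for this fetch from ghcr.io.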
May 9 00:04:50.028922 systemd-logind[1421]: Removed session 12.
May 9 00:04:51.752353 containerd[1434]: time="2025-05-09T00:04:51.752298894Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 00:04:51.753310 containerd[1434]: time="2025-05-09T00:04:51.753211320Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299"
May 9 00:04:51.753927 containerd[1434]: time="2025-05-09T00:04:51.753896830Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 00:04:51.756373 containerd[1434]: time="2025-05-09T00:04:51.756339181Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 00:04:51.757021 containerd[1434]: time="2025-05-09T00:04:51.756952840Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 1.910065558s"
May 9 00:04:51.757021 containerd[1434]: time="2025-05-09T00:04:51.757001528Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\""
May 9 00:04:51.760813 containerd[1434]: time="2025-05-09T00:04:51.760759850Z" level=info msg="CreateContainer within sandbox \"8d26d0239bdbfe5387a1ae525cbbabe780915c8fbb6e2cdbaaca764fd4305126\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
May 9 00:04:51.778236 containerd[1434]: time="2025-05-09T00:04:51.778093468Z" level=info msg="CreateContainer within sandbox \"8d26d0239bdbfe5387a1ae525cbbabe780915c8fbb6e2cdbaaca764fd4305126\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a88e70ed89a10ed68b9fe6c3ddeb2c70e75229dc299f2c088e7eb78ba8a428d7\""
May 9 00:04:51.780230 containerd[1434]: time="2025-05-09T00:04:51.779278178Z" level=info msg="StartContainer for \"a88e70ed89a10ed68b9fe6c3ddeb2c70e75229dc299f2c088e7eb78ba8a428d7\""
May 9 00:04:51.813302 systemd[1]: Started cri-containerd-a88e70ed89a10ed68b9fe6c3ddeb2c70e75229dc299f2c088e7eb78ba8a428d7.scope - libcontainer container a88e70ed89a10ed68b9fe6c3ddeb2c70e75229dc299f2c088e7eb78ba8a428d7.
May 9 00:04:51.850553 containerd[1434]: time="2025-05-09T00:04:51.850503514Z" level=info msg="StartContainer for \"a88e70ed89a10ed68b9fe6c3ddeb2c70e75229dc299f2c088e7eb78ba8a428d7\" returns successfully"
May 9 00:04:52.401229 kubelet[2467]: I0509 00:04:52.401178 2467 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
May 9 00:04:52.403495 kubelet[2467]: I0509 00:04:52.403464 2467 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
May 9 00:04:52.531352 kubelet[2467]: I0509 00:04:52.530348 2467 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-sc8g8" podStartSLOduration=26.824131395 podStartE2EDuration="31.530328553s" podCreationTimestamp="2025-05-09 00:04:21 +0000 UTC" firstStartedPulling="2025-05-09 00:04:47.051860099 +0000 UTC m=+38.862529390" lastFinishedPulling="2025-05-09 00:04:51.758057257 +0000 UTC m=+43.568726548" observedRunningTime="2025-05-09 00:04:52.529775466 +0000 UTC m=+44.340444757" watchObservedRunningTime="2025-05-09 00:04:52.530328553 +0000 UTC m=+44.340997844"
May 9 00:04:55.036079 systemd[1]: Started sshd@12-10.0.0.43:22-10.0.0.1:51946.service - OpenSSH per-connection server daemon (10.0.0.1:51946).
May 9 00:04:55.083262 sshd[4981]: Accepted publickey for core from 10.0.0.1 port 51946 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA
May 9 00:04:55.084942 sshd[4981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:04:55.090532 systemd-logind[1421]: New session 13 of user core.
May 9 00:04:55.102203 systemd[1]: Started session-13.scope - Session 13 of User core.
May 9 00:04:55.314243 sshd[4981]: pam_unix(sshd:session): session closed for user core
May 9 00:04:55.322665 systemd[1]: sshd@12-10.0.0.43:22-10.0.0.1:51946.service: Deactivated successfully.
May 9 00:04:55.325589 systemd[1]: session-13.scope: Deactivated successfully.
May 9 00:04:55.327086 systemd-logind[1421]: Session 13 logged out. Waiting for processes to exit.
May 9 00:04:55.335530 systemd[1]: Started sshd@13-10.0.0.43:22-10.0.0.1:51962.service - OpenSSH per-connection server daemon (10.0.0.1:51962).
May 9 00:04:55.337218 systemd-logind[1421]: Removed session 13.
May 9 00:04:55.367399 sshd[4995]: Accepted publickey for core from 10.0.0.1 port 51962 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA
May 9 00:04:55.368923 sshd[4995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:04:55.375511 systemd-logind[1421]: New session 14 of user core.
May 9 00:04:55.381146 systemd[1]: Started session-14.scope - Session 14 of User core.
May 9 00:04:55.582057 sshd[4995]: pam_unix(sshd:session): session closed for user core
May 9 00:04:55.597023 systemd[1]: sshd@13-10.0.0.43:22-10.0.0.1:51962.service: Deactivated successfully.
May 9 00:04:55.599067 systemd[1]: session-14.scope: Deactivated successfully.
May 9 00:04:55.600651 systemd-logind[1421]: Session 14 logged out. Waiting for processes to exit.
May 9 00:04:55.601931 systemd[1]: Started sshd@14-10.0.0.43:22-10.0.0.1:51978.service - OpenSSH per-connection server daemon (10.0.0.1:51978).
May 9 00:04:55.605301 systemd-logind[1421]: Removed session 14.
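The two csi_plugin.go lines above record kubelet's plugin-registration handshake for Calico's CSI driver: the socket at /var/lib/kubelet/plugins/csi.tigera.io/csi.sock is first validated (driver name csi.tigera.io, advertised version 1.0.0) and then registered, and the pod_startup_latency_tracker entry that follows records csi-node-driver-sc8g8 as running shortly afterwards.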
May 9 00:04:55.655914 sshd[5007]: Accepted publickey for core from 10.0.0.1 port 51978 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA
May 9 00:04:55.657194 sshd[5007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:04:55.662157 systemd-logind[1421]: New session 15 of user core.
May 9 00:04:55.673166 systemd[1]: Started session-15.scope - Session 15 of User core.
May 9 00:04:57.061718 sshd[5007]: pam_unix(sshd:session): session closed for user core
May 9 00:04:57.071584 systemd[1]: sshd@14-10.0.0.43:22-10.0.0.1:51978.service: Deactivated successfully.
May 9 00:04:57.078438 systemd[1]: session-15.scope: Deactivated successfully.
May 9 00:04:57.080249 systemd-logind[1421]: Session 15 logged out. Waiting for processes to exit.
May 9 00:04:57.093316 systemd[1]: Started sshd@15-10.0.0.43:22-10.0.0.1:51994.service - OpenSSH per-connection server daemon (10.0.0.1:51994).
May 9 00:04:57.099392 systemd-logind[1421]: Removed session 15.
May 9 00:04:57.129941 sshd[5033]: Accepted publickey for core from 10.0.0.1 port 51994 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA
May 9 00:04:57.131427 sshd[5033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:04:57.135709 systemd-logind[1421]: New session 16 of user core.
May 9 00:04:57.149213 systemd[1]: Started session-16.scope - Session 16 of User core.
May 9 00:04:57.513689 sshd[5033]: pam_unix(sshd:session): session closed for user core
May 9 00:04:57.522594 systemd[1]: sshd@15-10.0.0.43:22-10.0.0.1:51994.service: Deactivated successfully.
May 9 00:04:57.525656 systemd[1]: session-16.scope: Deactivated successfully.
May 9 00:04:57.529471 systemd-logind[1421]: Session 16 logged out. Waiting for processes to exit.
May 9 00:04:57.537676 systemd[1]: Started sshd@16-10.0.0.43:22-10.0.0.1:52010.service - OpenSSH per-connection server daemon (10.0.0.1:52010).
May 9 00:04:57.541123 systemd-logind[1421]: Removed session 16.
May 9 00:04:57.571054 sshd[5047]: Accepted publickey for core from 10.0.0.1 port 52010 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA
May 9 00:04:57.572230 sshd[5047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:04:57.576367 systemd-logind[1421]: New session 17 of user core.
May 9 00:04:57.584178 systemd[1]: Started session-17.scope - Session 17 of User core.
May 9 00:04:57.712769 sshd[5047]: pam_unix(sshd:session): session closed for user core
May 9 00:04:57.716524 systemd[1]: sshd@16-10.0.0.43:22-10.0.0.1:52010.service: Deactivated successfully.
May 9 00:04:57.719604 systemd[1]: session-17.scope: Deactivated successfully.
May 9 00:04:57.720519 systemd-logind[1421]: Session 17 logged out. Waiting for processes to exit.
May 9 00:04:57.721346 systemd-logind[1421]: Removed session 17.
May 9 00:05:02.724905 systemd[1]: Started sshd@17-10.0.0.43:22-10.0.0.1:57850.service - OpenSSH per-connection server daemon (10.0.0.1:57850).
May 9 00:05:02.758759 sshd[5065]: Accepted publickey for core from 10.0.0.1 port 57850 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA
May 9 00:05:02.760060 sshd[5065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:05:02.763883 systemd-logind[1421]: New session 18 of user core.
May 9 00:05:02.771162 systemd[1]: Started session-18.scope - Session 18 of User core.
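Sessions 10 through 18 above all follow the same open/close pattern, so per-session durations can be recovered by pairing the systemd-logind lines. A small Go sketch under the same one-entry-per-line assumption as the earlier example; it parses only the time of day, which assumes all entries fall on a single day, as they do in this log:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

// Matches logind lines such as:
//   "... systemd-logind[1421]: New session 13 of user core."
//   "... systemd-logind[1421]: Session 13 logged out. ..."
var (
	opened = regexp.MustCompile(`(\d{2}:\d{2}:\d{2}\.\d{6}) systemd-logind\[\d+\]: New session (\d+) `)
	closed = regexp.MustCompile(`(\d{2}:\d{2}:\d{2}\.\d{6}) systemd-logind\[\d+\]: Session (\d+) logged out`)
)

func main() {
	start := map[string]time.Time{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20)
	for sc.Scan() {
		line := sc.Text()
		if m := opened.FindStringSubmatch(line); m != nil {
			if t, err := time.Parse("15:04:05.000000", m[1]); err == nil {
				start[m[2]] = t
			}
		} else if m := closed.FindStringSubmatch(line); m != nil {
			if t, err := time.Parse("15:04:05.000000", m[1]); err == nil {
				if t0, ok := start[m[2]]; ok {
					fmt.Printf("session %s lasted %v\n", m[2], t.Sub(t0))
				}
			}
		}
	}
}

On this section it would report, for example, session 15 lasting roughly 1.4 seconds (00:04:55.662157 to 00:04:57.080249).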
May 9 00:05:02.892465 sshd[5065]: pam_unix(sshd:session): session closed for user core
May 9 00:05:02.895874 systemd[1]: sshd@17-10.0.0.43:22-10.0.0.1:57850.service: Deactivated successfully.
May 9 00:05:02.897707 systemd[1]: session-18.scope: Deactivated successfully.
May 9 00:05:02.898423 systemd-logind[1421]: Session 18 logged out. Waiting for processes to exit.
May 9 00:05:02.899166 systemd-logind[1421]: Removed session 18.
May 9 00:05:03.390081 kubelet[2467]: E0509 00:05:03.389404 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:05:06.135312 kubelet[2467]: I0509 00:05:06.134631 2467 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 9 00:05:07.905974 systemd[1]: Started sshd@18-10.0.0.43:22-10.0.0.1:57854.service - OpenSSH per-connection server daemon (10.0.0.1:57854).
May 9 00:05:07.947943 sshd[5105]: Accepted publickey for core from 10.0.0.1 port 57854 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA
May 9 00:05:07.950399 sshd[5105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:05:07.954798 systemd-logind[1421]: New session 19 of user core.
May 9 00:05:07.964166 systemd[1]: Started session-19.scope - Session 19 of User core.
May 9 00:05:08.135928 sshd[5105]: pam_unix(sshd:session): session closed for user core
May 9 00:05:08.139435 systemd[1]: sshd@18-10.0.0.43:22-10.0.0.1:57854.service: Deactivated successfully.
May 9 00:05:08.143358 systemd[1]: session-19.scope: Deactivated successfully.
May 9 00:05:08.145099 systemd-logind[1421]: Session 19 logged out. Waiting for processes to exit.
May 9 00:05:08.146624 systemd-logind[1421]: Removed session 19.
May 9 00:05:08.294868 containerd[1434]: time="2025-05-09T00:05:08.294576637Z" level=info msg="StopPodSandbox for \"25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6\""
May 9 00:05:08.387047 containerd[1434]: 2025-05-09 00:05:08.348 [WARNING][5135] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP.
ContainerID="25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--hhhps-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"449f9335-cf62-43ca-b5aa-48636c5af7c8", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 4, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6449291265cf697b6c7ebeef1241a5af82e18950aca25e855f4674262242820b", Pod:"coredns-6f6b679f8f-hhhps", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali28a8284eb67", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:05:08.387047 containerd[1434]: 2025-05-09 00:05:08.349 [INFO][5135] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6" May 9 00:05:08.387047 containerd[1434]: 2025-05-09 00:05:08.349 [INFO][5135] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6" iface="eth0" netns="" May 9 00:05:08.387047 containerd[1434]: 2025-05-09 00:05:08.349 [INFO][5135] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6" May 9 00:05:08.387047 containerd[1434]: 2025-05-09 00:05:08.349 [INFO][5135] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6" May 9 00:05:08.387047 containerd[1434]: 2025-05-09 00:05:08.373 [INFO][5144] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6" HandleID="k8s-pod-network.25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6" Workload="localhost-k8s-coredns--6f6b679f8f--hhhps-eth0" May 9 00:05:08.387047 containerd[1434]: 2025-05-09 00:05:08.373 [INFO][5144] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:05:08.387047 containerd[1434]: 2025-05-09 00:05:08.373 [INFO][5144] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:05:08.387047 containerd[1434]: 2025-05-09 00:05:08.381 [WARNING][5144] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6" HandleID="k8s-pod-network.25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6" Workload="localhost-k8s-coredns--6f6b679f8f--hhhps-eth0" May 9 00:05:08.387047 containerd[1434]: 2025-05-09 00:05:08.381 [INFO][5144] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6" HandleID="k8s-pod-network.25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6" Workload="localhost-k8s-coredns--6f6b679f8f--hhhps-eth0" May 9 00:05:08.387047 containerd[1434]: 2025-05-09 00:05:08.383 [INFO][5144] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:05:08.387047 containerd[1434]: 2025-05-09 00:05:08.385 [INFO][5135] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6" May 9 00:05:08.387047 containerd[1434]: time="2025-05-09T00:05:08.386881833Z" level=info msg="TearDown network for sandbox \"25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6\" successfully" May 9 00:05:08.387047 containerd[1434]: time="2025-05-09T00:05:08.386906556Z" level=info msg="StopPodSandbox for \"25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6\" returns successfully" May 9 00:05:08.387731 containerd[1434]: time="2025-05-09T00:05:08.387610204Z" level=info msg="RemovePodSandbox for \"25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6\"" May 9 00:05:08.396266 containerd[1434]: time="2025-05-09T00:05:08.396217237Z" level=info msg="Forcibly stopping sandbox \"25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6\"" May 9 00:05:08.473918 containerd[1434]: 2025-05-09 00:05:08.435 [WARNING][5168] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--hhhps-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"449f9335-cf62-43ca-b5aa-48636c5af7c8", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 4, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6449291265cf697b6c7ebeef1241a5af82e18950aca25e855f4674262242820b", Pod:"coredns-6f6b679f8f-hhhps", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali28a8284eb67", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:05:08.473918 containerd[1434]: 2025-05-09 00:05:08.435 [INFO][5168] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6" May 9 00:05:08.473918 containerd[1434]: 2025-05-09 00:05:08.435 [INFO][5168] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6" iface="eth0" netns="" May 9 00:05:08.473918 containerd[1434]: 2025-05-09 00:05:08.435 [INFO][5168] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6" May 9 00:05:08.473918 containerd[1434]: 2025-05-09 00:05:08.435 [INFO][5168] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6" May 9 00:05:08.473918 containerd[1434]: 2025-05-09 00:05:08.457 [INFO][5177] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6" HandleID="k8s-pod-network.25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6" Workload="localhost-k8s-coredns--6f6b679f8f--hhhps-eth0" May 9 00:05:08.473918 containerd[1434]: 2025-05-09 00:05:08.457 [INFO][5177] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:05:08.473918 containerd[1434]: 2025-05-09 00:05:08.457 [INFO][5177] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:05:08.473918 containerd[1434]: 2025-05-09 00:05:08.466 [WARNING][5177] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6" HandleID="k8s-pod-network.25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6" Workload="localhost-k8s-coredns--6f6b679f8f--hhhps-eth0" May 9 00:05:08.473918 containerd[1434]: 2025-05-09 00:05:08.466 [INFO][5177] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6" HandleID="k8s-pod-network.25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6" Workload="localhost-k8s-coredns--6f6b679f8f--hhhps-eth0" May 9 00:05:08.473918 containerd[1434]: 2025-05-09 00:05:08.470 [INFO][5177] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:05:08.473918 containerd[1434]: 2025-05-09 00:05:08.472 [INFO][5168] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6" May 9 00:05:08.474364 containerd[1434]: time="2025-05-09T00:05:08.473939534Z" level=info msg="TearDown network for sandbox \"25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6\" successfully" May 9 00:05:08.477067 containerd[1434]: time="2025-05-09T00:05:08.476816893Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 9 00:05:08.477067 containerd[1434]: time="2025-05-09T00:05:08.476957270Z" level=info msg="RemovePodSandbox \"25c830e2b873782c1b5fe1370290621929add9270a97823d05eab756951dccc6\" returns successfully" May 9 00:05:08.478072 containerd[1434]: time="2025-05-09T00:05:08.478042886Z" level=info msg="StopPodSandbox for \"f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227\"" May 9 00:05:08.567641 containerd[1434]: 2025-05-09 00:05:08.530 [WARNING][5200] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f76859ff6--dh6dh-eth0", GenerateName:"calico-apiserver-6f76859ff6-", Namespace:"calico-apiserver", SelfLink:"", UID:"4be101ac-b076-41a6-bc24-cc2df8624f75", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 4, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f76859ff6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4ae2b2b57d6c209f68b5e8230034d96450fd1a6762e3e0b6e32d92d37f03bb84", Pod:"calico-apiserver-6f76859ff6-dh6dh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali74d3ef5aec7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:05:08.567641 containerd[1434]: 2025-05-09 00:05:08.530 [INFO][5200] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227" May 9 00:05:08.567641 containerd[1434]: 2025-05-09 00:05:08.530 [INFO][5200] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227" iface="eth0" netns="" May 9 00:05:08.567641 containerd[1434]: 2025-05-09 00:05:08.530 [INFO][5200] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227" May 9 00:05:08.567641 containerd[1434]: 2025-05-09 00:05:08.530 [INFO][5200] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227" May 9 00:05:08.567641 containerd[1434]: 2025-05-09 00:05:08.551 [INFO][5209] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227" HandleID="k8s-pod-network.f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227" Workload="localhost-k8s-calico--apiserver--6f76859ff6--dh6dh-eth0" May 9 00:05:08.567641 containerd[1434]: 2025-05-09 00:05:08.551 [INFO][5209] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:05:08.567641 containerd[1434]: 2025-05-09 00:05:08.551 [INFO][5209] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:05:08.567641 containerd[1434]: 2025-05-09 00:05:08.562 [WARNING][5209] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227" HandleID="k8s-pod-network.f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227" Workload="localhost-k8s-calico--apiserver--6f76859ff6--dh6dh-eth0" May 9 00:05:08.567641 containerd[1434]: 2025-05-09 00:05:08.562 [INFO][5209] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227" HandleID="k8s-pod-network.f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227" Workload="localhost-k8s-calico--apiserver--6f76859ff6--dh6dh-eth0" May 9 00:05:08.567641 containerd[1434]: 2025-05-09 00:05:08.564 [INFO][5209] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:05:08.567641 containerd[1434]: 2025-05-09 00:05:08.565 [INFO][5200] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227" May 9 00:05:08.567641 containerd[1434]: time="2025-05-09T00:05:08.567494885Z" level=info msg="TearDown network for sandbox \"f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227\" successfully" May 9 00:05:08.567641 containerd[1434]: time="2025-05-09T00:05:08.567517048Z" level=info msg="StopPodSandbox for \"f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227\" returns successfully" May 9 00:05:08.569726 containerd[1434]: time="2025-05-09T00:05:08.568171209Z" level=info msg="RemovePodSandbox for \"f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227\"" May 9 00:05:08.569726 containerd[1434]: time="2025-05-09T00:05:08.568203333Z" level=info msg="Forcibly stopping sandbox \"f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227\"" May 9 00:05:08.646120 containerd[1434]: 2025-05-09 00:05:08.603 [WARNING][5232] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f76859ff6--dh6dh-eth0", GenerateName:"calico-apiserver-6f76859ff6-", Namespace:"calico-apiserver", SelfLink:"", UID:"4be101ac-b076-41a6-bc24-cc2df8624f75", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 4, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f76859ff6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4ae2b2b57d6c209f68b5e8230034d96450fd1a6762e3e0b6e32d92d37f03bb84", Pod:"calico-apiserver-6f76859ff6-dh6dh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali74d3ef5aec7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:05:08.646120 containerd[1434]: 2025-05-09 00:05:08.604 [INFO][5232] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227" May 9 00:05:08.646120 containerd[1434]: 2025-05-09 00:05:08.604 [INFO][5232] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227" iface="eth0" netns="" May 9 00:05:08.646120 containerd[1434]: 2025-05-09 00:05:08.604 [INFO][5232] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227" May 9 00:05:08.646120 containerd[1434]: 2025-05-09 00:05:08.604 [INFO][5232] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227" May 9 00:05:08.646120 containerd[1434]: 2025-05-09 00:05:08.628 [INFO][5240] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227" HandleID="k8s-pod-network.f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227" Workload="localhost-k8s-calico--apiserver--6f76859ff6--dh6dh-eth0" May 9 00:05:08.646120 containerd[1434]: 2025-05-09 00:05:08.628 [INFO][5240] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:05:08.646120 containerd[1434]: 2025-05-09 00:05:08.628 [INFO][5240] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:05:08.646120 containerd[1434]: 2025-05-09 00:05:08.638 [WARNING][5240] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227" HandleID="k8s-pod-network.f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227" Workload="localhost-k8s-calico--apiserver--6f76859ff6--dh6dh-eth0" May 9 00:05:08.646120 containerd[1434]: 2025-05-09 00:05:08.638 [INFO][5240] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227" HandleID="k8s-pod-network.f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227" Workload="localhost-k8s-calico--apiserver--6f76859ff6--dh6dh-eth0" May 9 00:05:08.646120 containerd[1434]: 2025-05-09 00:05:08.640 [INFO][5240] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:05:08.646120 containerd[1434]: 2025-05-09 00:05:08.643 [INFO][5232] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227" May 9 00:05:08.646522 containerd[1434]: time="2025-05-09T00:05:08.646168100Z" level=info msg="TearDown network for sandbox \"f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227\" successfully" May 9 00:05:08.649171 containerd[1434]: time="2025-05-09T00:05:08.649127509Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 9 00:05:08.649257 containerd[1434]: time="2025-05-09T00:05:08.649197598Z" level=info msg="RemovePodSandbox \"f10da5f93a00975cf0da5851eb7d0453221a5cf9d8206410f3e17e113da7e227\" returns successfully" May 9 00:05:08.649700 containerd[1434]: time="2025-05-09T00:05:08.649672577Z" level=info msg="StopPodSandbox for \"137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0\"" May 9 00:05:08.751925 containerd[1434]: 2025-05-09 00:05:08.710 [WARNING][5263] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--sc8g8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cd2cc3f5-b622-427f-8e40-c278d97d553c", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 4, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8d26d0239bdbfe5387a1ae525cbbabe780915c8fbb6e2cdbaaca764fd4305126", Pod:"csi-node-driver-sc8g8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali51d7891b935", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:05:08.751925 containerd[1434]: 2025-05-09 00:05:08.710 [INFO][5263] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0" May 9 00:05:08.751925 containerd[1434]: 2025-05-09 00:05:08.710 [INFO][5263] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0" iface="eth0" netns="" May 9 00:05:08.751925 containerd[1434]: 2025-05-09 00:05:08.710 [INFO][5263] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0" May 9 00:05:08.751925 containerd[1434]: 2025-05-09 00:05:08.710 [INFO][5263] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0" May 9 00:05:08.751925 containerd[1434]: 2025-05-09 00:05:08.730 [INFO][5272] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0" HandleID="k8s-pod-network.137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0" Workload="localhost-k8s-csi--node--driver--sc8g8-eth0" May 9 00:05:08.751925 containerd[1434]: 2025-05-09 00:05:08.730 [INFO][5272] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:05:08.751925 containerd[1434]: 2025-05-09 00:05:08.730 [INFO][5272] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:05:08.751925 containerd[1434]: 2025-05-09 00:05:08.740 [WARNING][5272] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0" HandleID="k8s-pod-network.137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0" Workload="localhost-k8s-csi--node--driver--sc8g8-eth0" May 9 00:05:08.751925 containerd[1434]: 2025-05-09 00:05:08.740 [INFO][5272] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0" HandleID="k8s-pod-network.137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0" Workload="localhost-k8s-csi--node--driver--sc8g8-eth0" May 9 00:05:08.751925 containerd[1434]: 2025-05-09 00:05:08.746 [INFO][5272] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:05:08.751925 containerd[1434]: 2025-05-09 00:05:08.749 [INFO][5263] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0" May 9 00:05:08.752614 containerd[1434]: time="2025-05-09T00:05:08.752069752Z" level=info msg="TearDown network for sandbox \"137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0\" successfully" May 9 00:05:08.752614 containerd[1434]: time="2025-05-09T00:05:08.752096955Z" level=info msg="StopPodSandbox for \"137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0\" returns successfully" May 9 00:05:08.752614 containerd[1434]: time="2025-05-09T00:05:08.752570294Z" level=info msg="RemovePodSandbox for \"137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0\"" May 9 00:05:08.752614 containerd[1434]: time="2025-05-09T00:05:08.752603738Z" level=info msg="Forcibly stopping sandbox \"137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0\"" May 9 00:05:08.836708 containerd[1434]: 2025-05-09 00:05:08.796 [WARNING][5294] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--sc8g8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cd2cc3f5-b622-427f-8e40-c278d97d553c", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 4, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8d26d0239bdbfe5387a1ae525cbbabe780915c8fbb6e2cdbaaca764fd4305126", Pod:"csi-node-driver-sc8g8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali51d7891b935", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:05:08.836708 containerd[1434]: 2025-05-09 00:05:08.796 [INFO][5294] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0" May 9 00:05:08.836708 containerd[1434]: 2025-05-09 00:05:08.796 [INFO][5294] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0" iface="eth0" netns="" May 9 00:05:08.836708 containerd[1434]: 2025-05-09 00:05:08.796 [INFO][5294] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0" May 9 00:05:08.836708 containerd[1434]: 2025-05-09 00:05:08.796 [INFO][5294] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0" May 9 00:05:08.836708 containerd[1434]: 2025-05-09 00:05:08.822 [INFO][5303] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0" HandleID="k8s-pod-network.137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0" Workload="localhost-k8s-csi--node--driver--sc8g8-eth0" May 9 00:05:08.836708 containerd[1434]: 2025-05-09 00:05:08.822 [INFO][5303] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:05:08.836708 containerd[1434]: 2025-05-09 00:05:08.822 [INFO][5303] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:05:08.836708 containerd[1434]: 2025-05-09 00:05:08.831 [WARNING][5303] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0" HandleID="k8s-pod-network.137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0" Workload="localhost-k8s-csi--node--driver--sc8g8-eth0" May 9 00:05:08.836708 containerd[1434]: 2025-05-09 00:05:08.831 [INFO][5303] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0" HandleID="k8s-pod-network.137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0" Workload="localhost-k8s-csi--node--driver--sc8g8-eth0" May 9 00:05:08.836708 containerd[1434]: 2025-05-09 00:05:08.832 [INFO][5303] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:05:08.836708 containerd[1434]: 2025-05-09 00:05:08.834 [INFO][5294] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0" May 9 00:05:08.836708 containerd[1434]: time="2025-05-09T00:05:08.836110156Z" level=info msg="TearDown network for sandbox \"137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0\" successfully" May 9 00:05:08.848628 containerd[1434]: time="2025-05-09T00:05:08.848581872Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 9 00:05:08.849015 containerd[1434]: time="2025-05-09T00:05:08.848873068Z" level=info msg="RemovePodSandbox \"137dd846b2c2a3f8054116ee99fe754fe204677d7681ac56aa4a544acabe76a0\" returns successfully" May 9 00:05:08.849501 containerd[1434]: time="2025-05-09T00:05:08.849393133Z" level=info msg="StopPodSandbox for \"2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27\"" May 9 00:05:08.922447 containerd[1434]: 2025-05-09 00:05:08.885 [WARNING][5326] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--cb8c949c8--7dmfb-eth0", GenerateName:"calico-kube-controllers-cb8c949c8-", Namespace:"calico-system", SelfLink:"", UID:"f38b28a9-35a2-49f2-8623-f72cdb69aaa0", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 4, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cb8c949c8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5d7fc8cd15b235ea16344681d60e00240db1bbc2b40cafb69eace32f0ac019d9", Pod:"calico-kube-controllers-cb8c949c8-7dmfb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif283ba0c735", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:05:08.922447 containerd[1434]: 2025-05-09 00:05:08.886 [INFO][5326] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27" May 9 00:05:08.922447 containerd[1434]: 2025-05-09 00:05:08.886 [INFO][5326] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27" iface="eth0" netns="" May 9 00:05:08.922447 containerd[1434]: 2025-05-09 00:05:08.886 [INFO][5326] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27" May 9 00:05:08.922447 containerd[1434]: 2025-05-09 00:05:08.886 [INFO][5326] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27" May 9 00:05:08.922447 containerd[1434]: 2025-05-09 00:05:08.907 [INFO][5334] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27" HandleID="k8s-pod-network.2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27" Workload="localhost-k8s-calico--kube--controllers--cb8c949c8--7dmfb-eth0" May 9 00:05:08.922447 containerd[1434]: 2025-05-09 00:05:08.908 [INFO][5334] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:05:08.922447 containerd[1434]: 2025-05-09 00:05:08.908 [INFO][5334] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:05:08.922447 containerd[1434]: 2025-05-09 00:05:08.917 [WARNING][5334] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27" HandleID="k8s-pod-network.2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27" Workload="localhost-k8s-calico--kube--controllers--cb8c949c8--7dmfb-eth0" May 9 00:05:08.922447 containerd[1434]: 2025-05-09 00:05:08.917 [INFO][5334] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27" HandleID="k8s-pod-network.2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27" Workload="localhost-k8s-calico--kube--controllers--cb8c949c8--7dmfb-eth0" May 9 00:05:08.922447 containerd[1434]: 2025-05-09 00:05:08.919 [INFO][5334] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:05:08.922447 containerd[1434]: 2025-05-09 00:05:08.921 [INFO][5326] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27" May 9 00:05:08.922865 containerd[1434]: time="2025-05-09T00:05:08.922475250Z" level=info msg="TearDown network for sandbox \"2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27\" successfully" May 9 00:05:08.922865 containerd[1434]: time="2025-05-09T00:05:08.922499453Z" level=info msg="StopPodSandbox for \"2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27\" returns successfully" May 9 00:05:08.923002 containerd[1434]: time="2025-05-09T00:05:08.922955950Z" level=info msg="RemovePodSandbox for \"2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27\"" May 9 00:05:08.923032 containerd[1434]: time="2025-05-09T00:05:08.923001036Z" level=info msg="Forcibly stopping sandbox \"2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27\"" May 9 00:05:09.005160 containerd[1434]: 2025-05-09 00:05:08.964 [WARNING][5357] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--cb8c949c8--7dmfb-eth0", GenerateName:"calico-kube-controllers-cb8c949c8-", Namespace:"calico-system", SelfLink:"", UID:"f38b28a9-35a2-49f2-8623-f72cdb69aaa0", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 4, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cb8c949c8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5d7fc8cd15b235ea16344681d60e00240db1bbc2b40cafb69eace32f0ac019d9", Pod:"calico-kube-controllers-cb8c949c8-7dmfb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif283ba0c735", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:05:09.005160 containerd[1434]: 2025-05-09 00:05:08.964 [INFO][5357] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27" May 9 00:05:09.005160 containerd[1434]: 2025-05-09 00:05:08.964 [INFO][5357] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27" iface="eth0" netns="" May 9 00:05:09.005160 containerd[1434]: 2025-05-09 00:05:08.964 [INFO][5357] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27" May 9 00:05:09.005160 containerd[1434]: 2025-05-09 00:05:08.964 [INFO][5357] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27" May 9 00:05:09.005160 containerd[1434]: 2025-05-09 00:05:08.986 [INFO][5365] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27" HandleID="k8s-pod-network.2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27" Workload="localhost-k8s-calico--kube--controllers--cb8c949c8--7dmfb-eth0" May 9 00:05:09.005160 containerd[1434]: 2025-05-09 00:05:08.986 [INFO][5365] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:05:09.005160 containerd[1434]: 2025-05-09 00:05:08.986 [INFO][5365] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:05:09.005160 containerd[1434]: 2025-05-09 00:05:08.994 [WARNING][5365] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27" HandleID="k8s-pod-network.2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27" Workload="localhost-k8s-calico--kube--controllers--cb8c949c8--7dmfb-eth0" May 9 00:05:09.005160 containerd[1434]: 2025-05-09 00:05:08.995 [INFO][5365] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27" HandleID="k8s-pod-network.2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27" Workload="localhost-k8s-calico--kube--controllers--cb8c949c8--7dmfb-eth0" May 9 00:05:09.005160 containerd[1434]: 2025-05-09 00:05:08.996 [INFO][5365] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:05:09.005160 containerd[1434]: 2025-05-09 00:05:08.999 [INFO][5357] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27" May 9 00:05:09.005783 containerd[1434]: time="2025-05-09T00:05:09.005203126Z" level=info msg="TearDown network for sandbox \"2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27\" successfully" May 9 00:05:09.008001 containerd[1434]: time="2025-05-09T00:05:09.007952746Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 9 00:05:09.008069 containerd[1434]: time="2025-05-09T00:05:09.008029796Z" level=info msg="RemovePodSandbox \"2a8e42d65ad4c4a8f643eb5c94e1fabab1cabe11ceb05e2a0a977f90662f4e27\" returns successfully" May 9 00:05:09.009198 containerd[1434]: time="2025-05-09T00:05:09.009163576Z" level=info msg="StopPodSandbox for \"f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1\"" May 9 00:05:09.088045 containerd[1434]: 2025-05-09 00:05:09.049 [WARNING][5388] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--qhkqc-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"61e5788f-c1c1-468b-a0b1-b88de0f0fb58", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 4, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c87409f7839b6260c9d5fd15897ffbcff89cc01529fde6a709a4fcee351c2d4d", Pod:"coredns-6f6b679f8f-qhkqc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0a81fb037e3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:05:09.088045 containerd[1434]: 2025-05-09 00:05:09.050 [INFO][5388] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1" May 9 00:05:09.088045 containerd[1434]: 2025-05-09 00:05:09.050 [INFO][5388] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1" iface="eth0" netns="" May 9 00:05:09.088045 containerd[1434]: 2025-05-09 00:05:09.050 [INFO][5388] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1" May 9 00:05:09.088045 containerd[1434]: 2025-05-09 00:05:09.050 [INFO][5388] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1" May 9 00:05:09.088045 containerd[1434]: 2025-05-09 00:05:09.074 [INFO][5396] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1" HandleID="k8s-pod-network.f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1" Workload="localhost-k8s-coredns--6f6b679f8f--qhkqc-eth0" May 9 00:05:09.088045 containerd[1434]: 2025-05-09 00:05:09.074 [INFO][5396] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:05:09.088045 containerd[1434]: 2025-05-09 00:05:09.074 [INFO][5396] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:05:09.088045 containerd[1434]: 2025-05-09 00:05:09.082 [WARNING][5396] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1" HandleID="k8s-pod-network.f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1" Workload="localhost-k8s-coredns--6f6b679f8f--qhkqc-eth0" May 9 00:05:09.088045 containerd[1434]: 2025-05-09 00:05:09.082 [INFO][5396] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1" HandleID="k8s-pod-network.f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1" Workload="localhost-k8s-coredns--6f6b679f8f--qhkqc-eth0" May 9 00:05:09.088045 containerd[1434]: 2025-05-09 00:05:09.084 [INFO][5396] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:05:09.088045 containerd[1434]: 2025-05-09 00:05:09.086 [INFO][5388] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1" May 9 00:05:09.088045 containerd[1434]: time="2025-05-09T00:05:09.088012245Z" level=info msg="TearDown network for sandbox \"f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1\" successfully" May 9 00:05:09.088045 containerd[1434]: time="2025-05-09T00:05:09.088044329Z" level=info msg="StopPodSandbox for \"f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1\" returns successfully" May 9 00:05:09.089498 containerd[1434]: time="2025-05-09T00:05:09.088515747Z" level=info msg="RemovePodSandbox for \"f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1\"" May 9 00:05:09.089498 containerd[1434]: time="2025-05-09T00:05:09.088548031Z" level=info msg="Forcibly stopping sandbox \"f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1\"" May 9 00:05:09.159179 containerd[1434]: 2025-05-09 00:05:09.123 [WARNING][5420] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--qhkqc-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"61e5788f-c1c1-468b-a0b1-b88de0f0fb58", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 4, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c87409f7839b6260c9d5fd15897ffbcff89cc01529fde6a709a4fcee351c2d4d", Pod:"coredns-6f6b679f8f-qhkqc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0a81fb037e3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:05:09.159179 containerd[1434]: 2025-05-09 00:05:09.123 [INFO][5420] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1" May 9 00:05:09.159179 containerd[1434]: 2025-05-09 00:05:09.123 [INFO][5420] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1" iface="eth0" netns="" May 9 00:05:09.159179 containerd[1434]: 2025-05-09 00:05:09.123 [INFO][5420] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1" May 9 00:05:09.159179 containerd[1434]: 2025-05-09 00:05:09.123 [INFO][5420] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1" May 9 00:05:09.159179 containerd[1434]: 2025-05-09 00:05:09.145 [INFO][5429] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1" HandleID="k8s-pod-network.f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1" Workload="localhost-k8s-coredns--6f6b679f8f--qhkqc-eth0" May 9 00:05:09.159179 containerd[1434]: 2025-05-09 00:05:09.145 [INFO][5429] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:05:09.159179 containerd[1434]: 2025-05-09 00:05:09.145 [INFO][5429] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:05:09.159179 containerd[1434]: 2025-05-09 00:05:09.154 [WARNING][5429] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1" HandleID="k8s-pod-network.f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1" Workload="localhost-k8s-coredns--6f6b679f8f--qhkqc-eth0" May 9 00:05:09.159179 containerd[1434]: 2025-05-09 00:05:09.154 [INFO][5429] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1" HandleID="k8s-pod-network.f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1" Workload="localhost-k8s-coredns--6f6b679f8f--qhkqc-eth0" May 9 00:05:09.159179 containerd[1434]: 2025-05-09 00:05:09.155 [INFO][5429] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:05:09.159179 containerd[1434]: 2025-05-09 00:05:09.157 [INFO][5420] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1" May 9 00:05:09.160728 containerd[1434]: time="2025-05-09T00:05:09.159683946Z" level=info msg="TearDown network for sandbox \"f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1\" successfully" May 9 00:05:09.170086 containerd[1434]: time="2025-05-09T00:05:09.170038307Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 9 00:05:09.170169 containerd[1434]: time="2025-05-09T00:05:09.170102995Z" level=info msg="RemovePodSandbox \"f3b32a55b37ccfbfd3e42843053349f54b36148b94fa379559c1a6761738e3d1\" returns successfully" May 9 00:05:09.170703 containerd[1434]: time="2025-05-09T00:05:09.170674265Z" level=info msg="StopPodSandbox for \"fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376\"" May 9 00:05:09.241978 containerd[1434]: 2025-05-09 00:05:09.209 [WARNING][5451] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f76859ff6--z8gcd-eth0", GenerateName:"calico-apiserver-6f76859ff6-", Namespace:"calico-apiserver", SelfLink:"", UID:"1dfd2307-f708-4433-a5bd-771cc97fedba", ResourceVersion:"1114", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 4, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f76859ff6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"467d58b21a4247d9cb7327422e7a8dd77978420c4e8afc603e980196b42894a0", Pod:"calico-apiserver-6f76859ff6-z8gcd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0ad11ed22e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:05:09.241978 containerd[1434]: 2025-05-09 00:05:09.209 [INFO][5451] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376" May 9 00:05:09.241978 containerd[1434]: 2025-05-09 00:05:09.209 [INFO][5451] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376" iface="eth0" netns="" May 9 00:05:09.241978 containerd[1434]: 2025-05-09 00:05:09.209 [INFO][5451] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376" May 9 00:05:09.241978 containerd[1434]: 2025-05-09 00:05:09.209 [INFO][5451] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376" May 9 00:05:09.241978 containerd[1434]: 2025-05-09 00:05:09.227 [INFO][5459] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376" HandleID="k8s-pod-network.fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376" Workload="localhost-k8s-calico--apiserver--6f76859ff6--z8gcd-eth0" May 9 00:05:09.241978 containerd[1434]: 2025-05-09 00:05:09.228 [INFO][5459] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:05:09.241978 containerd[1434]: 2025-05-09 00:05:09.228 [INFO][5459] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:05:09.241978 containerd[1434]: 2025-05-09 00:05:09.236 [WARNING][5459] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376" HandleID="k8s-pod-network.fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376" Workload="localhost-k8s-calico--apiserver--6f76859ff6--z8gcd-eth0" May 9 00:05:09.241978 containerd[1434]: 2025-05-09 00:05:09.237 [INFO][5459] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376" HandleID="k8s-pod-network.fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376" Workload="localhost-k8s-calico--apiserver--6f76859ff6--z8gcd-eth0" May 9 00:05:09.241978 containerd[1434]: 2025-05-09 00:05:09.238 [INFO][5459] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:05:09.241978 containerd[1434]: 2025-05-09 00:05:09.240 [INFO][5451] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376" May 9 00:05:09.242404 containerd[1434]: time="2025-05-09T00:05:09.242011926Z" level=info msg="TearDown network for sandbox \"fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376\" successfully" May 9 00:05:09.242404 containerd[1434]: time="2025-05-09T00:05:09.242039129Z" level=info msg="StopPodSandbox for \"fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376\" returns successfully" May 9 00:05:09.242652 containerd[1434]: time="2025-05-09T00:05:09.242628002Z" level=info msg="RemovePodSandbox for \"fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376\"" May 9 00:05:09.242692 containerd[1434]: time="2025-05-09T00:05:09.242661086Z" level=info msg="Forcibly stopping sandbox \"fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376\"" May 9 00:05:09.318316 containerd[1434]: 2025-05-09 00:05:09.280 [WARNING][5482] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f76859ff6--z8gcd-eth0", GenerateName:"calico-apiserver-6f76859ff6-", Namespace:"calico-apiserver", SelfLink:"", UID:"1dfd2307-f708-4433-a5bd-771cc97fedba", ResourceVersion:"1114", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 4, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f76859ff6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"467d58b21a4247d9cb7327422e7a8dd77978420c4e8afc603e980196b42894a0", Pod:"calico-apiserver-6f76859ff6-z8gcd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0ad11ed22e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:05:09.318316 containerd[1434]: 2025-05-09 00:05:09.281 [INFO][5482] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376" May 9 00:05:09.318316 containerd[1434]: 2025-05-09 00:05:09.281 [INFO][5482] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376" iface="eth0" netns="" May 9 00:05:09.318316 containerd[1434]: 2025-05-09 00:05:09.281 [INFO][5482] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376" May 9 00:05:09.318316 containerd[1434]: 2025-05-09 00:05:09.281 [INFO][5482] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376" May 9 00:05:09.318316 containerd[1434]: 2025-05-09 00:05:09.305 [INFO][5490] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376" HandleID="k8s-pod-network.fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376" Workload="localhost-k8s-calico--apiserver--6f76859ff6--z8gcd-eth0" May 9 00:05:09.318316 containerd[1434]: 2025-05-09 00:05:09.305 [INFO][5490] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:05:09.318316 containerd[1434]: 2025-05-09 00:05:09.305 [INFO][5490] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:05:09.318316 containerd[1434]: 2025-05-09 00:05:09.313 [WARNING][5490] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376" HandleID="k8s-pod-network.fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376" Workload="localhost-k8s-calico--apiserver--6f76859ff6--z8gcd-eth0" May 9 00:05:09.318316 containerd[1434]: 2025-05-09 00:05:09.313 [INFO][5490] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376" HandleID="k8s-pod-network.fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376" Workload="localhost-k8s-calico--apiserver--6f76859ff6--z8gcd-eth0" May 9 00:05:09.318316 containerd[1434]: 2025-05-09 00:05:09.314 [INFO][5490] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:05:09.318316 containerd[1434]: 2025-05-09 00:05:09.316 [INFO][5482] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376" May 9 00:05:09.318898 containerd[1434]: time="2025-05-09T00:05:09.318338323Z" level=info msg="TearDown network for sandbox \"fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376\" successfully" May 9 00:05:09.320803 containerd[1434]: time="2025-05-09T00:05:09.320768343Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 9 00:05:09.320854 containerd[1434]: time="2025-05-09T00:05:09.320823790Z" level=info msg="RemovePodSandbox \"fa90b9ab84c283dfde69c108984464787cebb832a8e971e789460da6dc532376\" returns successfully" May 9 00:05:13.147788 systemd[1]: Started sshd@19-10.0.0.43:22-10.0.0.1:36100.service - OpenSSH per-connection server daemon (10.0.0.1:36100). May 9 00:05:13.187030 sshd[5518]: Accepted publickey for core from 10.0.0.1 port 36100 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 9 00:05:13.188328 sshd[5518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:05:13.192067 systemd-logind[1421]: New session 20 of user core. May 9 00:05:13.200162 systemd[1]: Started session-20.scope - Session 20 of User core. May 9 00:05:13.378962 sshd[5518]: pam_unix(sshd:session): session closed for user core May 9 00:05:13.382370 systemd[1]: sshd@19-10.0.0.43:22-10.0.0.1:36100.service: Deactivated successfully. May 9 00:05:13.384164 systemd[1]: session-20.scope: Deactivated successfully. May 9 00:05:13.384767 systemd-logind[1421]: Session 20 logged out. Waiting for processes to exit. May 9 00:05:13.385436 systemd-logind[1421]: Removed session 20.