May 15 00:07:37.963764 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] May 15 00:07:37.963802 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed May 14 22:53:13 -00 2025 May 15 00:07:37.963813 kernel: KASLR enabled May 15 00:07:37.963818 kernel: efi: EFI v2.7 by EDK II May 15 00:07:37.963824 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 May 15 00:07:37.963829 kernel: random: crng init done May 15 00:07:37.963836 kernel: ACPI: Early table checksum verification disabled May 15 00:07:37.963842 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) May 15 00:07:37.963848 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) May 15 00:07:37.963863 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) May 15 00:07:37.963869 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 15 00:07:37.963875 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) May 15 00:07:37.963881 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) May 15 00:07:37.963887 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 15 00:07:37.963895 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 00:07:37.963903 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 15 00:07:37.963909 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) May 15 00:07:37.963916 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 15 00:07:37.963922 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 May 15 00:07:37.963928 kernel: NUMA: Failed to initialise from firmware May 15 00:07:37.963935 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] May 15 00:07:37.963941 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff] May 15 00:07:37.963948 kernel: Zone ranges: May 15 00:07:37.963954 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] May 15 00:07:37.963961 kernel: DMA32 empty May 15 00:07:37.963968 kernel: Normal empty May 15 00:07:37.963975 kernel: Movable zone start for each node May 15 00:07:37.963981 kernel: Early memory node ranges May 15 00:07:37.963987 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] May 15 00:07:37.963994 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] May 15 00:07:37.964000 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] May 15 00:07:37.964006 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] May 15 00:07:37.964013 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] May 15 00:07:37.964019 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] May 15 00:07:37.964025 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] May 15 00:07:37.964031 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] May 15 00:07:37.964038 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges May 15 00:07:37.964046 kernel: psci: probing for conduit method from ACPI. May 15 00:07:37.964052 kernel: psci: PSCIv1.1 detected in firmware. 
May 15 00:07:37.964058 kernel: psci: Using standard PSCI v0.2 function IDs May 15 00:07:37.964067 kernel: psci: Trusted OS migration not required May 15 00:07:37.964074 kernel: psci: SMC Calling Convention v1.1 May 15 00:07:37.964081 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) May 15 00:07:37.964089 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 May 15 00:07:37.964096 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 May 15 00:07:37.964102 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 May 15 00:07:37.964109 kernel: Detected PIPT I-cache on CPU0 May 15 00:07:37.964116 kernel: CPU features: detected: GIC system register CPU interface May 15 00:07:37.964122 kernel: CPU features: detected: Hardware dirty bit management May 15 00:07:37.964129 kernel: CPU features: detected: Spectre-v4 May 15 00:07:37.964136 kernel: CPU features: detected: Spectre-BHB May 15 00:07:37.964142 kernel: CPU features: kernel page table isolation forced ON by KASLR May 15 00:07:37.964149 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 15 00:07:37.964157 kernel: CPU features: detected: ARM erratum 1418040 May 15 00:07:37.964164 kernel: CPU features: detected: SSBS not fully self-synchronizing May 15 00:07:37.964170 kernel: alternatives: applying boot alternatives May 15 00:07:37.964178 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=3ad4d2a855aaa69496d8c2bf8d7e3c4212e29ec2df18e8282fb10689c3032596 May 15 00:07:37.964185 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 15 00:07:37.964192 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 15 00:07:37.964199 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 15 00:07:37.964205 kernel: Fallback order for Node 0: 0 May 15 00:07:37.964212 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 May 15 00:07:37.964219 kernel: Policy zone: DMA May 15 00:07:37.964225 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 15 00:07:37.964233 kernel: software IO TLB: area num 4. May 15 00:07:37.964240 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) May 15 00:07:37.964247 kernel: Memory: 2386404K/2572288K available (10304K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 185884K reserved, 0K cma-reserved) May 15 00:07:37.964254 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 15 00:07:37.964261 kernel: rcu: Preemptible hierarchical RCU implementation. May 15 00:07:37.964268 kernel: rcu: RCU event tracing is enabled. May 15 00:07:37.964276 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 15 00:07:37.964282 kernel: Trampoline variant of Tasks RCU enabled. May 15 00:07:37.964289 kernel: Tracing variant of Tasks RCU enabled. May 15 00:07:37.964297 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 15 00:07:37.964303 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 15 00:07:37.964310 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 15 00:07:37.964318 kernel: GICv3: 256 SPIs implemented May 15 00:07:37.964325 kernel: GICv3: 0 Extended SPIs implemented May 15 00:07:37.964331 kernel: Root IRQ handler: gic_handle_irq May 15 00:07:37.964338 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI May 15 00:07:37.964345 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 May 15 00:07:37.964352 kernel: ITS [mem 0x08080000-0x0809ffff] May 15 00:07:37.964359 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) May 15 00:07:37.964366 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) May 15 00:07:37.964373 kernel: GICv3: using LPI property table @0x00000000400f0000 May 15 00:07:37.964379 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 May 15 00:07:37.964386 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 15 00:07:37.964395 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:07:37.964401 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 15 00:07:37.964408 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 15 00:07:37.964415 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 15 00:07:37.964422 kernel: arm-pv: using stolen time PV May 15 00:07:37.964429 kernel: Console: colour dummy device 80x25 May 15 00:07:37.964436 kernel: ACPI: Core revision 20230628 May 15 00:07:37.964444 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 15 00:07:37.964450 kernel: pid_max: default: 32768 minimum: 301 May 15 00:07:37.964458 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 15 00:07:37.964466 kernel: landlock: Up and running. May 15 00:07:37.964473 kernel: SELinux: Initializing. May 15 00:07:37.964479 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 15 00:07:37.964486 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 15 00:07:37.964493 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3) May 15 00:07:37.964500 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 15 00:07:37.964507 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 15 00:07:37.964514 kernel: rcu: Hierarchical SRCU implementation. May 15 00:07:37.964522 kernel: rcu: Max phase no-delay instances is 400. May 15 00:07:37.964530 kernel: Platform MSI: ITS@0x8080000 domain created May 15 00:07:37.964558 kernel: PCI/MSI: ITS@0x8080000 domain created May 15 00:07:37.964565 kernel: Remapping and enabling EFI services. May 15 00:07:37.964572 kernel: smp: Bringing up secondary CPUs ... 
May 15 00:07:37.964579 kernel: Detected PIPT I-cache on CPU1 May 15 00:07:37.964586 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 May 15 00:07:37.964593 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 May 15 00:07:37.964600 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:07:37.964606 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 15 00:07:37.964615 kernel: Detected PIPT I-cache on CPU2 May 15 00:07:37.964622 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 May 15 00:07:37.964629 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 May 15 00:07:37.964642 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:07:37.964650 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] May 15 00:07:37.964657 kernel: Detected PIPT I-cache on CPU3 May 15 00:07:37.964665 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 May 15 00:07:37.964672 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 May 15 00:07:37.964679 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:07:37.964686 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] May 15 00:07:37.964694 kernel: smp: Brought up 1 node, 4 CPUs May 15 00:07:37.964702 kernel: SMP: Total of 4 processors activated. May 15 00:07:37.964710 kernel: CPU features: detected: 32-bit EL0 Support May 15 00:07:37.964717 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 15 00:07:37.964725 kernel: CPU features: detected: Common not Private translations May 15 00:07:37.964732 kernel: CPU features: detected: CRC32 instructions May 15 00:07:37.964739 kernel: CPU features: detected: Enhanced Virtualization Traps May 15 00:07:37.964746 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 15 00:07:37.964755 kernel: CPU features: detected: LSE atomic instructions May 15 00:07:37.964762 kernel: CPU features: detected: Privileged Access Never May 15 00:07:37.964770 kernel: CPU features: detected: RAS Extension Support May 15 00:07:37.964777 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 15 00:07:37.964788 kernel: CPU: All CPU(s) started at EL1 May 15 00:07:37.964796 kernel: alternatives: applying system-wide alternatives May 15 00:07:37.964803 kernel: devtmpfs: initialized May 15 00:07:37.964810 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 15 00:07:37.964817 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 15 00:07:37.964826 kernel: pinctrl core: initialized pinctrl subsystem May 15 00:07:37.964833 kernel: SMBIOS 3.0.0 present. 
May 15 00:07:37.964841 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023 May 15 00:07:37.964848 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 15 00:07:37.964859 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 15 00:07:37.964866 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 15 00:07:37.964874 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 15 00:07:37.964881 kernel: audit: initializing netlink subsys (disabled) May 15 00:07:37.964888 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1 May 15 00:07:37.965323 kernel: thermal_sys: Registered thermal governor 'step_wise' May 15 00:07:37.965338 kernel: cpuidle: using governor menu May 15 00:07:37.965346 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. May 15 00:07:37.965354 kernel: ASID allocator initialised with 32768 entries May 15 00:07:37.965362 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 15 00:07:37.965370 kernel: Serial: AMBA PL011 UART driver May 15 00:07:37.965377 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL May 15 00:07:37.965385 kernel: Modules: 0 pages in range for non-PLT usage May 15 00:07:37.965392 kernel: Modules: 509008 pages in range for PLT usage May 15 00:07:37.965407 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 15 00:07:37.965415 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page May 15 00:07:37.965422 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages May 15 00:07:37.965437 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page May 15 00:07:37.965444 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 15 00:07:37.965452 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page May 15 00:07:37.965459 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages May 15 00:07:37.965467 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page May 15 00:07:37.965474 kernel: ACPI: Added _OSI(Module Device) May 15 00:07:37.965483 kernel: ACPI: Added _OSI(Processor Device) May 15 00:07:37.965493 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 15 00:07:37.965500 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 15 00:07:37.965508 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 15 00:07:37.965517 kernel: ACPI: Interpreter enabled May 15 00:07:37.965524 kernel: ACPI: Using GIC for interrupt routing May 15 00:07:37.965531 kernel: ACPI: MCFG table detected, 1 entries May 15 00:07:37.965539 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA May 15 00:07:37.965546 kernel: printk: console [ttyAMA0] enabled May 15 00:07:37.965556 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 15 00:07:37.965708 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 15 00:07:37.965797 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] May 15 00:07:37.965881 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] May 15 00:07:37.965948 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 May 15 00:07:37.966012 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] May 15 00:07:37.966022 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 15 00:07:37.966034 kernel: PCI host bridge to bus 0000:00 May 15 00:07:37.966106 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] May 15 00:07:37.966167 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] May 15 00:07:37.966226 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] May 15 00:07:37.966283 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 15 00:07:37.966363 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 May 15 00:07:37.966444 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 May 15 00:07:37.966514 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] May 15 00:07:37.966579 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] May 15 00:07:37.966644 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] May 15 00:07:37.967031 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] May 15 00:07:37.967135 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] May 15 00:07:37.967205 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] May 15 00:07:37.967276 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] May 15 00:07:37.967335 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] May 15 00:07:37.967393 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] May 15 00:07:37.967403 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 May 15 00:07:37.967411 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 May 15 00:07:37.967419 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 May 15 00:07:37.967426 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 May 15 00:07:37.967434 kernel: iommu: Default domain type: Translated May 15 00:07:37.967444 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 15 00:07:37.967452 kernel: efivars: Registered efivars operations May 15 00:07:37.967459 kernel: vgaarb: loaded May 15 00:07:37.967467 kernel: clocksource: Switched to clocksource arch_sys_counter May 15 00:07:37.967474 kernel: VFS: Disk quotas dquot_6.6.0 May 15 00:07:37.967482 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 15 00:07:37.967489 kernel: pnp: PnP ACPI init May 15 00:07:37.967564 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved May 15 00:07:37.967575 kernel: pnp: PnP ACPI: found 1 devices May 15 00:07:37.967585 kernel: NET: Registered PF_INET protocol family May 15 00:07:37.967593 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 15 00:07:37.967601 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 15 00:07:37.967608 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 15 00:07:37.967616 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 15 00:07:37.967624 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 15 00:07:37.967631 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 15 00:07:37.967639 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 15 00:07:37.967648 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 15 00:07:37.967655 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 15 00:07:37.967663 kernel: PCI: CLS 0 bytes, default 64 May 15 00:07:37.967671 kernel: kvm [1]: HYP mode not available
May 15 00:07:37.967678 kernel: Initialise system trusted keyrings May 15 00:07:37.967686 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 15 00:07:37.967693 kernel: Key type asymmetric registered May 15 00:07:37.967701 kernel: Asymmetric key parser 'x509' registered May 15 00:07:37.967708 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 15 00:07:37.967716 kernel: io scheduler mq-deadline registered May 15 00:07:37.967725 kernel: io scheduler kyber registered May 15 00:07:37.967732 kernel: io scheduler bfq registered May 15 00:07:37.967740 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 15 00:07:37.967747 kernel: ACPI: button: Power Button [PWRB] May 15 00:07:37.967756 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 15 00:07:37.967840 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) May 15 00:07:37.967851 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 15 00:07:37.967866 kernel: thunder_xcv, ver 1.0 May 15 00:07:37.967873 kernel: thunder_bgx, ver 1.0 May 15 00:07:37.967884 kernel: nicpf, ver 1.0 May 15 00:07:37.967891 kernel: nicvf, ver 1.0 May 15 00:07:37.967981 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 15 00:07:37.968045 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-15T00:07:37 UTC (1747267657) May 15 00:07:37.968056 kernel: hid: raw HID events driver (C) Jiri Kosina May 15 00:07:37.968064 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 15 00:07:37.968071 kernel: watchdog: Delayed init of the lockup detector failed: -19 May 15 00:07:37.968079 kernel: watchdog: Hard watchdog permanently disabled May 15 00:07:37.968088 kernel: NET: Registered PF_INET6 protocol family May 15 00:07:37.968096 kernel: Segment Routing with IPv6 May 15 00:07:37.968103 kernel: In-situ OAM (IOAM) with IPv6 May 15 00:07:37.968111 kernel: NET: Registered PF_PACKET protocol family May 15 00:07:37.968118 kernel: Key type dns_resolver registered May 15 00:07:37.968126 kernel: registered taskstats version 1 May 15 00:07:37.968133 kernel: Loading compiled-in X.509 certificates May 15 00:07:37.968141 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 6afb3c096bffb4980a4bcc170ebe3729821d8e0d' May 15 00:07:37.968149 kernel: Key type .fscrypt registered May 15 00:07:37.968157 kernel: Key type fscrypt-provisioning registered May 15 00:07:37.968165 kernel: ima: No TPM chip found, activating TPM-bypass!
May 15 00:07:37.968173 kernel: ima: Allocated hash algorithm: sha1 May 15 00:07:37.968180 kernel: ima: No architecture policies found May 15 00:07:37.968188 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 15 00:07:37.968195 kernel: clk: Disabling unused clocks May 15 00:07:37.968203 kernel: Freeing unused kernel memory: 39424K May 15 00:07:37.968210 kernel: Run /init as init process May 15 00:07:37.968218 kernel: with arguments: May 15 00:07:37.968227 kernel: /init May 15 00:07:37.968234 kernel: with environment: May 15 00:07:37.968241 kernel: HOME=/ May 15 00:07:37.968249 kernel: TERM=linux May 15 00:07:37.968256 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 15 00:07:37.968266 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 15 00:07:37.968275 systemd[1]: Detected virtualization kvm. May 15 00:07:37.968285 systemd[1]: Detected architecture arm64. May 15 00:07:37.968293 systemd[1]: Running in initrd. May 15 00:07:37.968301 systemd[1]: No hostname configured, using default hostname. May 15 00:07:37.968309 systemd[1]: Hostname set to . May 15 00:07:37.968317 systemd[1]: Initializing machine ID from VM UUID. May 15 00:07:37.968325 systemd[1]: Queued start job for default target initrd.target. May 15 00:07:37.968333 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 00:07:37.968341 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 00:07:37.968351 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 15 00:07:37.968360 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 15 00:07:37.968368 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 15 00:07:37.968376 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 15 00:07:37.968386 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 15 00:07:37.968394 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 15 00:07:37.968402 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 00:07:37.968414 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 15 00:07:37.968427 systemd[1]: Reached target paths.target - Path Units. May 15 00:07:37.968436 systemd[1]: Reached target slices.target - Slice Units. May 15 00:07:37.968444 systemd[1]: Reached target swap.target - Swaps. May 15 00:07:37.968453 systemd[1]: Reached target timers.target - Timer Units. May 15 00:07:37.968461 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 15 00:07:37.968469 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 15 00:07:37.968478 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 15 00:07:37.968486 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 15 00:07:37.968495 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
May 15 00:07:37.969047 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 15 00:07:37.969056 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 15 00:07:37.969065 systemd[1]: Reached target sockets.target - Socket Units. May 15 00:07:37.969073 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 15 00:07:37.969081 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 15 00:07:37.969089 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 15 00:07:37.969097 systemd[1]: Starting systemd-fsck-usr.service... May 15 00:07:37.969113 systemd[1]: Starting systemd-journald.service - Journal Service... May 15 00:07:37.969121 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 15 00:07:37.969129 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 00:07:37.969137 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 15 00:07:37.969145 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 15 00:07:37.969153 systemd[1]: Finished systemd-fsck-usr.service. May 15 00:07:37.969164 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 15 00:07:37.969172 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 15 00:07:37.969213 systemd-journald[238]: Collecting audit messages is disabled. May 15 00:07:37.969235 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 00:07:37.969244 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 00:07:37.969252 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 15 00:07:37.969261 systemd-journald[238]: Journal started May 15 00:07:37.969280 systemd-journald[238]: Runtime Journal (/run/log/journal/6d40f658ad594db6b79a2bc6607ca8b0) is 5.9M, max 47.3M, 41.4M free. May 15 00:07:37.949237 systemd-modules-load[240]: Inserted module 'overlay' May 15 00:07:37.974477 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 15 00:07:37.974518 systemd[1]: Started systemd-journald.service - Journal Service. May 15 00:07:37.975839 kernel: Bridge firewalling registered May 15 00:07:37.975811 systemd-modules-load[240]: Inserted module 'br_netfilter' May 15 00:07:37.978818 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 15 00:07:37.991005 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 00:07:37.993889 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 15 00:07:37.995349 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 00:07:37.996912 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 00:07:38.000924 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 15 00:07:38.004486 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 00:07:38.011592 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
May 15 00:07:38.017150 dracut-cmdline[273]: dracut-dracut-053 May 15 00:07:38.021717 dracut-cmdline[273]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=3ad4d2a855aaa69496d8c2bf8d7e3c4212e29ec2df18e8282fb10689c3032596 May 15 00:07:38.019955 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 15 00:07:38.047537 systemd-resolved[283]: Positive Trust Anchors: May 15 00:07:38.047557 systemd-resolved[283]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 00:07:38.047589 systemd-resolved[283]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 15 00:07:38.052484 systemd-resolved[283]: Defaulting to hostname 'linux'. May 15 00:07:38.053862 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 15 00:07:38.057496 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 15 00:07:38.098827 kernel: SCSI subsystem initialized May 15 00:07:38.103804 kernel: Loading iSCSI transport class v2.0-870. May 15 00:07:38.111831 kernel: iscsi: registered transport (tcp) May 15 00:07:38.124822 kernel: iscsi: registered transport (qla4xxx) May 15 00:07:38.124873 kernel: QLogic iSCSI HBA Driver May 15 00:07:38.182269 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 15 00:07:38.193955 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 15 00:07:38.212231 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 15 00:07:38.212302 kernel: device-mapper: uevent: version 1.0.3 May 15 00:07:38.213410 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 15 00:07:38.266819 kernel: raid6: neonx8 gen() 15774 MB/s May 15 00:07:38.283811 kernel: raid6: neonx4 gen() 15634 MB/s May 15 00:07:38.300822 kernel: raid6: neonx2 gen() 13231 MB/s May 15 00:07:38.317803 kernel: raid6: neonx1 gen() 10489 MB/s May 15 00:07:38.334810 kernel: raid6: int64x8 gen() 6956 MB/s May 15 00:07:38.351813 kernel: raid6: int64x4 gen() 7347 MB/s May 15 00:07:38.368803 kernel: raid6: int64x2 gen() 6125 MB/s May 15 00:07:38.385961 kernel: raid6: int64x1 gen() 5056 MB/s May 15 00:07:38.386022 kernel: raid6: using algorithm neonx8 gen() 15774 MB/s May 15 00:07:38.403931 kernel: raid6: .... xor() 11928 MB/s, rmw enabled
May 15 00:07:38.404000 kernel: raid6: using neon recovery algorithm May 15 00:07:38.408824 kernel: xor: measuring software checksum speed May 15 00:07:38.408883 kernel: 8regs : 17854 MB/sec May 15 00:07:38.409996 kernel: 32regs : 19622 MB/sec May 15 00:07:38.411241 kernel: arm64_neon : 26954 MB/sec May 15 00:07:38.411263 kernel: xor: using function: arm64_neon (26954 MB/sec) May 15 00:07:38.460831 kernel: Btrfs loaded, zoned=no, fsverity=no May 15 00:07:38.472650 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 15 00:07:38.488984 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 00:07:38.501977 systemd-udevd[462]: Using default interface naming scheme 'v255'. May 15 00:07:38.505357 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 00:07:38.512996 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 15 00:07:38.524488 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation May 15 00:07:38.551899 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 15 00:07:38.568967 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 15 00:07:38.608674 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 15 00:07:38.616007 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 15 00:07:38.630487 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 15 00:07:38.632765 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 15 00:07:38.634452 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 00:07:38.635896 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 15 00:07:38.644985 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 15 00:07:38.657420 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 15 00:07:38.668639 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues May 15 00:07:38.669670 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 15 00:07:38.672349 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 15 00:07:38.672384 kernel: GPT:9289727 != 19775487 May 15 00:07:38.672394 kernel: GPT:Alternate GPT header not at the end of the disk. May 15 00:07:38.672404 kernel: GPT:9289727 != 19775487 May 15 00:07:38.673402 kernel: GPT: Use GNU Parted to correct GPT errors. May 15 00:07:38.674340 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 00:07:38.675161 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 15 00:07:38.675284 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 00:07:38.677562 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 00:07:38.680368 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 00:07:38.680733 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 15 00:07:38.683142 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 15 00:07:38.691233 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 00:07:38.697812 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (519) May 15 00:07:38.697856 kernel: BTRFS: device fsid c82d3215-8134-4516-8c53-9d29a8823a8c devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (525) May 15 00:07:38.703997 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 00:07:38.708928 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 15 00:07:38.717586 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 15 00:07:38.724215 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 15 00:07:38.725439 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 15 00:07:38.731159 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 15 00:07:38.751980 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 15 00:07:38.753923 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 00:07:38.760356 disk-uuid[551]: Primary Header is updated. May 15 00:07:38.760356 disk-uuid[551]: Secondary Entries is updated. May 15 00:07:38.760356 disk-uuid[551]: Secondary Header is updated. May 15 00:07:38.766686 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 00:07:38.771519 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 00:07:38.774005 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 00:07:39.784809 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 00:07:39.789089 disk-uuid[555]: The operation has completed successfully. May 15 00:07:39.811008 systemd[1]: disk-uuid.service: Deactivated successfully. May 15 00:07:39.811109 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 15 00:07:39.839991 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 15 00:07:39.842978 sh[574]: Success May 15 00:07:39.863805 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 15 00:07:39.904405 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 15 00:07:39.906537 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 15 00:07:39.908347 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 15 00:07:39.918815 kernel: BTRFS info (device dm-0): first mount of filesystem c82d3215-8134-4516-8c53-9d29a8823a8c May 15 00:07:39.918855 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 15 00:07:39.921806 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 15 00:07:39.921840 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 15 00:07:39.921858 kernel: BTRFS info (device dm-0): using free space tree May 15 00:07:39.925508 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 15 00:07:39.926978 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 15 00:07:39.939982 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 15 00:07:39.941652 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
May 15 00:07:39.949957 kernel: BTRFS info (device vda6): first mount of filesystem 472de571-4852-412e-83c6-4e5fddef810b May 15 00:07:39.949995 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 15 00:07:39.950016 kernel: BTRFS info (device vda6): using free space tree May 15 00:07:39.954207 kernel: BTRFS info (device vda6): auto enabling async discard May 15 00:07:39.961638 systemd[1]: mnt-oem.mount: Deactivated successfully. May 15 00:07:39.962818 kernel: BTRFS info (device vda6): last unmount of filesystem 472de571-4852-412e-83c6-4e5fddef810b May 15 00:07:39.968940 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 15 00:07:39.974039 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 15 00:07:40.044875 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 15 00:07:40.059931 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 15 00:07:40.082062 ignition[665]: Ignition 2.19.0 May 15 00:07:40.082071 ignition[665]: Stage: fetch-offline May 15 00:07:40.082124 ignition[665]: no configs at "/usr/lib/ignition/base.d" May 15 00:07:40.082133 ignition[665]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 00:07:40.082298 ignition[665]: parsed url from cmdline: "" May 15 00:07:40.082301 ignition[665]: no config URL provided May 15 00:07:40.082306 ignition[665]: reading system config file "/usr/lib/ignition/user.ign" May 15 00:07:40.086130 systemd-networkd[767]: lo: Link UP May 15 00:07:40.082312 ignition[665]: no config at "/usr/lib/ignition/user.ign" May 15 00:07:40.086133 systemd-networkd[767]: lo: Gained carrier May 15 00:07:40.082335 ignition[665]: op(1): [started] loading QEMU firmware config module May 15 00:07:40.086780 systemd-networkd[767]: Enumeration completed May 15 00:07:40.082340 ignition[665]: op(1): executing: "modprobe" "qemu_fw_cfg" May 15 00:07:40.087361 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 00:07:40.090668 ignition[665]: op(1): [finished] loading QEMU firmware config module May 15 00:07:40.087364 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 00:07:40.088059 systemd-networkd[767]: eth0: Link UP May 15 00:07:40.088062 systemd-networkd[767]: eth0: Gained carrier May 15 00:07:40.088069 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 00:07:40.088906 systemd[1]: Started systemd-networkd.service - Network Configuration. May 15 00:07:40.091955 systemd[1]: Reached target network.target - Network. May 15 00:07:40.115856 systemd-networkd[767]: eth0: DHCPv4 address 10.0.0.17/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 15 00:07:40.139585 ignition[665]: parsing config with SHA512: e7186ed6462503aa14f00938b2528fa3a031c2f86c53568b0b6d8f3e35c0b344234901455ee13c31a67730767a7d29e150b55933b78c69bd8503826495124463 May 15 00:07:40.143606 unknown[665]: fetched base config from "system" May 15 00:07:40.143615 unknown[665]: fetched user config from "qemu" May 15 00:07:40.144019 ignition[665]: fetch-offline: fetch-offline passed May 15 00:07:40.145881 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
May 15 00:07:40.144080 ignition[665]: Ignition finished successfully May 15 00:07:40.147504 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 15 00:07:40.158014 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 15 00:07:40.169049 ignition[773]: Ignition 2.19.0 May 15 00:07:40.169058 ignition[773]: Stage: kargs May 15 00:07:40.169242 ignition[773]: no configs at "/usr/lib/ignition/base.d" May 15 00:07:40.169252 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 00:07:40.171863 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 15 00:07:40.170097 ignition[773]: kargs: kargs passed May 15 00:07:40.170145 ignition[773]: Ignition finished successfully May 15 00:07:40.174227 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 15 00:07:40.188390 ignition[781]: Ignition 2.19.0 May 15 00:07:40.188404 ignition[781]: Stage: disks May 15 00:07:40.188587 ignition[781]: no configs at "/usr/lib/ignition/base.d" May 15 00:07:40.191441 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 15 00:07:40.188597 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 00:07:40.192814 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 15 00:07:40.189524 ignition[781]: disks: disks passed May 15 00:07:40.194507 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 15 00:07:40.189570 ignition[781]: Ignition finished successfully May 15 00:07:40.196593 systemd[1]: Reached target local-fs.target - Local File Systems. May 15 00:07:40.198489 systemd[1]: Reached target sysinit.target - System Initialization. May 15 00:07:40.199942 systemd[1]: Reached target basic.target - Basic System. May 15 00:07:40.208959 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 15 00:07:40.220081 systemd-fsck[792]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 15 00:07:40.233277 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 15 00:07:40.241951 systemd[1]: Mounting sysroot.mount - /sysroot... May 15 00:07:40.283812 kernel: EXT4-fs (vda9): mounted filesystem 5a01cbd3-e7cb-4475-87b3-07e348161203 r/w with ordered data mode. Quota mode: none. May 15 00:07:40.283977 systemd[1]: Mounted sysroot.mount - /sysroot. May 15 00:07:40.285271 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 15 00:07:40.300967 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 15 00:07:40.302906 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 15 00:07:40.304329 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 15 00:07:40.304375 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 15 00:07:40.311143 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (800) May 15 00:07:40.311174 kernel: BTRFS info (device vda6): first mount of filesystem 472de571-4852-412e-83c6-4e5fddef810b May 15 00:07:40.304399 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
May 15 00:07:40.315860 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 15 00:07:40.315884 kernel: BTRFS info (device vda6): using free space tree May 15 00:07:40.309078 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 15 00:07:40.318552 kernel: BTRFS info (device vda6): auto enabling async discard May 15 00:07:40.315164 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 15 00:07:40.320017 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 15 00:07:40.358131 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory May 15 00:07:40.361463 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory May 15 00:07:40.364622 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory May 15 00:07:40.368836 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory May 15 00:07:40.448881 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 15 00:07:40.461909 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 15 00:07:40.464416 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 15 00:07:40.469801 kernel: BTRFS info (device vda6): last unmount of filesystem 472de571-4852-412e-83c6-4e5fddef810b May 15 00:07:40.485101 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 15 00:07:40.486971 ignition[914]: INFO : Ignition 2.19.0 May 15 00:07:40.486971 ignition[914]: INFO : Stage: mount May 15 00:07:40.486971 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 00:07:40.486971 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 00:07:40.490691 ignition[914]: INFO : mount: mount passed May 15 00:07:40.490691 ignition[914]: INFO : Ignition finished successfully May 15 00:07:40.489850 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 15 00:07:40.496911 systemd[1]: Starting ignition-files.service - Ignition (files)... May 15 00:07:40.917546 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 15 00:07:40.926964 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 15 00:07:40.933709 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (929) May 15 00:07:40.933748 kernel: BTRFS info (device vda6): first mount of filesystem 472de571-4852-412e-83c6-4e5fddef810b May 15 00:07:40.933802 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 15 00:07:40.935334 kernel: BTRFS info (device vda6): using free space tree May 15 00:07:40.937818 kernel: BTRFS info (device vda6): auto enabling async discard May 15 00:07:40.938650 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 15 00:07:40.954769 ignition[946]: INFO : Ignition 2.19.0 May 15 00:07:40.954769 ignition[946]: INFO : Stage: files May 15 00:07:40.956561 ignition[946]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 00:07:40.956561 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 00:07:40.956561 ignition[946]: DEBUG : files: compiled without relabeling support, skipping May 15 00:07:40.959997 ignition[946]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 15 00:07:40.959997 ignition[946]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 15 00:07:40.959997 ignition[946]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 15 00:07:40.959997 ignition[946]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 15 00:07:40.959997 ignition[946]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 15 00:07:40.959489 unknown[946]: wrote ssh authorized keys file for user: core May 15 00:07:40.967646 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 15 00:07:40.967646 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 May 15 00:07:41.164734 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 15 00:07:41.415175 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 15 00:07:41.415175 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 15 00:07:41.420213 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 15 00:07:41.420213 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 15 00:07:41.420213 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 15 00:07:41.420213 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 00:07:41.420213 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 00:07:41.420213 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 00:07:41.420213 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 00:07:41.420213 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 15 00:07:41.420213 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 15 00:07:41.420213 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 15 00:07:41.420213 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 15 00:07:41.420213 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 15 00:07:41.420213 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 May 15 00:07:41.686987 systemd-networkd[767]: eth0: Gained IPv6LL May 15 00:07:41.829911 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 15 00:07:42.164558 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 15 00:07:42.164558 ignition[946]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 15 00:07:42.168731 ignition[946]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 00:07:42.168731 ignition[946]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 00:07:42.168731 ignition[946]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 15 00:07:42.168731 ignition[946]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 15 00:07:42.168731 ignition[946]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 15 00:07:42.168731 ignition[946]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 15 00:07:42.168731 ignition[946]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" May 15 00:07:42.168731 ignition[946]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" May 15 00:07:42.208821 ignition[946]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" May 15 00:07:42.213263 ignition[946]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 15 00:07:42.216023 ignition[946]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" May 15 00:07:42.216023 ignition[946]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" May 15 00:07:42.216023 ignition[946]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" May 15 00:07:42.216023 ignition[946]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" May 15 00:07:42.216023 ignition[946]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" May 15 00:07:42.216023 ignition[946]: INFO : files: files passed May 15 00:07:42.216023 ignition[946]: INFO : Ignition finished successfully May 15 00:07:42.217088 systemd[1]: Finished ignition-files.service - Ignition (files). May 15 00:07:42.230019 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 15 00:07:42.233489 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 15 00:07:42.235169 systemd[1]: ignition-quench.service: Deactivated successfully. May 15 00:07:42.236900 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 15 00:07:42.241885 initrd-setup-root-after-ignition[975]: grep: /sysroot/oem/oem-release: No such file or directory May 15 00:07:42.245135 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 00:07:42.245135 initrd-setup-root-after-ignition[977]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 15 00:07:42.248591 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 00:07:42.252840 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 15 00:07:42.254660 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 15 00:07:42.260944 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 15 00:07:42.282919 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 15 00:07:42.284014 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 15 00:07:42.285476 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 15 00:07:42.287406 systemd[1]: Reached target initrd.target - Initrd Default Target. May 15 00:07:42.289253 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 15 00:07:42.295966 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 15 00:07:42.307803 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 15 00:07:42.310481 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 15 00:07:42.322569 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 15 00:07:42.323959 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 00:07:42.325262 systemd[1]: Stopped target timers.target - Timer Units. May 15 00:07:42.326317 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 15 00:07:42.326440 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 15 00:07:42.328902 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 15 00:07:42.330024 systemd[1]: Stopped target basic.target - Basic System. May 15 00:07:42.331915 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 15 00:07:42.333746 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 15 00:07:42.335756 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 15 00:07:42.337986 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 15 00:07:42.340035 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 15 00:07:42.342110 systemd[1]: Stopped target sysinit.target - System Initialization. May 15 00:07:42.344150 systemd[1]: Stopped target local-fs.target - Local File Systems. May 15 00:07:42.346152 systemd[1]: Stopped target swap.target - Swaps. May 15 00:07:42.347942 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 15 00:07:42.348079 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 15 00:07:42.351295 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
May 15 00:07:42.353617 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 00:07:42.355594 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 15 00:07:42.358864 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 00:07:42.361018 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 15 00:07:42.361144 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 15 00:07:42.364014 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 15 00:07:42.364139 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 15 00:07:42.366684 systemd[1]: Stopped target paths.target - Path Units. May 15 00:07:42.368327 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 15 00:07:42.371856 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 00:07:42.373695 systemd[1]: Stopped target slices.target - Slice Units. May 15 00:07:42.375618 systemd[1]: Stopped target sockets.target - Socket Units. May 15 00:07:42.377982 systemd[1]: iscsid.socket: Deactivated successfully. May 15 00:07:42.378075 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 15 00:07:42.379797 systemd[1]: iscsiuio.socket: Deactivated successfully. May 15 00:07:42.379964 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 15 00:07:42.381680 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 15 00:07:42.381802 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 15 00:07:42.383718 systemd[1]: ignition-files.service: Deactivated successfully. May 15 00:07:42.383837 systemd[1]: Stopped ignition-files.service - Ignition (files). May 15 00:07:42.401055 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 15 00:07:42.403048 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 15 00:07:42.404099 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 15 00:07:42.404236 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 15 00:07:42.406327 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 15 00:07:42.406437 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 15 00:07:42.413104 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 15 00:07:42.414358 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 15 00:07:42.418006 ignition[1001]: INFO : Ignition 2.19.0 May 15 00:07:42.418006 ignition[1001]: INFO : Stage: umount May 15 00:07:42.418006 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 00:07:42.418006 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 00:07:42.418006 ignition[1001]: INFO : umount: umount passed May 15 00:07:42.418006 ignition[1001]: INFO : Ignition finished successfully May 15 00:07:42.416977 systemd[1]: ignition-mount.service: Deactivated successfully. May 15 00:07:42.417072 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 15 00:07:42.419918 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 15 00:07:42.421479 systemd[1]: Stopped target network.target - Network. May 15 00:07:42.426782 systemd[1]: ignition-disks.service: Deactivated successfully. 
May 15 00:07:42.426893 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 15 00:07:42.429116 systemd[1]: ignition-kargs.service: Deactivated successfully. May 15 00:07:42.429170 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 15 00:07:42.430888 systemd[1]: ignition-setup.service: Deactivated successfully. May 15 00:07:42.430934 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 15 00:07:42.432819 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 15 00:07:42.432878 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 15 00:07:42.435105 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 15 00:07:42.436921 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 15 00:07:42.442833 systemd-networkd[767]: eth0: DHCPv6 lease lost May 15 00:07:42.444235 systemd[1]: systemd-networkd.service: Deactivated successfully. May 15 00:07:42.444352 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 15 00:07:42.446126 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 15 00:07:42.446157 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 15 00:07:42.466922 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 15 00:07:42.467893 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 15 00:07:42.467962 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 15 00:07:42.470163 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 00:07:42.478547 systemd[1]: systemd-resolved.service: Deactivated successfully. May 15 00:07:42.478646 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 15 00:07:42.490171 systemd[1]: systemd-udevd.service: Deactivated successfully. May 15 00:07:42.491006 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 00:07:42.492894 systemd[1]: sysroot-boot.service: Deactivated successfully. May 15 00:07:42.492984 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 15 00:07:42.495250 systemd[1]: network-cleanup.service: Deactivated successfully. May 15 00:07:42.495334 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 15 00:07:42.499976 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 15 00:07:42.500114 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 15 00:07:42.502999 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 15 00:07:42.503122 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 15 00:07:42.505930 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 15 00:07:42.506063 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 15 00:07:42.509420 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 15 00:07:42.509613 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 15 00:07:42.512540 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 15 00:07:42.512594 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 00:07:42.515678 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 15 00:07:42.515729 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
May 15 00:07:42.531961 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 15 00:07:42.533106 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 00:07:42.533166 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 15 00:07:42.535280 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 15 00:07:42.535326 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 15 00:07:42.537149 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 15 00:07:42.537197 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 00:07:42.539693 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 15 00:07:42.539741 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 15 00:07:42.541776 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 15 00:07:42.541840 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 15 00:07:42.543898 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 15 00:07:42.543944 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 00:07:42.546029 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 00:07:42.546076 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 15 00:07:42.548573 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 15 00:07:42.548661 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 15 00:07:42.550808 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 15 00:07:42.553721 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 15 00:07:42.564282 systemd[1]: Switching root. May 15 00:07:42.595064 systemd-journald[238]: Journal stopped May 15 00:07:43.331678 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). May 15 00:07:43.331736 kernel: SELinux: policy capability network_peer_controls=1 May 15 00:07:43.331748 kernel: SELinux: policy capability open_perms=1 May 15 00:07:43.331763 kernel: SELinux: policy capability extended_socket_class=1 May 15 00:07:43.331773 kernel: SELinux: policy capability always_check_network=0 May 15 00:07:43.331803 kernel: SELinux: policy capability cgroup_seclabel=1 May 15 00:07:43.331817 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 15 00:07:43.331834 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 15 00:07:43.331845 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 15 00:07:43.331855 kernel: audit: type=1403 audit(1747267662.747:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 15 00:07:43.331868 systemd[1]: Successfully loaded SELinux policy in 32.449ms. May 15 00:07:43.331885 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.439ms. May 15 00:07:43.331896 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 15 00:07:43.331907 systemd[1]: Detected virtualization kvm. May 15 00:07:43.331917 systemd[1]: Detected architecture arm64. 
May 15 00:07:43.331927 systemd[1]: Detected first boot. May 15 00:07:43.331937 systemd[1]: Initializing machine ID from VM UUID. May 15 00:07:43.331947 zram_generator::config[1046]: No configuration found. May 15 00:07:43.331960 systemd[1]: Populated /etc with preset unit settings. May 15 00:07:43.331971 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 15 00:07:43.331985 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 15 00:07:43.331996 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 15 00:07:43.332007 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 15 00:07:43.332017 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 15 00:07:43.332029 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 15 00:07:43.332040 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 15 00:07:43.332058 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 15 00:07:43.332070 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 15 00:07:43.332081 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 15 00:07:43.332092 systemd[1]: Created slice user.slice - User and Session Slice. May 15 00:07:43.332102 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 00:07:43.332114 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 00:07:43.332125 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 15 00:07:43.332135 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 15 00:07:43.332146 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 15 00:07:43.332158 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 15 00:07:43.332169 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 15 00:07:43.332179 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 00:07:43.332189 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 15 00:07:43.332199 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 15 00:07:43.332209 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 15 00:07:43.332220 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 15 00:07:43.332230 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 00:07:43.332243 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 15 00:07:43.332254 systemd[1]: Reached target slices.target - Slice Units. May 15 00:07:43.332265 systemd[1]: Reached target swap.target - Swaps. May 15 00:07:43.332367 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 15 00:07:43.332385 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 15 00:07:43.332397 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 15 00:07:43.332409 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
May 15 00:07:43.332422 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 15 00:07:43.332445 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 15 00:07:43.332502 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 15 00:07:43.332517 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 15 00:07:43.332528 systemd[1]: Mounting media.mount - External Media Directory... May 15 00:07:43.332538 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 15 00:07:43.332548 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 15 00:07:43.332559 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 15 00:07:43.332570 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 15 00:07:43.332580 systemd[1]: Reached target machines.target - Containers. May 15 00:07:43.332591 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 15 00:07:43.332604 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 00:07:43.332615 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 15 00:07:43.332626 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 15 00:07:43.332636 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 00:07:43.332647 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 15 00:07:43.332660 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 00:07:43.332671 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 15 00:07:43.332681 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 00:07:43.332693 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 15 00:07:43.332703 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 15 00:07:43.332714 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 15 00:07:43.332724 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 15 00:07:43.332734 systemd[1]: Stopped systemd-fsck-usr.service. May 15 00:07:43.332744 kernel: fuse: init (API version 7.39) May 15 00:07:43.332754 systemd[1]: Starting systemd-journald.service - Journal Service... May 15 00:07:43.332768 kernel: ACPI: bus type drm_connector registered May 15 00:07:43.332777 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 15 00:07:43.332817 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 15 00:07:43.332837 kernel: loop: module loaded May 15 00:07:43.332848 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 15 00:07:43.332859 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 15 00:07:43.332870 systemd[1]: verity-setup.service: Deactivated successfully. May 15 00:07:43.332880 systemd[1]: Stopped verity-setup.service. May 15 00:07:43.332929 systemd-journald[1117]: Collecting audit messages is disabled. 
May 15 00:07:43.333048 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 15 00:07:43.333070 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 15 00:07:43.333085 systemd-journald[1117]: Journal started May 15 00:07:43.333107 systemd-journald[1117]: Runtime Journal (/run/log/journal/6d40f658ad594db6b79a2bc6607ca8b0) is 5.9M, max 47.3M, 41.4M free. May 15 00:07:43.106895 systemd[1]: Queued start job for default target multi-user.target. May 15 00:07:43.129237 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 15 00:07:43.129599 systemd[1]: systemd-journald.service: Deactivated successfully. May 15 00:07:43.337129 systemd[1]: Started systemd-journald.service - Journal Service. May 15 00:07:43.337831 systemd[1]: Mounted media.mount - External Media Directory. May 15 00:07:43.339010 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 15 00:07:43.340246 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 15 00:07:43.341610 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 15 00:07:43.342930 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 15 00:07:43.345830 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 15 00:07:43.347329 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 15 00:07:43.347475 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 15 00:07:43.348978 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 00:07:43.349109 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 00:07:43.350644 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 00:07:43.350798 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 15 00:07:43.352173 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 00:07:43.352311 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 00:07:43.354013 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 15 00:07:43.354155 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 15 00:07:43.357095 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 00:07:43.357234 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 00:07:43.358672 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 15 00:07:43.360061 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 15 00:07:43.361778 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 15 00:07:43.375715 systemd[1]: Reached target network-pre.target - Preparation for Network. May 15 00:07:43.385356 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 15 00:07:43.387851 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 15 00:07:43.389046 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 15 00:07:43.389089 systemd[1]: Reached target local-fs.target - Local File Systems. May 15 00:07:43.391263 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 15 00:07:43.393686 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
May 15 00:07:43.397017 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 15 00:07:43.398386 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 00:07:43.399906 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 15 00:07:43.402372 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 15 00:07:43.403694 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 00:07:43.408024 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 15 00:07:43.409590 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 15 00:07:43.411095 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 00:07:43.417981 systemd-journald[1117]: Time spent on flushing to /var/log/journal/6d40f658ad594db6b79a2bc6607ca8b0 is 31.237ms for 856 entries. May 15 00:07:43.417981 systemd-journald[1117]: System Journal (/var/log/journal/6d40f658ad594db6b79a2bc6607ca8b0) is 8.0M, max 195.6M, 187.6M free. May 15 00:07:43.469487 systemd-journald[1117]: Received client request to flush runtime journal. May 15 00:07:43.469607 kernel: loop0: detected capacity change from 0 to 114432 May 15 00:07:43.469634 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 15 00:07:43.416033 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 15 00:07:43.422098 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 15 00:07:43.429383 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 15 00:07:43.431610 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 15 00:07:43.433296 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 15 00:07:43.436909 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 15 00:07:43.439366 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 15 00:07:43.445405 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 15 00:07:43.454071 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 15 00:07:43.457008 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 15 00:07:43.473045 systemd-tmpfiles[1158]: ACLs are not supported, ignoring. May 15 00:07:43.473064 systemd-tmpfiles[1158]: ACLs are not supported, ignoring. May 15 00:07:43.477420 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 00:07:43.484634 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 15 00:07:43.486801 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 15 00:07:43.502280 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 15 00:07:43.505118 udevadm[1168]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. 
May 15 00:07:43.508872 kernel: loop1: detected capacity change from 0 to 114328 May 15 00:07:43.511632 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 15 00:07:43.514406 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 15 00:07:43.530435 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 15 00:07:43.537059 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 15 00:07:43.551490 kernel: loop2: detected capacity change from 0 to 194096 May 15 00:07:43.552535 systemd-tmpfiles[1181]: ACLs are not supported, ignoring. May 15 00:07:43.552554 systemd-tmpfiles[1181]: ACLs are not supported, ignoring. May 15 00:07:43.558880 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 00:07:43.605354 kernel: loop3: detected capacity change from 0 to 114432 May 15 00:07:43.611886 kernel: loop4: detected capacity change from 0 to 114328 May 15 00:07:43.619426 kernel: loop5: detected capacity change from 0 to 194096 May 15 00:07:43.622466 (sd-merge)[1185]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 15 00:07:43.624010 (sd-merge)[1185]: Merged extensions into '/usr'. May 15 00:07:43.628171 systemd[1]: Reloading requested from client PID 1157 ('systemd-sysext') (unit systemd-sysext.service)... May 15 00:07:43.628186 systemd[1]: Reloading... May 15 00:07:43.679845 zram_generator::config[1208]: No configuration found. May 15 00:07:43.739936 ldconfig[1152]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 15 00:07:43.796711 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 00:07:43.834488 systemd[1]: Reloading finished in 205 ms. May 15 00:07:43.867000 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 15 00:07:43.868694 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 15 00:07:43.889083 systemd[1]: Starting ensure-sysext.service... May 15 00:07:43.891351 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 15 00:07:43.914509 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 15 00:07:43.914769 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 15 00:07:43.915465 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 15 00:07:43.915686 systemd-tmpfiles[1246]: ACLs are not supported, ignoring. May 15 00:07:43.915742 systemd-tmpfiles[1246]: ACLs are not supported, ignoring. May 15 00:07:43.918026 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot. May 15 00:07:43.918037 systemd-tmpfiles[1246]: Skipping /boot May 15 00:07:43.925211 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot. May 15 00:07:43.925225 systemd-tmpfiles[1246]: Skipping /boot May 15 00:07:43.928114 systemd[1]: Reloading requested from client PID 1245 ('systemctl') (unit ensure-sysext.service)... May 15 00:07:43.928131 systemd[1]: Reloading... May 15 00:07:43.976821 zram_generator::config[1273]: No configuration found. 
May 15 00:07:44.062508 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 00:07:44.101708 systemd[1]: Reloading finished in 173 ms. May 15 00:07:44.117988 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 15 00:07:44.137249 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 00:07:44.144745 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 15 00:07:44.147519 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 15 00:07:44.149970 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 15 00:07:44.153066 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 15 00:07:44.159138 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 00:07:44.164146 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 15 00:07:44.173093 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 00:07:44.177103 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 00:07:44.180192 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 00:07:44.184896 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 00:07:44.186058 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 00:07:44.187054 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 15 00:07:44.189059 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 00:07:44.189207 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 00:07:44.194375 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 00:07:44.194553 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 00:07:44.200860 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 00:07:44.201038 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 00:07:44.204489 systemd-udevd[1315]: Using default interface naming scheme 'v255'. May 15 00:07:44.204611 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 00:07:44.207107 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 00:07:44.211213 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 00:07:44.212709 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 00:07:44.214091 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 15 00:07:44.217401 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 15 00:07:44.219778 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 15 00:07:44.222295 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 00:07:44.223848 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
May 15 00:07:44.225840 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 00:07:44.225969 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 00:07:44.232830 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 15 00:07:44.237879 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 00:07:44.247893 augenrules[1346]: No rules May 15 00:07:44.253098 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 00:07:44.256646 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 15 00:07:44.259028 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 00:07:44.262440 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 00:07:44.263647 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 00:07:44.264348 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 00:07:44.266557 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 15 00:07:44.269835 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 15 00:07:44.271418 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 15 00:07:44.273526 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 00:07:44.273663 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 00:07:44.275337 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 00:07:44.276842 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 15 00:07:44.278362 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 00:07:44.278483 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 00:07:44.280068 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 00:07:44.280192 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 00:07:44.285729 systemd[1]: Finished ensure-sysext.service. May 15 00:07:44.295374 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 15 00:07:44.304140 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 15 00:07:44.306126 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 00:07:44.306333 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 15 00:07:44.309333 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 15 00:07:44.314971 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1358) May 15 00:07:44.315968 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 00:07:44.341623 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 15 00:07:44.349073 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
May 15 00:07:44.377806 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 15 00:07:44.395432 systemd-resolved[1313]: Positive Trust Anchors: May 15 00:07:44.395453 systemd-resolved[1313]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 00:07:44.395487 systemd-resolved[1313]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 15 00:07:44.410386 systemd-resolved[1313]: Defaulting to hostname 'linux'. May 15 00:07:44.412099 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 15 00:07:44.413354 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 15 00:07:44.420728 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 15 00:07:44.422657 systemd[1]: Reached target time-set.target - System Time Set. May 15 00:07:44.447082 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 00:07:44.450678 systemd-networkd[1384]: lo: Link UP May 15 00:07:44.451049 systemd-networkd[1384]: lo: Gained carrier May 15 00:07:44.451868 systemd-networkd[1384]: Enumeration completed May 15 00:07:44.452061 systemd[1]: Started systemd-networkd.service - Network Configuration. May 15 00:07:44.452906 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 00:07:44.453008 systemd-networkd[1384]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 00:07:44.453278 systemd[1]: Reached target network.target - Network. May 15 00:07:44.454088 systemd-networkd[1384]: eth0: Link UP May 15 00:07:44.454173 systemd-networkd[1384]: eth0: Gained carrier May 15 00:07:44.454225 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 00:07:44.456519 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 15 00:07:44.459145 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 15 00:07:44.464677 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 15 00:07:44.503860 systemd-networkd[1384]: eth0: DHCPv4 address 10.0.0.17/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 15 00:07:44.504640 systemd-timesyncd[1385]: Network configuration changed, trying to establish connection. May 15 00:07:44.505472 systemd-timesyncd[1385]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 15 00:07:44.505524 systemd-timesyncd[1385]: Initial clock synchronization to Thu 2025-05-15 00:07:44.130082 UTC. May 15 00:07:44.545320 lvm[1405]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 15 00:07:44.564452 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
May 15 00:07:44.580964 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 15 00:07:44.582674 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 15 00:07:44.583958 systemd[1]: Reached target sysinit.target - System Initialization. May 15 00:07:44.585290 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 15 00:07:44.586649 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 15 00:07:44.588208 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 15 00:07:44.589710 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 15 00:07:44.591151 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 15 00:07:44.592567 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 15 00:07:44.592622 systemd[1]: Reached target paths.target - Path Units. May 15 00:07:44.593655 systemd[1]: Reached target timers.target - Timer Units. May 15 00:07:44.595581 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 15 00:07:44.598262 systemd[1]: Starting docker.socket - Docker Socket for the API... May 15 00:07:44.609926 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 15 00:07:44.612635 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 15 00:07:44.614462 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 15 00:07:44.615901 systemd[1]: Reached target sockets.target - Socket Units. May 15 00:07:44.617011 systemd[1]: Reached target basic.target - Basic System. May 15 00:07:44.618107 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 15 00:07:44.618148 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 15 00:07:44.619198 systemd[1]: Starting containerd.service - containerd container runtime... May 15 00:07:44.621601 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 15 00:07:44.622333 lvm[1412]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 15 00:07:44.624982 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 15 00:07:44.628461 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 15 00:07:44.630665 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 15 00:07:44.631915 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 15 00:07:44.637343 jq[1415]: false May 15 00:07:44.637967 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 15 00:07:44.643106 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
May 15 00:07:44.646885 extend-filesystems[1416]: Found loop3 May 15 00:07:44.646885 extend-filesystems[1416]: Found loop4 May 15 00:07:44.646885 extend-filesystems[1416]: Found loop5 May 15 00:07:44.646885 extend-filesystems[1416]: Found vda May 15 00:07:44.646885 extend-filesystems[1416]: Found vda1 May 15 00:07:44.646885 extend-filesystems[1416]: Found vda2 May 15 00:07:44.646885 extend-filesystems[1416]: Found vda3 May 15 00:07:44.646885 extend-filesystems[1416]: Found usr May 15 00:07:44.646885 extend-filesystems[1416]: Found vda4 May 15 00:07:44.646885 extend-filesystems[1416]: Found vda6 May 15 00:07:44.646885 extend-filesystems[1416]: Found vda7 May 15 00:07:44.646885 extend-filesystems[1416]: Found vda9 May 15 00:07:44.646885 extend-filesystems[1416]: Checking size of /dev/vda9 May 15 00:07:44.661433 extend-filesystems[1416]: Resized partition /dev/vda9 May 15 00:07:44.647419 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 15 00:07:44.662610 extend-filesystems[1432]: resize2fs 1.47.1 (20-May-2024) May 15 00:07:44.669587 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 15 00:07:44.662679 systemd[1]: Starting systemd-logind.service - User Login Management... May 15 00:07:44.674623 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 15 00:07:44.675123 dbus-daemon[1414]: [system] SELinux support is enabled May 15 00:07:44.675192 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 15 00:07:44.676845 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1372) May 15 00:07:44.682977 systemd[1]: Starting update-engine.service - Update Engine... May 15 00:07:44.688210 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 15 00:07:44.692642 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 15 00:07:44.697007 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 15 00:07:44.700887 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 15 00:07:44.714944 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 15 00:07:44.715004 jq[1438]: true May 15 00:07:44.701055 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 15 00:07:44.701315 systemd[1]: motdgen.service: Deactivated successfully. May 15 00:07:44.701450 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 15 00:07:44.715207 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 15 00:07:44.715359 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 15 00:07:44.717137 extend-filesystems[1432]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 15 00:07:44.717137 extend-filesystems[1432]: old_desc_blocks = 1, new_desc_blocks = 1 May 15 00:07:44.717137 extend-filesystems[1432]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
May 15 00:07:44.728315 extend-filesystems[1416]: Resized filesystem in /dev/vda9 May 15 00:07:44.732129 jq[1442]: true May 15 00:07:44.733182 (ntainerd)[1443]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 15 00:07:44.734293 systemd[1]: extend-filesystems.service: Deactivated successfully. May 15 00:07:44.734907 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 15 00:07:44.756377 systemd-logind[1429]: Watching system buttons on /dev/input/event0 (Power Button) May 15 00:07:44.756587 systemd-logind[1429]: New seat seat0. May 15 00:07:44.759927 systemd[1]: Started systemd-logind.service - User Login Management. May 15 00:07:44.762637 update_engine[1436]: I20250515 00:07:44.762426 1436 main.cc:92] Flatcar Update Engine starting May 15 00:07:44.764707 dbus-daemon[1414]: [system] Successfully activated service 'org.freedesktop.systemd1' May 15 00:07:44.767029 tar[1440]: linux-arm64/helm May 15 00:07:44.768824 update_engine[1436]: I20250515 00:07:44.767760 1436 update_check_scheduler.cc:74] Next update check in 7m9s May 15 00:07:44.772498 systemd[1]: Started update-engine.service - Update Engine. May 15 00:07:44.775128 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 15 00:07:44.775274 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 15 00:07:44.777994 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 15 00:07:44.778114 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 15 00:07:44.796051 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 15 00:07:44.824657 bash[1470]: Updated "/home/core/.ssh/authorized_keys" May 15 00:07:44.831975 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 15 00:07:44.834097 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 15 00:07:44.865909 locksmithd[1469]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 15 00:07:44.949285 containerd[1443]: time="2025-05-15T00:07:44.949196240Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 15 00:07:44.974971 containerd[1443]: time="2025-05-15T00:07:44.974918440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 15 00:07:44.976871 containerd[1443]: time="2025-05-15T00:07:44.976800320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 15 00:07:44.976871 containerd[1443]: time="2025-05-15T00:07:44.976863200Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 15 00:07:44.976947 containerd[1443]: time="2025-05-15T00:07:44.976889440Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 May 15 00:07:44.977533 containerd[1443]: time="2025-05-15T00:07:44.977066880Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 15 00:07:44.977533 containerd[1443]: time="2025-05-15T00:07:44.977097280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 15 00:07:44.977533 containerd[1443]: time="2025-05-15T00:07:44.977162000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 15 00:07:44.977533 containerd[1443]: time="2025-05-15T00:07:44.977176840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 15 00:07:44.977533 containerd[1443]: time="2025-05-15T00:07:44.977357360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 15 00:07:44.977533 containerd[1443]: time="2025-05-15T00:07:44.977377960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 15 00:07:44.977533 containerd[1443]: time="2025-05-15T00:07:44.977397400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 15 00:07:44.977533 containerd[1443]: time="2025-05-15T00:07:44.977412600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 15 00:07:44.977533 containerd[1443]: time="2025-05-15T00:07:44.977491760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 15 00:07:44.977728 containerd[1443]: time="2025-05-15T00:07:44.977692480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 15 00:07:44.977882 containerd[1443]: time="2025-05-15T00:07:44.977855600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 15 00:07:44.977916 containerd[1443]: time="2025-05-15T00:07:44.977882880Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 15 00:07:44.978014 containerd[1443]: time="2025-05-15T00:07:44.977986400Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 15 00:07:44.978072 containerd[1443]: time="2025-05-15T00:07:44.978053880Z" level=info msg="metadata content store policy set" policy=shared May 15 00:07:44.987438 containerd[1443]: time="2025-05-15T00:07:44.987385880Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 15 00:07:44.987528 containerd[1443]: time="2025-05-15T00:07:44.987457600Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 15 00:07:44.987528 containerd[1443]: time="2025-05-15T00:07:44.987475840Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 May 15 00:07:44.987528 containerd[1443]: time="2025-05-15T00:07:44.987492000Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 15 00:07:44.987528 containerd[1443]: time="2025-05-15T00:07:44.987525920Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 15 00:07:44.987734 containerd[1443]: time="2025-05-15T00:07:44.987694680Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 15 00:07:44.988750 containerd[1443]: time="2025-05-15T00:07:44.988013320Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 15 00:07:44.988750 containerd[1443]: time="2025-05-15T00:07:44.988167120Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 15 00:07:44.988750 containerd[1443]: time="2025-05-15T00:07:44.988184680Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 15 00:07:44.988750 containerd[1443]: time="2025-05-15T00:07:44.988198400Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 15 00:07:44.988750 containerd[1443]: time="2025-05-15T00:07:44.988213160Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 15 00:07:44.988750 containerd[1443]: time="2025-05-15T00:07:44.988226520Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 15 00:07:44.988750 containerd[1443]: time="2025-05-15T00:07:44.988239520Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 15 00:07:44.988750 containerd[1443]: time="2025-05-15T00:07:44.988253520Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 15 00:07:44.988750 containerd[1443]: time="2025-05-15T00:07:44.988267520Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 15 00:07:44.988750 containerd[1443]: time="2025-05-15T00:07:44.988280000Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 15 00:07:44.988750 containerd[1443]: time="2025-05-15T00:07:44.988292240Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 15 00:07:44.988750 containerd[1443]: time="2025-05-15T00:07:44.988303960Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 15 00:07:44.988750 containerd[1443]: time="2025-05-15T00:07:44.988325480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 15 00:07:44.988750 containerd[1443]: time="2025-05-15T00:07:44.988339800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 15 00:07:44.989061 containerd[1443]: time="2025-05-15T00:07:44.988351800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 15 00:07:44.989061 containerd[1443]: time="2025-05-15T00:07:44.988364760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 May 15 00:07:44.989061 containerd[1443]: time="2025-05-15T00:07:44.988376920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 15 00:07:44.989061 containerd[1443]: time="2025-05-15T00:07:44.988389240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 15 00:07:44.989061 containerd[1443]: time="2025-05-15T00:07:44.988400840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 15 00:07:44.989061 containerd[1443]: time="2025-05-15T00:07:44.988413400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 15 00:07:44.989061 containerd[1443]: time="2025-05-15T00:07:44.988429440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 15 00:07:44.989061 containerd[1443]: time="2025-05-15T00:07:44.988444360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 15 00:07:44.989061 containerd[1443]: time="2025-05-15T00:07:44.988456560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 15 00:07:44.989061 containerd[1443]: time="2025-05-15T00:07:44.988468480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 15 00:07:44.989061 containerd[1443]: time="2025-05-15T00:07:44.988480600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 15 00:07:44.989061 containerd[1443]: time="2025-05-15T00:07:44.988495560Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 15 00:07:44.989061 containerd[1443]: time="2025-05-15T00:07:44.988515320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 15 00:07:44.989061 containerd[1443]: time="2025-05-15T00:07:44.988528280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 15 00:07:44.989061 containerd[1443]: time="2025-05-15T00:07:44.988538840Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 15 00:07:44.990014 containerd[1443]: time="2025-05-15T00:07:44.989982680Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 15 00:07:44.990202 containerd[1443]: time="2025-05-15T00:07:44.990180000Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 15 00:07:44.990267 containerd[1443]: time="2025-05-15T00:07:44.990253360Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 15 00:07:44.990374 containerd[1443]: time="2025-05-15T00:07:44.990355920Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 15 00:07:44.990428 containerd[1443]: time="2025-05-15T00:07:44.990415480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 15 00:07:44.990497 containerd[1443]: time="2025-05-15T00:07:44.990484000Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 May 15 00:07:44.990605 containerd[1443]: time="2025-05-15T00:07:44.990589240Z" level=info msg="NRI interface is disabled by configuration." May 15 00:07:44.990663 containerd[1443]: time="2025-05-15T00:07:44.990651400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 15 00:07:44.991283 containerd[1443]: time="2025-05-15T00:07:44.991165720Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 15 00:07:44.991509 containerd[1443]: time="2025-05-15T00:07:44.991490160Z" level=info msg="Connect containerd service" May 15 00:07:44.991606 containerd[1443]: time="2025-05-15T00:07:44.991591040Z" level=info msg="using legacy CRI server" May 15 00:07:44.991728 containerd[1443]: time="2025-05-15T00:07:44.991641880Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 15 00:07:44.991891 containerd[1443]: time="2025-05-15T00:07:44.991870000Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 15 00:07:44.992962 
containerd[1443]: time="2025-05-15T00:07:44.992933320Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 00:07:44.993286 containerd[1443]: time="2025-05-15T00:07:44.993246320Z" level=info msg="Start subscribing containerd event" May 15 00:07:44.993329 containerd[1443]: time="2025-05-15T00:07:44.993296560Z" level=info msg="Start recovering state" May 15 00:07:44.993403 containerd[1443]: time="2025-05-15T00:07:44.993384040Z" level=info msg="Start event monitor" May 15 00:07:44.993403 containerd[1443]: time="2025-05-15T00:07:44.993400880Z" level=info msg="Start snapshots syncer" May 15 00:07:44.993515 containerd[1443]: time="2025-05-15T00:07:44.993410320Z" level=info msg="Start cni network conf syncer for default" May 15 00:07:44.993515 containerd[1443]: time="2025-05-15T00:07:44.993418640Z" level=info msg="Start streaming server" May 15 00:07:44.994096 containerd[1443]: time="2025-05-15T00:07:44.994073400Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 15 00:07:44.994463 containerd[1443]: time="2025-05-15T00:07:44.994441360Z" level=info msg=serving... address=/run/containerd/containerd.sock May 15 00:07:44.994895 containerd[1443]: time="2025-05-15T00:07:44.994876960Z" level=info msg="containerd successfully booted in 0.047829s" May 15 00:07:44.994945 systemd[1]: Started containerd.service - containerd container runtime. May 15 00:07:45.114856 tar[1440]: linux-arm64/LICENSE May 15 00:07:45.114856 tar[1440]: linux-arm64/README.md May 15 00:07:45.128359 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 15 00:07:45.244096 sshd_keygen[1437]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 15 00:07:45.264844 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 15 00:07:45.278105 systemd[1]: Starting issuegen.service - Generate /run/issue... May 15 00:07:45.284490 systemd[1]: issuegen.service: Deactivated successfully. May 15 00:07:45.285844 systemd[1]: Finished issuegen.service - Generate /run/issue. May 15 00:07:45.288935 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 15 00:07:45.302411 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 15 00:07:45.306197 systemd[1]: Started getty@tty1.service - Getty on tty1. May 15 00:07:45.308429 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 15 00:07:45.309745 systemd[1]: Reached target getty.target - Login Prompts. May 15 00:07:45.782940 systemd-networkd[1384]: eth0: Gained IPv6LL May 15 00:07:45.785702 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 15 00:07:45.787505 systemd[1]: Reached target network-online.target - Network is Online. May 15 00:07:45.798028 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 15 00:07:45.800426 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:07:45.802469 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 15 00:07:45.816134 systemd[1]: coreos-metadata.service: Deactivated successfully. May 15 00:07:45.816294 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 15 00:07:45.818095 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
May 15 00:07:45.820392 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 15 00:07:46.285697 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:07:46.287282 systemd[1]: Reached target multi-user.target - Multi-User System. May 15 00:07:46.289949 (kubelet)[1528]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 00:07:46.291877 systemd[1]: Startup finished in 586ms (kernel) + 5.015s (initrd) + 3.578s (userspace) = 9.180s. May 15 00:07:46.743496 kubelet[1528]: E0515 00:07:46.743389 1528 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 00:07:46.746048 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 00:07:46.746185 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 00:07:50.909714 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 15 00:07:50.910943 systemd[1]: Started sshd@0-10.0.0.17:22-10.0.0.1:39234.service - OpenSSH per-connection server daemon (10.0.0.1:39234). May 15 00:07:50.997838 sshd[1543]: Accepted publickey for core from 10.0.0.1 port 39234 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:07:51.000061 sshd[1543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:07:51.008862 systemd-logind[1429]: New session 1 of user core. May 15 00:07:51.009979 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 15 00:07:51.024086 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 15 00:07:51.034320 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 15 00:07:51.038084 systemd[1]: Starting user@500.service - User Manager for UID 500... May 15 00:07:51.044124 (systemd)[1547]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 15 00:07:51.146577 systemd[1547]: Queued start job for default target default.target. May 15 00:07:51.156777 systemd[1547]: Created slice app.slice - User Application Slice. May 15 00:07:51.156823 systemd[1547]: Reached target paths.target - Paths. May 15 00:07:51.156836 systemd[1547]: Reached target timers.target - Timers. May 15 00:07:51.158180 systemd[1547]: Starting dbus.socket - D-Bus User Message Bus Socket... May 15 00:07:51.168862 systemd[1547]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 15 00:07:51.168938 systemd[1547]: Reached target sockets.target - Sockets. May 15 00:07:51.168951 systemd[1547]: Reached target basic.target - Basic System. May 15 00:07:51.168994 systemd[1547]: Reached target default.target - Main User Target. May 15 00:07:51.169022 systemd[1547]: Startup finished in 118ms. May 15 00:07:51.169328 systemd[1]: Started user@500.service - User Manager for UID 500. May 15 00:07:51.170759 systemd[1]: Started session-1.scope - Session 1 of User core. May 15 00:07:51.236024 systemd[1]: Started sshd@1-10.0.0.17:22-10.0.0.1:39246.service - OpenSSH per-connection server daemon (10.0.0.1:39246). 
May 15 00:07:51.272845 sshd[1558]: Accepted publickey for core from 10.0.0.1 port 39246 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:07:51.274344 sshd[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:07:51.278679 systemd-logind[1429]: New session 2 of user core. May 15 00:07:51.289003 systemd[1]: Started session-2.scope - Session 2 of User core. May 15 00:07:51.342980 sshd[1558]: pam_unix(sshd:session): session closed for user core May 15 00:07:51.354438 systemd[1]: sshd@1-10.0.0.17:22-10.0.0.1:39246.service: Deactivated successfully. May 15 00:07:51.356946 systemd[1]: session-2.scope: Deactivated successfully. May 15 00:07:51.359179 systemd-logind[1429]: Session 2 logged out. Waiting for processes to exit. May 15 00:07:51.366161 systemd[1]: Started sshd@2-10.0.0.17:22-10.0.0.1:39254.service - OpenSSH per-connection server daemon (10.0.0.1:39254). May 15 00:07:51.367092 systemd-logind[1429]: Removed session 2. May 15 00:07:51.397549 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 39254 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:07:51.398982 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:07:51.403405 systemd-logind[1429]: New session 3 of user core. May 15 00:07:51.414988 systemd[1]: Started session-3.scope - Session 3 of User core. May 15 00:07:51.463591 sshd[1565]: pam_unix(sshd:session): session closed for user core May 15 00:07:51.477445 systemd[1]: sshd@2-10.0.0.17:22-10.0.0.1:39254.service: Deactivated successfully. May 15 00:07:51.479576 systemd[1]: session-3.scope: Deactivated successfully. May 15 00:07:51.480889 systemd-logind[1429]: Session 3 logged out. Waiting for processes to exit. May 15 00:07:51.482142 systemd[1]: Started sshd@3-10.0.0.17:22-10.0.0.1:39264.service - OpenSSH per-connection server daemon (10.0.0.1:39264). May 15 00:07:51.482893 systemd-logind[1429]: Removed session 3. May 15 00:07:51.523878 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 39264 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:07:51.525368 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:07:51.529830 systemd-logind[1429]: New session 4 of user core. May 15 00:07:51.536978 systemd[1]: Started session-4.scope - Session 4 of User core. May 15 00:07:51.594052 sshd[1572]: pam_unix(sshd:session): session closed for user core May 15 00:07:51.602564 systemd[1]: sshd@3-10.0.0.17:22-10.0.0.1:39264.service: Deactivated successfully. May 15 00:07:51.604362 systemd[1]: session-4.scope: Deactivated successfully. May 15 00:07:51.605834 systemd-logind[1429]: Session 4 logged out. Waiting for processes to exit. May 15 00:07:51.616161 systemd[1]: Started sshd@4-10.0.0.17:22-10.0.0.1:39274.service - OpenSSH per-connection server daemon (10.0.0.1:39274). May 15 00:07:51.617139 systemd-logind[1429]: Removed session 4. May 15 00:07:51.654239 sshd[1579]: Accepted publickey for core from 10.0.0.1 port 39274 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:07:51.655681 sshd[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:07:51.660011 systemd-logind[1429]: New session 5 of user core. May 15 00:07:51.669972 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 15 00:07:51.731006 sudo[1582]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 15 00:07:51.731293 sudo[1582]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 00:07:51.750813 sudo[1582]: pam_unix(sudo:session): session closed for user root May 15 00:07:51.753440 sshd[1579]: pam_unix(sshd:session): session closed for user core May 15 00:07:51.768025 systemd[1]: sshd@4-10.0.0.17:22-10.0.0.1:39274.service: Deactivated successfully. May 15 00:07:51.771552 systemd[1]: session-5.scope: Deactivated successfully. May 15 00:07:51.773036 systemd-logind[1429]: Session 5 logged out. Waiting for processes to exit. May 15 00:07:51.785133 systemd[1]: Started sshd@5-10.0.0.17:22-10.0.0.1:39280.service - OpenSSH per-connection server daemon (10.0.0.1:39280). May 15 00:07:51.786216 systemd-logind[1429]: Removed session 5. May 15 00:07:51.818851 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 39280 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:07:51.820362 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:07:51.824533 systemd-logind[1429]: New session 6 of user core. May 15 00:07:51.844175 systemd[1]: Started session-6.scope - Session 6 of User core. May 15 00:07:51.896037 sudo[1591]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 15 00:07:51.896313 sudo[1591]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 00:07:51.899912 sudo[1591]: pam_unix(sudo:session): session closed for user root May 15 00:07:51.905039 sudo[1590]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 15 00:07:51.905325 sudo[1590]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 00:07:51.923085 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 15 00:07:51.925304 auditctl[1594]: No rules May 15 00:07:51.926311 systemd[1]: audit-rules.service: Deactivated successfully. May 15 00:07:51.926534 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 15 00:07:51.928506 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 15 00:07:51.955710 augenrules[1612]: No rules May 15 00:07:51.957102 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 15 00:07:51.958482 sudo[1590]: pam_unix(sudo:session): session closed for user root May 15 00:07:51.960229 sshd[1587]: pam_unix(sshd:session): session closed for user core May 15 00:07:51.976356 systemd[1]: sshd@5-10.0.0.17:22-10.0.0.1:39280.service: Deactivated successfully. May 15 00:07:51.979578 systemd[1]: session-6.scope: Deactivated successfully. May 15 00:07:51.981049 systemd-logind[1429]: Session 6 logged out. Waiting for processes to exit. May 15 00:07:51.983441 systemd[1]: Started sshd@6-10.0.0.17:22-10.0.0.1:39292.service - OpenSSH per-connection server daemon (10.0.0.1:39292). May 15 00:07:51.984498 systemd-logind[1429]: Removed session 6. May 15 00:07:52.019735 sshd[1620]: Accepted publickey for core from 10.0.0.1 port 39292 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:07:52.021357 sshd[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:07:52.025239 systemd-logind[1429]: New session 7 of user core. May 15 00:07:52.038978 systemd[1]: Started session-7.scope - Session 7 of User core. 
May 15 00:07:52.090234 sudo[1623]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 15 00:07:52.090527 sudo[1623]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 00:07:52.413071 systemd[1]: Starting docker.service - Docker Application Container Engine... May 15 00:07:52.413145 (dockerd)[1642]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 15 00:07:52.677810 dockerd[1642]: time="2025-05-15T00:07:52.677654225Z" level=info msg="Starting up" May 15 00:07:52.928289 dockerd[1642]: time="2025-05-15T00:07:52.927917020Z" level=info msg="Loading containers: start." May 15 00:07:53.025818 kernel: Initializing XFRM netlink socket May 15 00:07:53.097746 systemd-networkd[1384]: docker0: Link UP May 15 00:07:53.122427 dockerd[1642]: time="2025-05-15T00:07:53.122365592Z" level=info msg="Loading containers: done." May 15 00:07:53.135719 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3937115001-merged.mount: Deactivated successfully. May 15 00:07:53.139286 dockerd[1642]: time="2025-05-15T00:07:53.139214663Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 15 00:07:53.139398 dockerd[1642]: time="2025-05-15T00:07:53.139377278Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 15 00:07:53.139547 dockerd[1642]: time="2025-05-15T00:07:53.139520728Z" level=info msg="Daemon has completed initialization" May 15 00:07:53.181514 dockerd[1642]: time="2025-05-15T00:07:53.181256266Z" level=info msg="API listen on /run/docker.sock" May 15 00:07:53.181549 systemd[1]: Started docker.service - Docker Application Container Engine. May 15 00:07:54.029638 containerd[1443]: time="2025-05-15T00:07:54.029531556Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 15 00:07:54.731109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount876958285.mount: Deactivated successfully. 
May 15 00:07:56.454973 containerd[1443]: time="2025-05-15T00:07:56.454917300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:07:56.455988 containerd[1443]: time="2025-05-15T00:07:56.455753451Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794152" May 15 00:07:56.456763 containerd[1443]: time="2025-05-15T00:07:56.456730426Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:07:56.460505 containerd[1443]: time="2025-05-15T00:07:56.460460347Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:07:56.462033 containerd[1443]: time="2025-05-15T00:07:56.461745834Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 2.432172047s" May 15 00:07:56.462033 containerd[1443]: time="2025-05-15T00:07:56.461808272Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\"" May 15 00:07:56.481828 containerd[1443]: time="2025-05-15T00:07:56.481729161Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 15 00:07:56.996471 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 15 00:07:57.006024 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:07:57.105534 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:07:57.109921 (kubelet)[1869]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 00:07:57.154445 kubelet[1869]: E0515 00:07:57.154380 1869 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 00:07:57.157738 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 00:07:57.157914 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 15 00:07:58.988157 containerd[1443]: time="2025-05-15T00:07:58.988104661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:07:58.989325 containerd[1443]: time="2025-05-15T00:07:58.989121980Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855552" May 15 00:07:58.990388 containerd[1443]: time="2025-05-15T00:07:58.990095623Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:07:58.993109 containerd[1443]: time="2025-05-15T00:07:58.993044519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:07:58.994396 containerd[1443]: time="2025-05-15T00:07:58.994340871Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 2.512570613s" May 15 00:07:58.994396 containerd[1443]: time="2025-05-15T00:07:58.994379866Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\"" May 15 00:07:59.013959 containerd[1443]: time="2025-05-15T00:07:59.013922443Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 15 00:08:00.164933 containerd[1443]: time="2025-05-15T00:08:00.164879278Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:08:00.165466 containerd[1443]: time="2025-05-15T00:08:00.165429254Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263947" May 15 00:08:00.166345 containerd[1443]: time="2025-05-15T00:08:00.166297801Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:08:00.169570 containerd[1443]: time="2025-05-15T00:08:00.169524923Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:08:00.170978 containerd[1443]: time="2025-05-15T00:08:00.170840582Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 1.156877148s" May 15 00:08:00.170978 containerd[1443]: time="2025-05-15T00:08:00.170876791Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\"" May 15 00:08:00.189956 
containerd[1443]: time="2025-05-15T00:08:00.189895077Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 15 00:08:01.309951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount557732563.mount: Deactivated successfully. May 15 00:08:01.516454 containerd[1443]: time="2025-05-15T00:08:01.516405329Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:08:01.517343 containerd[1443]: time="2025-05-15T00:08:01.517310011Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775707" May 15 00:08:01.518034 containerd[1443]: time="2025-05-15T00:08:01.517973038Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:08:01.520325 containerd[1443]: time="2025-05-15T00:08:01.520292241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:08:01.521243 containerd[1443]: time="2025-05-15T00:08:01.520878734Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.330941836s" May 15 00:08:01.521336 containerd[1443]: time="2025-05-15T00:08:01.521319758Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\"" May 15 00:08:01.540078 containerd[1443]: time="2025-05-15T00:08:01.540035199Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 15 00:08:02.066580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1590350205.mount: Deactivated successfully. 
May 15 00:08:02.964359 containerd[1443]: time="2025-05-15T00:08:02.964139326Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:08:02.965161 containerd[1443]: time="2025-05-15T00:08:02.964943569Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" May 15 00:08:02.965959 containerd[1443]: time="2025-05-15T00:08:02.965920651Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:08:02.969855 containerd[1443]: time="2025-05-15T00:08:02.969806646Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:08:02.970584 containerd[1443]: time="2025-05-15T00:08:02.970549070Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.430350176s" May 15 00:08:02.970584 containerd[1443]: time="2025-05-15T00:08:02.970582666Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 15 00:08:02.990378 containerd[1443]: time="2025-05-15T00:08:02.990343072Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 15 00:08:03.505768 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3493341748.mount: Deactivated successfully. 
May 15 00:08:03.509157 containerd[1443]: time="2025-05-15T00:08:03.509118275Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:08:03.509827 containerd[1443]: time="2025-05-15T00:08:03.509661203Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" May 15 00:08:03.510681 containerd[1443]: time="2025-05-15T00:08:03.510622827Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:08:03.512864 containerd[1443]: time="2025-05-15T00:08:03.512819512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:08:03.514136 containerd[1443]: time="2025-05-15T00:08:03.513775959Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 523.393154ms" May 15 00:08:03.514136 containerd[1443]: time="2025-05-15T00:08:03.513829969Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" May 15 00:08:03.537585 containerd[1443]: time="2025-05-15T00:08:03.537537366Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 15 00:08:04.034437 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2771309809.mount: Deactivated successfully. May 15 00:08:06.861488 containerd[1443]: time="2025-05-15T00:08:06.861440773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:08:06.862424 containerd[1443]: time="2025-05-15T00:08:06.862109308Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" May 15 00:08:06.863841 containerd[1443]: time="2025-05-15T00:08:06.863727936Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:08:06.866895 containerd[1443]: time="2025-05-15T00:08:06.866848446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:08:06.868042 containerd[1443]: time="2025-05-15T00:08:06.867998131Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.33029016s" May 15 00:08:06.868042 containerd[1443]: time="2025-05-15T00:08:06.868034946Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" May 15 00:08:07.180491 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
May 15 00:08:07.191102 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:08:07.283380 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:08:07.287834 (kubelet)[2056]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 00:08:07.332333 kubelet[2056]: E0515 00:08:07.332240 2056 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 00:08:07.335536 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 00:08:07.335728 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 00:08:11.425334 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:08:11.437067 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:08:11.455579 systemd[1]: Reloading requested from client PID 2117 ('systemctl') (unit session-7.scope)... May 15 00:08:11.455596 systemd[1]: Reloading... May 15 00:08:11.519952 zram_generator::config[2156]: No configuration found. May 15 00:08:11.628012 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 00:08:11.682687 systemd[1]: Reloading finished in 226 ms. May 15 00:08:11.722297 systemd[1]: kubelet.service: Deactivated successfully. May 15 00:08:11.722516 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:08:11.726838 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:08:11.820160 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:08:11.825319 (kubelet)[2202]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 00:08:11.861297 kubelet[2202]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 00:08:11.861297 kubelet[2202]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 15 00:08:11.861297 kubelet[2202]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 15 00:08:11.862185 kubelet[2202]: I0515 00:08:11.862139 2202 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 00:08:12.593555 kubelet[2202]: I0515 00:08:12.593492 2202 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 15 00:08:12.593555 kubelet[2202]: I0515 00:08:12.593522 2202 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 00:08:12.593738 kubelet[2202]: I0515 00:08:12.593719 2202 server.go:927] "Client rotation is on, will bootstrap in background" May 15 00:08:12.638693 kubelet[2202]: E0515 00:08:12.638648 2202 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.17:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.17:6443: connect: connection refused May 15 00:08:12.638693 kubelet[2202]: I0515 00:08:12.638689 2202 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 00:08:12.648668 kubelet[2202]: I0515 00:08:12.648644 2202 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 15 00:08:12.648845 kubelet[2202]: I0515 00:08:12.648818 2202 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 00:08:12.649006 kubelet[2202]: I0515 00:08:12.648842 2202 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 15 00:08:12.649092 kubelet[2202]: I0515 00:08:12.649080 2202 topology_manager.go:138] "Creating topology manager with none policy" May 15 00:08:12.649092 kubelet[2202]: I0515 00:08:12.649090 2202 container_manager_linux.go:301] "Creating device plugin manager" May 15 00:08:12.649276 kubelet[2202]: I0515 00:08:12.649264 2202 state_mem.go:36] "Initialized new in-memory state store" May 15 
00:08:12.650337 kubelet[2202]: I0515 00:08:12.650175 2202 kubelet.go:400] "Attempting to sync node with API server" May 15 00:08:12.650337 kubelet[2202]: I0515 00:08:12.650197 2202 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 00:08:12.650449 kubelet[2202]: I0515 00:08:12.650393 2202 kubelet.go:312] "Adding apiserver pod source" May 15 00:08:12.650596 kubelet[2202]: I0515 00:08:12.650504 2202 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 00:08:12.650865 kubelet[2202]: W0515 00:08:12.650815 2202 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.17:6443: connect: connection refused May 15 00:08:12.650936 kubelet[2202]: E0515 00:08:12.650873 2202 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.17:6443: connect: connection refused May 15 00:08:12.651183 kubelet[2202]: W0515 00:08:12.651082 2202 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.17:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.17:6443: connect: connection refused May 15 00:08:12.651183 kubelet[2202]: E0515 00:08:12.651165 2202 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.17:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.17:6443: connect: connection refused May 15 00:08:12.651580 kubelet[2202]: I0515 00:08:12.651551 2202 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 15 00:08:12.653837 kubelet[2202]: I0515 00:08:12.653813 2202 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 00:08:12.654006 kubelet[2202]: W0515 00:08:12.653986 2202 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 15 00:08:12.654973 kubelet[2202]: I0515 00:08:12.654831 2202 server.go:1264] "Started kubelet" May 15 00:08:12.655047 kubelet[2202]: I0515 00:08:12.654997 2202 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 15 00:08:12.655586 kubelet[2202]: I0515 00:08:12.655383 2202 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 00:08:12.655708 kubelet[2202]: I0515 00:08:12.655691 2202 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 00:08:12.656114 kubelet[2202]: I0515 00:08:12.656096 2202 server.go:455] "Adding debug handlers to kubelet server" May 15 00:08:12.661008 kubelet[2202]: I0515 00:08:12.657907 2202 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 00:08:12.661008 kubelet[2202]: E0515 00:08:12.659057 2202 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:08:12.661008 kubelet[2202]: I0515 00:08:12.659274 2202 volume_manager.go:291] "Starting Kubelet Volume Manager" May 15 00:08:12.661008 kubelet[2202]: I0515 00:08:12.659364 2202 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 15 00:08:12.661008 kubelet[2202]: I0515 00:08:12.660280 2202 reconciler.go:26] "Reconciler: start to sync state" May 15 00:08:12.661008 kubelet[2202]: W0515 00:08:12.660556 2202 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.17:6443: connect: connection refused May 15 00:08:12.661008 kubelet[2202]: E0515 00:08:12.660593 2202 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.17:6443: connect: connection refused May 15 00:08:12.661008 kubelet[2202]: E0515 00:08:12.660962 2202 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.17:6443: connect: connection refused" interval="200ms" May 15 00:08:12.661938 kubelet[2202]: E0515 00:08:12.657214 2202 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.17:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.17:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f8ab58434c2b4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 00:08:12.654805684 +0000 UTC m=+0.826315165,LastTimestamp:2025-05-15 00:08:12.654805684 +0000 UTC m=+0.826315165,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 15 00:08:12.661938 kubelet[2202]: I0515 00:08:12.661395 2202 factory.go:221] Registration of the systemd container factory successfully May 15 00:08:12.661938 kubelet[2202]: I0515 00:08:12.661474 2202 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix 
/var/run/crio/crio.sock: connect: no such file or directory May 15 00:08:12.662854 kubelet[2202]: I0515 00:08:12.662799 2202 factory.go:221] Registration of the containerd container factory successfully May 15 00:08:12.663039 kubelet[2202]: E0515 00:08:12.663023 2202 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 00:08:12.673879 kubelet[2202]: I0515 00:08:12.673849 2202 cpu_manager.go:214] "Starting CPU manager" policy="none" May 15 00:08:12.673999 kubelet[2202]: I0515 00:08:12.673899 2202 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 15 00:08:12.673999 kubelet[2202]: I0515 00:08:12.673916 2202 state_mem.go:36] "Initialized new in-memory state store" May 15 00:08:12.675404 kubelet[2202]: I0515 00:08:12.675186 2202 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 00:08:12.676446 kubelet[2202]: I0515 00:08:12.676423 2202 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 15 00:08:12.676721 kubelet[2202]: I0515 00:08:12.676708 2202 status_manager.go:217] "Starting to sync pod status with apiserver" May 15 00:08:12.676773 kubelet[2202]: I0515 00:08:12.676741 2202 kubelet.go:2337] "Starting kubelet main sync loop" May 15 00:08:12.676814 kubelet[2202]: E0515 00:08:12.676780 2202 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 00:08:12.677237 kubelet[2202]: W0515 00:08:12.677168 2202 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.17:6443: connect: connection refused May 15 00:08:12.677237 kubelet[2202]: E0515 00:08:12.677205 2202 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.17:6443: connect: connection refused May 15 00:08:12.760661 kubelet[2202]: I0515 00:08:12.760620 2202 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 15 00:08:12.761036 kubelet[2202]: E0515 00:08:12.760992 2202 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.17:6443/api/v1/nodes\": dial tcp 10.0.0.17:6443: connect: connection refused" node="localhost" May 15 00:08:12.777172 kubelet[2202]: E0515 00:08:12.777140 2202 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 15 00:08:12.784444 kubelet[2202]: I0515 00:08:12.784336 2202 policy_none.go:49] "None policy: Start" May 15 00:08:12.785122 kubelet[2202]: I0515 00:08:12.785101 2202 memory_manager.go:170] "Starting memorymanager" policy="None" May 15 00:08:12.785200 kubelet[2202]: I0515 00:08:12.785129 2202 state_mem.go:35] "Initializing new in-memory state store" May 15 00:08:12.789968 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 15 00:08:12.808361 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 15 00:08:12.811932 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 15 00:08:12.822719 kubelet[2202]: I0515 00:08:12.822689 2202 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 00:08:12.822983 kubelet[2202]: I0515 00:08:12.822939 2202 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 00:08:12.823067 kubelet[2202]: I0515 00:08:12.823047 2202 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 00:08:12.824604 kubelet[2202]: E0515 00:08:12.824572 2202 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 15 00:08:12.861536 kubelet[2202]: E0515 00:08:12.861408 2202 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.17:6443: connect: connection refused" interval="400ms" May 15 00:08:12.962664 kubelet[2202]: I0515 00:08:12.962623 2202 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 15 00:08:12.963090 kubelet[2202]: E0515 00:08:12.963046 2202 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.17:6443/api/v1/nodes\": dial tcp 10.0.0.17:6443: connect: connection refused" node="localhost" May 15 00:08:12.978204 kubelet[2202]: I0515 00:08:12.978158 2202 topology_manager.go:215] "Topology Admit Handler" podUID="8aa5d000121d77143d1d94287bacffd3" podNamespace="kube-system" podName="kube-apiserver-localhost" May 15 00:08:12.979449 kubelet[2202]: I0515 00:08:12.979387 2202 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 15 00:08:12.980426 kubelet[2202]: I0515 00:08:12.980400 2202 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 15 00:08:12.985664 systemd[1]: Created slice kubepods-burstable-pod8aa5d000121d77143d1d94287bacffd3.slice - libcontainer container kubepods-burstable-pod8aa5d000121d77143d1d94287bacffd3.slice. May 15 00:08:12.997154 systemd[1]: Created slice kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice - libcontainer container kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice. May 15 00:08:13.000531 systemd[1]: Created slice kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice - libcontainer container kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice. 
May 15 00:08:13.062129 kubelet[2202]: I0515 00:08:13.062089 2202 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:08:13.062129 kubelet[2202]: I0515 00:08:13.062131 2202 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:08:13.062254 kubelet[2202]: I0515 00:08:13.062153 2202 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:08:13.062254 kubelet[2202]: I0515 00:08:13.062170 2202 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 15 00:08:13.062254 kubelet[2202]: I0515 00:08:13.062185 2202 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8aa5d000121d77143d1d94287bacffd3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8aa5d000121d77143d1d94287bacffd3\") " pod="kube-system/kube-apiserver-localhost" May 15 00:08:13.062254 kubelet[2202]: I0515 00:08:13.062200 2202 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8aa5d000121d77143d1d94287bacffd3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8aa5d000121d77143d1d94287bacffd3\") " pod="kube-system/kube-apiserver-localhost" May 15 00:08:13.062254 kubelet[2202]: I0515 00:08:13.062229 2202 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8aa5d000121d77143d1d94287bacffd3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8aa5d000121d77143d1d94287bacffd3\") " pod="kube-system/kube-apiserver-localhost" May 15 00:08:13.062410 kubelet[2202]: I0515 00:08:13.062251 2202 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:08:13.062410 kubelet[2202]: I0515 00:08:13.062292 2202 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " 
pod="kube-system/kube-controller-manager-localhost" May 15 00:08:13.262602 kubelet[2202]: E0515 00:08:13.262489 2202 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.17:6443: connect: connection refused" interval="800ms" May 15 00:08:13.295705 kubelet[2202]: E0515 00:08:13.295672 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:13.296426 containerd[1443]: time="2025-05-15T00:08:13.296328157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8aa5d000121d77143d1d94287bacffd3,Namespace:kube-system,Attempt:0,}" May 15 00:08:13.300038 kubelet[2202]: E0515 00:08:13.300009 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:13.300586 containerd[1443]: time="2025-05-15T00:08:13.300358056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 15 00:08:13.302649 kubelet[2202]: E0515 00:08:13.302605 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:13.303014 containerd[1443]: time="2025-05-15T00:08:13.302970938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 15 00:08:13.365217 kubelet[2202]: I0515 00:08:13.365177 2202 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 15 00:08:13.365543 kubelet[2202]: E0515 00:08:13.365501 2202 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.17:6443/api/v1/nodes\": dial tcp 10.0.0.17:6443: connect: connection refused" node="localhost" May 15 00:08:13.457289 kubelet[2202]: W0515 00:08:13.457212 2202 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.17:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.17:6443: connect: connection refused May 15 00:08:13.457289 kubelet[2202]: E0515 00:08:13.457288 2202 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.17:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.17:6443: connect: connection refused May 15 00:08:13.835158 kubelet[2202]: W0515 00:08:13.835121 2202 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.17:6443: connect: connection refused May 15 00:08:13.835158 kubelet[2202]: E0515 00:08:13.835160 2202 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.17:6443: connect: connection refused May 15 00:08:13.852625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2878122774.mount: Deactivated successfully. 
May 15 00:08:13.858048 containerd[1443]: time="2025-05-15T00:08:13.857996172Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 00:08:13.859908 containerd[1443]: time="2025-05-15T00:08:13.859878429Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 15 00:08:13.860593 containerd[1443]: time="2025-05-15T00:08:13.860561187Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 00:08:13.861827 containerd[1443]: time="2025-05-15T00:08:13.861398891Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 00:08:13.862013 containerd[1443]: time="2025-05-15T00:08:13.861985556Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" May 15 00:08:13.862510 containerd[1443]: time="2025-05-15T00:08:13.862466619Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 00:08:13.863022 containerd[1443]: time="2025-05-15T00:08:13.862852947Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 15 00:08:13.864531 containerd[1443]: time="2025-05-15T00:08:13.864498669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 00:08:13.867732 containerd[1443]: time="2025-05-15T00:08:13.867695459Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 564.595385ms" May 15 00:08:13.872065 containerd[1443]: time="2025-05-15T00:08:13.872006844Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 575.597577ms" May 15 00:08:13.873050 containerd[1443]: time="2025-05-15T00:08:13.873020871Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 572.597048ms" May 15 00:08:13.948634 kubelet[2202]: W0515 00:08:13.948290 2202 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.17:6443: connect: connection refused May 15 00:08:13.948634 
kubelet[2202]: E0515 00:08:13.948607 2202 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.17:6443: connect: connection refused May 15 00:08:14.008456 containerd[1443]: time="2025-05-15T00:08:14.008357856Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:08:14.008456 containerd[1443]: time="2025-05-15T00:08:14.008414721Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:08:14.008610 containerd[1443]: time="2025-05-15T00:08:14.008453723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:08:14.009357 containerd[1443]: time="2025-05-15T00:08:14.009261374Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:08:14.009357 containerd[1443]: time="2025-05-15T00:08:14.009336340Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:08:14.009454 containerd[1443]: time="2025-05-15T00:08:14.009397800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:08:14.009576 containerd[1443]: time="2025-05-15T00:08:14.009519801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:08:14.009824 containerd[1443]: time="2025-05-15T00:08:14.008938169Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:08:14.009891 containerd[1443]: time="2025-05-15T00:08:14.009821506Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:08:14.009891 containerd[1443]: time="2025-05-15T00:08:14.009837930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:08:14.009970 containerd[1443]: time="2025-05-15T00:08:14.009923447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:08:14.011064 containerd[1443]: time="2025-05-15T00:08:14.011016778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:08:14.031949 systemd[1]: Started cri-containerd-4e3b59256e99985e7e21d1199aa0468c32b27e41d8ce31f218b342a16ad9e461.scope - libcontainer container 4e3b59256e99985e7e21d1199aa0468c32b27e41d8ce31f218b342a16ad9e461. May 15 00:08:14.033225 systemd[1]: Started cri-containerd-7f16ae43d5ea2fddcf0bc810daf4b4b43b2f1663a9972f3360871f96a62eb93b.scope - libcontainer container 7f16ae43d5ea2fddcf0bc810daf4b4b43b2f1663a9972f3360871f96a62eb93b. May 15 00:08:14.034395 systemd[1]: Started cri-containerd-fecda6378673524e74c323e57285d97cfcd5cde96d46601b297c4a3d0e8102c4.scope - libcontainer container fecda6378673524e74c323e57285d97cfcd5cde96d46601b297c4a3d0e8102c4. 
May 15 00:08:14.062883 containerd[1443]: time="2025-05-15T00:08:14.062778601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f16ae43d5ea2fddcf0bc810daf4b4b43b2f1663a9972f3360871f96a62eb93b\"" May 15 00:08:14.063238 kubelet[2202]: E0515 00:08:14.062810 2202 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.17:6443: connect: connection refused" interval="1.6s" May 15 00:08:14.064331 kubelet[2202]: E0515 00:08:14.064293 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:14.067826 containerd[1443]: time="2025-05-15T00:08:14.067564964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8aa5d000121d77143d1d94287bacffd3,Namespace:kube-system,Attempt:0,} returns sandbox id \"fecda6378673524e74c323e57285d97cfcd5cde96d46601b297c4a3d0e8102c4\"" May 15 00:08:14.068159 containerd[1443]: time="2025-05-15T00:08:14.068093608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e3b59256e99985e7e21d1199aa0468c32b27e41d8ce31f218b342a16ad9e461\"" May 15 00:08:14.068252 kubelet[2202]: E0515 00:08:14.068226 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:14.069391 kubelet[2202]: E0515 00:08:14.069359 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:14.070367 containerd[1443]: time="2025-05-15T00:08:14.070333819Z" level=info msg="CreateContainer within sandbox \"7f16ae43d5ea2fddcf0bc810daf4b4b43b2f1663a9972f3360871f96a62eb93b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 15 00:08:14.070902 containerd[1443]: time="2025-05-15T00:08:14.070865219Z" level=info msg="CreateContainer within sandbox \"fecda6378673524e74c323e57285d97cfcd5cde96d46601b297c4a3d0e8102c4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 15 00:08:14.071250 containerd[1443]: time="2025-05-15T00:08:14.071224868Z" level=info msg="CreateContainer within sandbox \"4e3b59256e99985e7e21d1199aa0468c32b27e41d8ce31f218b342a16ad9e461\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 15 00:08:14.086484 containerd[1443]: time="2025-05-15T00:08:14.086356642Z" level=info msg="CreateContainer within sandbox \"7f16ae43d5ea2fddcf0bc810daf4b4b43b2f1663a9972f3360871f96a62eb93b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bf6bb70f670e492211b49813385cdc6715c4afec14c3709ad7f61fde992bde29\"" May 15 00:08:14.087306 containerd[1443]: time="2025-05-15T00:08:14.087009844Z" level=info msg="StartContainer for \"bf6bb70f670e492211b49813385cdc6715c4afec14c3709ad7f61fde992bde29\"" May 15 00:08:14.089934 containerd[1443]: time="2025-05-15T00:08:14.089892068Z" level=info msg="CreateContainer within sandbox \"fecda6378673524e74c323e57285d97cfcd5cde96d46601b297c4a3d0e8102c4\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1ba4397561f8259c95d9727171315de7f4c7b7b1b996fb46a809bbadc6ed6578\"" May 15 00:08:14.090408 containerd[1443]: time="2025-05-15T00:08:14.090381190Z" level=info msg="StartContainer for \"1ba4397561f8259c95d9727171315de7f4c7b7b1b996fb46a809bbadc6ed6578\"" May 15 00:08:14.090680 containerd[1443]: time="2025-05-15T00:08:14.090592144Z" level=info msg="CreateContainer within sandbox \"4e3b59256e99985e7e21d1199aa0468c32b27e41d8ce31f218b342a16ad9e461\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"79abd5f660a20bf0e9c6d86d2ca9296ea31acc3facab09e4d6ddb3056c2ff69e\"" May 15 00:08:14.091262 containerd[1443]: time="2025-05-15T00:08:14.091228162Z" level=info msg="StartContainer for \"79abd5f660a20bf0e9c6d86d2ca9296ea31acc3facab09e4d6ddb3056c2ff69e\"" May 15 00:08:14.109983 systemd[1]: Started cri-containerd-bf6bb70f670e492211b49813385cdc6715c4afec14c3709ad7f61fde992bde29.scope - libcontainer container bf6bb70f670e492211b49813385cdc6715c4afec14c3709ad7f61fde992bde29. May 15 00:08:14.113279 systemd[1]: Started cri-containerd-1ba4397561f8259c95d9727171315de7f4c7b7b1b996fb46a809bbadc6ed6578.scope - libcontainer container 1ba4397561f8259c95d9727171315de7f4c7b7b1b996fb46a809bbadc6ed6578. May 15 00:08:14.114141 systemd[1]: Started cri-containerd-79abd5f660a20bf0e9c6d86d2ca9296ea31acc3facab09e4d6ddb3056c2ff69e.scope - libcontainer container 79abd5f660a20bf0e9c6d86d2ca9296ea31acc3facab09e4d6ddb3056c2ff69e. May 15 00:08:14.145379 containerd[1443]: time="2025-05-15T00:08:14.145332017Z" level=info msg="StartContainer for \"bf6bb70f670e492211b49813385cdc6715c4afec14c3709ad7f61fde992bde29\" returns successfully" May 15 00:08:14.167056 kubelet[2202]: I0515 00:08:14.166671 2202 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 15 00:08:14.167181 kubelet[2202]: E0515 00:08:14.167062 2202 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.17:6443/api/v1/nodes\": dial tcp 10.0.0.17:6443: connect: connection refused" node="localhost" May 15 00:08:14.178083 containerd[1443]: time="2025-05-15T00:08:14.173271037Z" level=info msg="StartContainer for \"1ba4397561f8259c95d9727171315de7f4c7b7b1b996fb46a809bbadc6ed6578\" returns successfully" May 15 00:08:14.178083 containerd[1443]: time="2025-05-15T00:08:14.173354435Z" level=info msg="StartContainer for \"79abd5f660a20bf0e9c6d86d2ca9296ea31acc3facab09e4d6ddb3056c2ff69e\" returns successfully" May 15 00:08:14.259505 kubelet[2202]: W0515 00:08:14.259416 2202 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.17:6443: connect: connection refused May 15 00:08:14.259505 kubelet[2202]: E0515 00:08:14.259481 2202 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.17:6443: connect: connection refused May 15 00:08:14.685156 kubelet[2202]: E0515 00:08:14.684573 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:14.686150 kubelet[2202]: E0515 00:08:14.686123 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:14.690418 kubelet[2202]: E0515 00:08:14.690378 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:15.691051 kubelet[2202]: E0515 00:08:15.691023 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:15.769201 kubelet[2202]: I0515 00:08:15.769143 2202 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 15 00:08:16.104964 kubelet[2202]: E0515 00:08:16.104864 2202 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 15 00:08:16.202629 kubelet[2202]: I0515 00:08:16.202594 2202 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 15 00:08:16.652066 kubelet[2202]: I0515 00:08:16.652028 2202 apiserver.go:52] "Watching apiserver" May 15 00:08:16.660451 kubelet[2202]: I0515 00:08:16.660411 2202 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 15 00:08:17.411474 kubelet[2202]: E0515 00:08:17.411406 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:17.691350 kubelet[2202]: E0515 00:08:17.691249 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:17.846558 kubelet[2202]: E0515 00:08:17.846470 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:18.279581 systemd[1]: Reloading requested from client PID 2477 ('systemctl') (unit session-7.scope)... May 15 00:08:18.279595 systemd[1]: Reloading... May 15 00:08:18.305361 kubelet[2202]: E0515 00:08:18.304407 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:18.357828 zram_generator::config[2522]: No configuration found. May 15 00:08:18.434580 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 00:08:18.498642 systemd[1]: Reloading finished in 218 ms. May 15 00:08:18.533577 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:08:18.551366 systemd[1]: kubelet.service: Deactivated successfully. May 15 00:08:18.551619 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:08:18.551674 systemd[1]: kubelet.service: Consumed 1.228s CPU time, 116.8M memory peak, 0B memory swap peak. May 15 00:08:18.560032 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:08:18.651309 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 15 00:08:18.655223 (kubelet)[2558]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 00:08:18.695703 kubelet[2558]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 00:08:18.695703 kubelet[2558]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 15 00:08:18.695703 kubelet[2558]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 00:08:18.696072 kubelet[2558]: I0515 00:08:18.695758 2558 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 00:08:18.701169 kubelet[2558]: I0515 00:08:18.700238 2558 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 15 00:08:18.701169 kubelet[2558]: I0515 00:08:18.700270 2558 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 00:08:18.701169 kubelet[2558]: I0515 00:08:18.700429 2558 server.go:927] "Client rotation is on, will bootstrap in background" May 15 00:08:18.702641 kubelet[2558]: I0515 00:08:18.702616 2558 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 15 00:08:18.703951 kubelet[2558]: I0515 00:08:18.703916 2558 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 00:08:18.708725 kubelet[2558]: I0515 00:08:18.708682 2558 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 00:08:18.709072 kubelet[2558]: I0515 00:08:18.709041 2558 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 00:08:18.709301 kubelet[2558]: I0515 00:08:18.709140 2558 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 15 00:08:18.709417 kubelet[2558]: I0515 00:08:18.709405 2558 topology_manager.go:138] "Creating topology manager with none policy" May 15 00:08:18.709476 kubelet[2558]: I0515 00:08:18.709467 2558 container_manager_linux.go:301] "Creating device plugin manager" May 15 00:08:18.709558 kubelet[2558]: I0515 00:08:18.709548 2558 state_mem.go:36] "Initialized new in-memory state store" May 15 00:08:18.709709 kubelet[2558]: I0515 00:08:18.709696 2558 kubelet.go:400] "Attempting to sync node with API server" May 15 00:08:18.709808 kubelet[2558]: I0515 00:08:18.709761 2558 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 00:08:18.709958 kubelet[2558]: I0515 00:08:18.709946 2558 kubelet.go:312] "Adding apiserver pod source" May 15 00:08:18.710039 kubelet[2558]: I0515 00:08:18.710030 2558 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 00:08:18.713392 kubelet[2558]: I0515 00:08:18.713365 2558 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 15 00:08:18.713554 kubelet[2558]: I0515 00:08:18.713536 2558 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 00:08:18.713968 kubelet[2558]: I0515 00:08:18.713950 2558 server.go:1264] "Started kubelet" May 15 00:08:18.714746 kubelet[2558]: I0515 00:08:18.714715 2558 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 15 00:08:18.715911 kubelet[2558]: I0515 00:08:18.715857 2558 server.go:455] "Adding debug handlers to kubelet server" May 15 00:08:18.715911 kubelet[2558]: I0515 00:08:18.715890 2558 fs_resource_analyzer.go:67] "Starting FS 
ResourceAnalyzer" May 15 00:08:18.719722 kubelet[2558]: I0515 00:08:18.716943 2558 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 00:08:18.719722 kubelet[2558]: I0515 00:08:18.717112 2558 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 00:08:18.719722 kubelet[2558]: E0515 00:08:18.718723 2558 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:08:18.719722 kubelet[2558]: I0515 00:08:18.718750 2558 volume_manager.go:291] "Starting Kubelet Volume Manager" May 15 00:08:18.719722 kubelet[2558]: I0515 00:08:18.719159 2558 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 15 00:08:18.719722 kubelet[2558]: I0515 00:08:18.719325 2558 reconciler.go:26] "Reconciler: start to sync state" May 15 00:08:18.721917 kubelet[2558]: I0515 00:08:18.721067 2558 factory.go:221] Registration of the systemd container factory successfully May 15 00:08:18.721917 kubelet[2558]: I0515 00:08:18.721190 2558 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 00:08:18.723457 kubelet[2558]: E0515 00:08:18.723427 2558 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 00:08:18.726321 kubelet[2558]: I0515 00:08:18.726295 2558 factory.go:221] Registration of the containerd container factory successfully May 15 00:08:18.735001 kubelet[2558]: I0515 00:08:18.734948 2558 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 00:08:18.736094 kubelet[2558]: I0515 00:08:18.736060 2558 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 15 00:08:18.736094 kubelet[2558]: I0515 00:08:18.736089 2558 status_manager.go:217] "Starting to sync pod status with apiserver" May 15 00:08:18.736194 kubelet[2558]: I0515 00:08:18.736112 2558 kubelet.go:2337] "Starting kubelet main sync loop" May 15 00:08:18.736194 kubelet[2558]: E0515 00:08:18.736151 2558 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 00:08:18.767098 kubelet[2558]: I0515 00:08:18.767074 2558 cpu_manager.go:214] "Starting CPU manager" policy="none" May 15 00:08:18.767098 kubelet[2558]: I0515 00:08:18.767092 2558 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 15 00:08:18.767098 kubelet[2558]: I0515 00:08:18.767111 2558 state_mem.go:36] "Initialized new in-memory state store" May 15 00:08:18.767272 kubelet[2558]: I0515 00:08:18.767261 2558 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 15 00:08:18.767310 kubelet[2558]: I0515 00:08:18.767272 2558 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 15 00:08:18.767310 kubelet[2558]: I0515 00:08:18.767302 2558 policy_none.go:49] "None policy: Start" May 15 00:08:18.767931 kubelet[2558]: I0515 00:08:18.767898 2558 memory_manager.go:170] "Starting memorymanager" policy="None" May 15 00:08:18.767931 kubelet[2558]: I0515 00:08:18.767920 2558 state_mem.go:35] "Initializing new in-memory state store" May 15 00:08:18.768066 kubelet[2558]: I0515 00:08:18.768050 2558 state_mem.go:75] "Updated machine memory state" May 15 00:08:18.772035 kubelet[2558]: I0515 00:08:18.771975 2558 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 00:08:18.772243 kubelet[2558]: I0515 00:08:18.772134 2558 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 00:08:18.772302 kubelet[2558]: I0515 00:08:18.772247 2558 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 00:08:18.823217 kubelet[2558]: I0515 00:08:18.822460 2558 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 15 00:08:18.828125 kubelet[2558]: I0515 00:08:18.827968 2558 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 15 00:08:18.828125 kubelet[2558]: I0515 00:08:18.828047 2558 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 15 00:08:18.836240 kubelet[2558]: I0515 00:08:18.836208 2558 topology_manager.go:215] "Topology Admit Handler" podUID="8aa5d000121d77143d1d94287bacffd3" podNamespace="kube-system" podName="kube-apiserver-localhost" May 15 00:08:18.836350 kubelet[2558]: I0515 00:08:18.836307 2558 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 15 00:08:18.836376 kubelet[2558]: I0515 00:08:18.836364 2558 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 15 00:08:18.841149 kubelet[2558]: E0515 00:08:18.841073 2558 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 15 00:08:18.842033 kubelet[2558]: E0515 00:08:18.841966 2558 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" 
pod="kube-system/kube-controller-manager-localhost" May 15 00:08:18.842092 kubelet[2558]: E0515 00:08:18.842061 2558 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 15 00:08:19.020343 kubelet[2558]: I0515 00:08:19.020199 2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8aa5d000121d77143d1d94287bacffd3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8aa5d000121d77143d1d94287bacffd3\") " pod="kube-system/kube-apiserver-localhost" May 15 00:08:19.020343 kubelet[2558]: I0515 00:08:19.020240 2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8aa5d000121d77143d1d94287bacffd3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8aa5d000121d77143d1d94287bacffd3\") " pod="kube-system/kube-apiserver-localhost" May 15 00:08:19.020343 kubelet[2558]: I0515 00:08:19.020266 2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:08:19.020343 kubelet[2558]: I0515 00:08:19.020282 2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:08:19.020343 kubelet[2558]: I0515 00:08:19.020299 2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 15 00:08:19.020570 kubelet[2558]: I0515 00:08:19.020317 2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8aa5d000121d77143d1d94287bacffd3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8aa5d000121d77143d1d94287bacffd3\") " pod="kube-system/kube-apiserver-localhost" May 15 00:08:19.020570 kubelet[2558]: I0515 00:08:19.020352 2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:08:19.020570 kubelet[2558]: I0515 00:08:19.020396 2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:08:19.020570 kubelet[2558]: I0515 00:08:19.020419 2558 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:08:19.143095 kubelet[2558]: E0515 00:08:19.142640 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:19.143651 kubelet[2558]: E0515 00:08:19.143229 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:19.144010 kubelet[2558]: E0515 00:08:19.143979 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:19.711039 kubelet[2558]: I0515 00:08:19.710983 2558 apiserver.go:52] "Watching apiserver" May 15 00:08:19.720007 kubelet[2558]: I0515 00:08:19.719974 2558 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 15 00:08:19.757741 kubelet[2558]: E0515 00:08:19.757152 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:19.765670 kubelet[2558]: E0515 00:08:19.765631 2558 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 15 00:08:19.766264 kubelet[2558]: E0515 00:08:19.766079 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:19.767372 kubelet[2558]: E0515 00:08:19.766736 2558 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 15 00:08:19.767372 kubelet[2558]: E0515 00:08:19.767128 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:19.784554 kubelet[2558]: I0515 00:08:19.784493 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.784478266 podStartE2EDuration="1.784478266s" podCreationTimestamp="2025-05-15 00:08:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:08:19.777914786 +0000 UTC m=+1.119774582" watchObservedRunningTime="2025-05-15 00:08:19.784478266 +0000 UTC m=+1.126338022" May 15 00:08:19.794460 kubelet[2558]: I0515 00:08:19.794378 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.794359175 podStartE2EDuration="2.794359175s" podCreationTimestamp="2025-05-15 00:08:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:08:19.78442117 +0000 UTC m=+1.126280966" watchObservedRunningTime="2025-05-15 00:08:19.794359175 +0000 UTC 
m=+1.136219011" May 15 00:08:19.809497 kubelet[2558]: I0515 00:08:19.809433 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.809417328 podStartE2EDuration="2.809417328s" podCreationTimestamp="2025-05-15 00:08:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:08:19.792966777 +0000 UTC m=+1.134826573" watchObservedRunningTime="2025-05-15 00:08:19.809417328 +0000 UTC m=+1.151277124" May 15 00:08:20.760804 kubelet[2558]: E0515 00:08:20.759295 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:20.760804 kubelet[2558]: E0515 00:08:20.759346 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:23.412927 sudo[1623]: pam_unix(sudo:session): session closed for user root May 15 00:08:23.419685 sshd[1620]: pam_unix(sshd:session): session closed for user core May 15 00:08:23.423021 systemd[1]: sshd@6-10.0.0.17:22-10.0.0.1:39292.service: Deactivated successfully. May 15 00:08:23.423323 systemd-logind[1429]: Session 7 logged out. Waiting for processes to exit. May 15 00:08:23.424697 systemd[1]: session-7.scope: Deactivated successfully. May 15 00:08:23.424931 systemd[1]: session-7.scope: Consumed 6.674s CPU time, 189.0M memory peak, 0B memory swap peak. May 15 00:08:23.428009 systemd-logind[1429]: Removed session 7. May 15 00:08:26.664501 kubelet[2558]: E0515 00:08:26.664458 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:26.771987 kubelet[2558]: E0515 00:08:26.771942 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:27.947743 kubelet[2558]: E0515 00:08:27.947648 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:28.774640 kubelet[2558]: E0515 00:08:28.774590 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:29.423658 kubelet[2558]: E0515 00:08:29.423611 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:29.776217 kubelet[2558]: E0515 00:08:29.776177 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:30.118506 update_engine[1436]: I20250515 00:08:30.118345 1436 update_attempter.cc:509] Updating boot flags... 
May 15 00:08:30.149811 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2658) May 15 00:08:30.174826 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2657) May 15 00:08:30.207079 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2657) May 15 00:08:32.266084 kubelet[2558]: I0515 00:08:32.266041 2558 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 15 00:08:32.281870 containerd[1443]: time="2025-05-15T00:08:32.281811001Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 15 00:08:32.282271 kubelet[2558]: I0515 00:08:32.282189 2558 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 15 00:08:33.197934 kubelet[2558]: I0515 00:08:33.197873 2558 topology_manager.go:215] "Topology Admit Handler" podUID="9ec4dc2f-ae0e-416e-8a6f-d66b47dcb4bd" podNamespace="kube-system" podName="kube-proxy-kdk6l" May 15 00:08:33.208668 systemd[1]: Created slice kubepods-besteffort-pod9ec4dc2f_ae0e_416e_8a6f_d66b47dcb4bd.slice - libcontainer container kubepods-besteffort-pod9ec4dc2f_ae0e_416e_8a6f_d66b47dcb4bd.slice. May 15 00:08:33.305391 kubelet[2558]: I0515 00:08:33.305333 2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9ec4dc2f-ae0e-416e-8a6f-d66b47dcb4bd-lib-modules\") pod \"kube-proxy-kdk6l\" (UID: \"9ec4dc2f-ae0e-416e-8a6f-d66b47dcb4bd\") " pod="kube-system/kube-proxy-kdk6l" May 15 00:08:33.305391 kubelet[2558]: I0515 00:08:33.305375 2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9knbj\" (UniqueName: \"kubernetes.io/projected/9ec4dc2f-ae0e-416e-8a6f-d66b47dcb4bd-kube-api-access-9knbj\") pod \"kube-proxy-kdk6l\" (UID: \"9ec4dc2f-ae0e-416e-8a6f-d66b47dcb4bd\") " pod="kube-system/kube-proxy-kdk6l" May 15 00:08:33.305391 kubelet[2558]: I0515 00:08:33.305403 2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9ec4dc2f-ae0e-416e-8a6f-d66b47dcb4bd-kube-proxy\") pod \"kube-proxy-kdk6l\" (UID: \"9ec4dc2f-ae0e-416e-8a6f-d66b47dcb4bd\") " pod="kube-system/kube-proxy-kdk6l" May 15 00:08:33.305809 kubelet[2558]: I0515 00:08:33.305420 2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9ec4dc2f-ae0e-416e-8a6f-d66b47dcb4bd-xtables-lock\") pod \"kube-proxy-kdk6l\" (UID: \"9ec4dc2f-ae0e-416e-8a6f-d66b47dcb4bd\") " pod="kube-system/kube-proxy-kdk6l" May 15 00:08:33.407851 kubelet[2558]: I0515 00:08:33.407795 2558 topology_manager.go:215] "Topology Admit Handler" podUID="4d15b353-6d32-4422-9dd9-62135e7f5af0" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-vmpsg" May 15 00:08:33.417844 systemd[1]: Created slice kubepods-besteffort-pod4d15b353_6d32_4422_9dd9_62135e7f5af0.slice - libcontainer container kubepods-besteffort-pod4d15b353_6d32_4422_9dd9_62135e7f5af0.slice. 
May 15 00:08:33.507398 kubelet[2558]: I0515 00:08:33.507272 2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4d15b353-6d32-4422-9dd9-62135e7f5af0-var-lib-calico\") pod \"tigera-operator-797db67f8-vmpsg\" (UID: \"4d15b353-6d32-4422-9dd9-62135e7f5af0\") " pod="tigera-operator/tigera-operator-797db67f8-vmpsg" May 15 00:08:33.507398 kubelet[2558]: I0515 00:08:33.507320 2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2gcx\" (UniqueName: \"kubernetes.io/projected/4d15b353-6d32-4422-9dd9-62135e7f5af0-kube-api-access-z2gcx\") pod \"tigera-operator-797db67f8-vmpsg\" (UID: \"4d15b353-6d32-4422-9dd9-62135e7f5af0\") " pod="tigera-operator/tigera-operator-797db67f8-vmpsg" May 15 00:08:33.522591 kubelet[2558]: E0515 00:08:33.522549 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:33.525120 containerd[1443]: time="2025-05-15T00:08:33.525068100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kdk6l,Uid:9ec4dc2f-ae0e-416e-8a6f-d66b47dcb4bd,Namespace:kube-system,Attempt:0,}" May 15 00:08:33.542793 containerd[1443]: time="2025-05-15T00:08:33.542368634Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:08:33.542793 containerd[1443]: time="2025-05-15T00:08:33.542761328Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:08:33.542793 containerd[1443]: time="2025-05-15T00:08:33.542775170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:08:33.543844 containerd[1443]: time="2025-05-15T00:08:33.542893786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:08:33.564983 systemd[1]: Started cri-containerd-dc3f8b2447e82921e6d81a1e3243dcd2e924b4cdf98daea6a2f62f5b7c750528.scope - libcontainer container dc3f8b2447e82921e6d81a1e3243dcd2e924b4cdf98daea6a2f62f5b7c750528. 
May 15 00:08:33.582651 containerd[1443]: time="2025-05-15T00:08:33.582591034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kdk6l,Uid:9ec4dc2f-ae0e-416e-8a6f-d66b47dcb4bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc3f8b2447e82921e6d81a1e3243dcd2e924b4cdf98daea6a2f62f5b7c750528\"" May 15 00:08:33.585101 kubelet[2558]: E0515 00:08:33.585029 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:33.587765 containerd[1443]: time="2025-05-15T00:08:33.587705176Z" level=info msg="CreateContainer within sandbox \"dc3f8b2447e82921e6d81a1e3243dcd2e924b4cdf98daea6a2f62f5b7c750528\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 15 00:08:33.602850 containerd[1443]: time="2025-05-15T00:08:33.602781366Z" level=info msg="CreateContainer within sandbox \"dc3f8b2447e82921e6d81a1e3243dcd2e924b4cdf98daea6a2f62f5b7c750528\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"87162b09f7da7962ee0d6bf5e404fbc77ed3fb34ba6f4aafc7ce4451e33dee96\"" May 15 00:08:33.608840 containerd[1443]: time="2025-05-15T00:08:33.608684376Z" level=info msg="StartContainer for \"87162b09f7da7962ee0d6bf5e404fbc77ed3fb34ba6f4aafc7ce4451e33dee96\"" May 15 00:08:33.665986 systemd[1]: Started cri-containerd-87162b09f7da7962ee0d6bf5e404fbc77ed3fb34ba6f4aafc7ce4451e33dee96.scope - libcontainer container 87162b09f7da7962ee0d6bf5e404fbc77ed3fb34ba6f4aafc7ce4451e33dee96. May 15 00:08:33.692774 containerd[1443]: time="2025-05-15T00:08:33.692719749Z" level=info msg="StartContainer for \"87162b09f7da7962ee0d6bf5e404fbc77ed3fb34ba6f4aafc7ce4451e33dee96\" returns successfully" May 15 00:08:33.721987 containerd[1443]: time="2025-05-15T00:08:33.721924037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-vmpsg,Uid:4d15b353-6d32-4422-9dd9-62135e7f5af0,Namespace:tigera-operator,Attempt:0,}" May 15 00:08:33.746985 containerd[1443]: time="2025-05-15T00:08:33.746652951Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:08:33.746985 containerd[1443]: time="2025-05-15T00:08:33.746718920Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:08:33.746985 containerd[1443]: time="2025-05-15T00:08:33.746740203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:08:33.746985 containerd[1443]: time="2025-05-15T00:08:33.746871861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:08:33.770988 systemd[1]: Started cri-containerd-0399947591a66dc747fed488d6b047ba9c605ea93257ad28d1a4a567ddd03020.scope - libcontainer container 0399947591a66dc747fed488d6b047ba9c605ea93257ad28d1a4a567ddd03020. 
May 15 00:08:33.791195 kubelet[2558]: E0515 00:08:33.791161 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:33.813029 containerd[1443]: time="2025-05-15T00:08:33.812952730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-vmpsg,Uid:4d15b353-6d32-4422-9dd9-62135e7f5af0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0399947591a66dc747fed488d6b047ba9c605ea93257ad28d1a4a567ddd03020\"" May 15 00:08:33.823212 containerd[1443]: time="2025-05-15T00:08:33.816049996Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 15 00:08:35.038776 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3062499223.mount: Deactivated successfully. May 15 00:08:35.480493 containerd[1443]: time="2025-05-15T00:08:35.480364653Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:08:35.481929 containerd[1443]: time="2025-05-15T00:08:35.481069021Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084" May 15 00:08:35.482090 containerd[1443]: time="2025-05-15T00:08:35.482037222Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:08:35.484295 containerd[1443]: time="2025-05-15T00:08:35.484223495Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:08:35.485383 containerd[1443]: time="2025-05-15T00:08:35.485225701Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 1.66913722s" May 15 00:08:35.485383 containerd[1443]: time="2025-05-15T00:08:35.485269066Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\"" May 15 00:08:35.496908 containerd[1443]: time="2025-05-15T00:08:35.496655569Z" level=info msg="CreateContainer within sandbox \"0399947591a66dc747fed488d6b047ba9c605ea93257ad28d1a4a567ddd03020\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 15 00:08:35.537177 containerd[1443]: time="2025-05-15T00:08:35.537114344Z" level=info msg="CreateContainer within sandbox \"0399947591a66dc747fed488d6b047ba9c605ea93257ad28d1a4a567ddd03020\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"847a2ab54d38a88b0175c3ab81222c1aa33f86f357e66d7d44a54e5b8c199f98\"" May 15 00:08:35.537685 containerd[1443]: time="2025-05-15T00:08:35.537644730Z" level=info msg="StartContainer for \"847a2ab54d38a88b0175c3ab81222c1aa33f86f357e66d7d44a54e5b8c199f98\"" May 15 00:08:35.570030 systemd[1]: Started cri-containerd-847a2ab54d38a88b0175c3ab81222c1aa33f86f357e66d7d44a54e5b8c199f98.scope - libcontainer container 847a2ab54d38a88b0175c3ab81222c1aa33f86f357e66d7d44a54e5b8c199f98. 
May 15 00:08:35.606385 containerd[1443]: time="2025-05-15T00:08:35.606328072Z" level=info msg="StartContainer for \"847a2ab54d38a88b0175c3ab81222c1aa33f86f357e66d7d44a54e5b8c199f98\" returns successfully" May 15 00:08:35.822328 kubelet[2558]: I0515 00:08:35.822248 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kdk6l" podStartSLOduration=2.821306572 podStartE2EDuration="2.821306572s" podCreationTimestamp="2025-05-15 00:08:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:08:33.803570003 +0000 UTC m=+15.145429759" watchObservedRunningTime="2025-05-15 00:08:35.821306572 +0000 UTC m=+17.163166368" May 15 00:08:38.768751 kubelet[2558]: I0515 00:08:38.768390 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-vmpsg" podStartSLOduration=4.092095088 podStartE2EDuration="5.768373778s" podCreationTimestamp="2025-05-15 00:08:33 +0000 UTC" firstStartedPulling="2025-05-15 00:08:33.814645603 +0000 UTC m=+15.156505399" lastFinishedPulling="2025-05-15 00:08:35.490924333 +0000 UTC m=+16.832784089" observedRunningTime="2025-05-15 00:08:35.822384987 +0000 UTC m=+17.164244783" watchObservedRunningTime="2025-05-15 00:08:38.768373778 +0000 UTC m=+20.110233534" May 15 00:08:40.161363 kubelet[2558]: I0515 00:08:40.161286 2558 topology_manager.go:215] "Topology Admit Handler" podUID="6bccf4a6-921a-4a70-9f8e-6ee957a042b7" podNamespace="calico-system" podName="calico-typha-5bd9bc9dd8-s9z7g" May 15 00:08:40.174001 systemd[1]: Created slice kubepods-besteffort-pod6bccf4a6_921a_4a70_9f8e_6ee957a042b7.slice - libcontainer container kubepods-besteffort-pod6bccf4a6_921a_4a70_9f8e_6ee957a042b7.slice. May 15 00:08:40.217969 kubelet[2558]: I0515 00:08:40.217895 2558 topology_manager.go:215] "Topology Admit Handler" podUID="cc434b6d-44eb-4eb0-b7d2-1711df6ad36f" podNamespace="calico-system" podName="calico-node-kc4h2" May 15 00:08:40.227041 systemd[1]: Created slice kubepods-besteffort-podcc434b6d_44eb_4eb0_b7d2_1711df6ad36f.slice - libcontainer container kubepods-besteffort-podcc434b6d_44eb_4eb0_b7d2_1711df6ad36f.slice. 
May 15 00:08:40.260477 kubelet[2558]: I0515 00:08:40.260427 2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6bccf4a6-921a-4a70-9f8e-6ee957a042b7-tigera-ca-bundle\") pod \"calico-typha-5bd9bc9dd8-s9z7g\" (UID: \"6bccf4a6-921a-4a70-9f8e-6ee957a042b7\") " pod="calico-system/calico-typha-5bd9bc9dd8-s9z7g" May 15 00:08:40.260477 kubelet[2558]: I0515 00:08:40.260475 2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6bccf4a6-921a-4a70-9f8e-6ee957a042b7-typha-certs\") pod \"calico-typha-5bd9bc9dd8-s9z7g\" (UID: \"6bccf4a6-921a-4a70-9f8e-6ee957a042b7\") " pod="calico-system/calico-typha-5bd9bc9dd8-s9z7g" May 15 00:08:40.260627 kubelet[2558]: I0515 00:08:40.260508 2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj8pc\" (UniqueName: \"kubernetes.io/projected/6bccf4a6-921a-4a70-9f8e-6ee957a042b7-kube-api-access-cj8pc\") pod \"calico-typha-5bd9bc9dd8-s9z7g\" (UID: \"6bccf4a6-921a-4a70-9f8e-6ee957a042b7\") " pod="calico-system/calico-typha-5bd9bc9dd8-s9z7g" May 15 00:08:40.326204 kubelet[2558]: I0515 00:08:40.326104 2558 topology_manager.go:215] "Topology Admit Handler" podUID="262bbf25-d43c-443e-a611-5ff6be2347dc" podNamespace="calico-system" podName="csi-node-driver-nrvq4" May 15 00:08:40.326439 kubelet[2558]: E0515 00:08:40.326391 2558 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nrvq4" podUID="262bbf25-d43c-443e-a611-5ff6be2347dc" May 15 00:08:40.361574 kubelet[2558]: I0515 00:08:40.361523 2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc434b6d-44eb-4eb0-b7d2-1711df6ad36f-tigera-ca-bundle\") pod \"calico-node-kc4h2\" (UID: \"cc434b6d-44eb-4eb0-b7d2-1711df6ad36f\") " pod="calico-system/calico-node-kc4h2" May 15 00:08:40.361574 kubelet[2558]: I0515 00:08:40.361576 2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mqz7\" (UniqueName: \"kubernetes.io/projected/cc434b6d-44eb-4eb0-b7d2-1711df6ad36f-kube-api-access-8mqz7\") pod \"calico-node-kc4h2\" (UID: \"cc434b6d-44eb-4eb0-b7d2-1711df6ad36f\") " pod="calico-system/calico-node-kc4h2" May 15 00:08:40.361743 kubelet[2558]: I0515 00:08:40.361599 2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/cc434b6d-44eb-4eb0-b7d2-1711df6ad36f-var-run-calico\") pod \"calico-node-kc4h2\" (UID: \"cc434b6d-44eb-4eb0-b7d2-1711df6ad36f\") " pod="calico-system/calico-node-kc4h2" May 15 00:08:40.361743 kubelet[2558]: I0515 00:08:40.361615 2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/cc434b6d-44eb-4eb0-b7d2-1711df6ad36f-cni-bin-dir\") pod \"calico-node-kc4h2\" (UID: \"cc434b6d-44eb-4eb0-b7d2-1711df6ad36f\") " pod="calico-system/calico-node-kc4h2" May 15 00:08:40.361743 kubelet[2558]: I0515 00:08:40.361631 2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/cc434b6d-44eb-4eb0-b7d2-1711df6ad36f-cni-log-dir\") pod \"calico-node-kc4h2\" (UID: \"cc434b6d-44eb-4eb0-b7d2-1711df6ad36f\") " pod="calico-system/calico-node-kc4h2" May 15 00:08:40.361743 kubelet[2558]: I0515 00:08:40.361707 2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/cc434b6d-44eb-4eb0-b7d2-1711df6ad36f-cni-net-dir\") pod \"calico-node-kc4h2\" (UID: \"cc434b6d-44eb-4eb0-b7d2-1711df6ad36f\") " pod="calico-system/calico-node-kc4h2" May 15 00:08:40.361895 kubelet[2558]: I0515 00:08:40.361853 2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/cc434b6d-44eb-4eb0-b7d2-1711df6ad36f-node-certs\") pod \"calico-node-kc4h2\" (UID: \"cc434b6d-44eb-4eb0-b7d2-1711df6ad36f\") " pod="calico-system/calico-node-kc4h2" May 15 00:08:40.361895 kubelet[2558]: I0515 00:08:40.361886 2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/cc434b6d-44eb-4eb0-b7d2-1711df6ad36f-flexvol-driver-host\") pod \"calico-node-kc4h2\" (UID: \"cc434b6d-44eb-4eb0-b7d2-1711df6ad36f\") " pod="calico-system/calico-node-kc4h2" May 15 00:08:40.361942 kubelet[2558]: I0515 00:08:40.361922 2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cc434b6d-44eb-4eb0-b7d2-1711df6ad36f-var-lib-calico\") pod \"calico-node-kc4h2\" (UID: \"cc434b6d-44eb-4eb0-b7d2-1711df6ad36f\") " pod="calico-system/calico-node-kc4h2" May 15 00:08:40.361984 kubelet[2558]: I0515 00:08:40.361939 2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cc434b6d-44eb-4eb0-b7d2-1711df6ad36f-lib-modules\") pod \"calico-node-kc4h2\" (UID: \"cc434b6d-44eb-4eb0-b7d2-1711df6ad36f\") " pod="calico-system/calico-node-kc4h2" May 15 00:08:40.362021 kubelet[2558]: I0515 00:08:40.362000 2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cc434b6d-44eb-4eb0-b7d2-1711df6ad36f-xtables-lock\") pod \"calico-node-kc4h2\" (UID: \"cc434b6d-44eb-4eb0-b7d2-1711df6ad36f\") " pod="calico-system/calico-node-kc4h2" May 15 00:08:40.362044 kubelet[2558]: I0515 00:08:40.362022 2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/cc434b6d-44eb-4eb0-b7d2-1711df6ad36f-policysync\") pod \"calico-node-kc4h2\" (UID: \"cc434b6d-44eb-4eb0-b7d2-1711df6ad36f\") " pod="calico-system/calico-node-kc4h2" May 15 00:08:40.462832 kubelet[2558]: I0515 00:08:40.462694 2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/262bbf25-d43c-443e-a611-5ff6be2347dc-socket-dir\") pod \"csi-node-driver-nrvq4\" (UID: \"262bbf25-d43c-443e-a611-5ff6be2347dc\") " pod="calico-system/csi-node-driver-nrvq4" May 15 00:08:40.462832 kubelet[2558]: I0515 00:08:40.462780 2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: 
\"kubernetes.io/host-path/262bbf25-d43c-443e-a611-5ff6be2347dc-varrun\") pod \"csi-node-driver-nrvq4\" (UID: \"262bbf25-d43c-443e-a611-5ff6be2347dc\") " pod="calico-system/csi-node-driver-nrvq4" May 15 00:08:40.462832 kubelet[2558]: I0515 00:08:40.462813 2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/262bbf25-d43c-443e-a611-5ff6be2347dc-registration-dir\") pod \"csi-node-driver-nrvq4\" (UID: \"262bbf25-d43c-443e-a611-5ff6be2347dc\") " pod="calico-system/csi-node-driver-nrvq4" May 15 00:08:40.462982 kubelet[2558]: I0515 00:08:40.462879 2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/262bbf25-d43c-443e-a611-5ff6be2347dc-kubelet-dir\") pod \"csi-node-driver-nrvq4\" (UID: \"262bbf25-d43c-443e-a611-5ff6be2347dc\") " pod="calico-system/csi-node-driver-nrvq4" May 15 00:08:40.464227 kubelet[2558]: E0515 00:08:40.464062 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:40.464227 kubelet[2558]: W0515 00:08:40.464102 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:40.464227 kubelet[2558]: E0515 00:08:40.464126 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:40.464448 kubelet[2558]: E0515 00:08:40.464435 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:40.464554 kubelet[2558]: W0515 00:08:40.464523 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:40.464669 kubelet[2558]: E0515 00:08:40.464605 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:40.465519 kubelet[2558]: E0515 00:08:40.465496 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:40.465519 kubelet[2558]: W0515 00:08:40.465517 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:40.465616 kubelet[2558]: E0515 00:08:40.465536 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:08:40.466220 kubelet[2558]: E0515 00:08:40.465696 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:40.466220 kubelet[2558]: W0515 00:08:40.465709 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:40.466220 kubelet[2558]: E0515 00:08:40.465718 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:40.466220 kubelet[2558]: E0515 00:08:40.465851 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:40.466220 kubelet[2558]: W0515 00:08:40.465858 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:40.466220 kubelet[2558]: E0515 00:08:40.465866 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:40.466220 kubelet[2558]: E0515 00:08:40.466032 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:40.466220 kubelet[2558]: W0515 00:08:40.466039 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:40.466220 kubelet[2558]: E0515 00:08:40.466047 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:40.466220 kubelet[2558]: E0515 00:08:40.466190 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:40.466472 kubelet[2558]: W0515 00:08:40.466196 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:40.466472 kubelet[2558]: E0515 00:08:40.466205 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:08:40.466472 kubelet[2558]: I0515 00:08:40.466232 2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vdgs\" (UniqueName: \"kubernetes.io/projected/262bbf25-d43c-443e-a611-5ff6be2347dc-kube-api-access-2vdgs\") pod \"csi-node-driver-nrvq4\" (UID: \"262bbf25-d43c-443e-a611-5ff6be2347dc\") " pod="calico-system/csi-node-driver-nrvq4" May 15 00:08:40.466472 kubelet[2558]: E0515 00:08:40.466361 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:40.466472 kubelet[2558]: W0515 00:08:40.466368 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:40.466472 kubelet[2558]: E0515 00:08:40.466376 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:40.468212 kubelet[2558]: E0515 00:08:40.466523 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:40.468212 kubelet[2558]: W0515 00:08:40.466533 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:40.468212 kubelet[2558]: E0515 00:08:40.466540 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:40.468212 kubelet[2558]: E0515 00:08:40.466669 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:40.468212 kubelet[2558]: W0515 00:08:40.466676 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:40.468212 kubelet[2558]: E0515 00:08:40.466797 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:40.468212 kubelet[2558]: W0515 00:08:40.466804 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:40.468212 kubelet[2558]: E0515 00:08:40.466911 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:40.468212 kubelet[2558]: W0515 00:08:40.466916 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:40.468212 kubelet[2558]: E0515 00:08:40.467042 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:40.468429 kubelet[2558]: W0515 00:08:40.467049 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: 
executable file not found in $PATH, output: "" May 15 00:08:40.468429 kubelet[2558]: E0515 00:08:40.467056 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:40.468429 kubelet[2558]: E0515 00:08:40.467111 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:40.468429 kubelet[2558]: E0515 00:08:40.467133 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:40.468429 kubelet[2558]: E0515 00:08:40.467146 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:40.468429 kubelet[2558]: E0515 00:08:40.467170 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:40.468429 kubelet[2558]: W0515 00:08:40.467177 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:40.468429 kubelet[2558]: E0515 00:08:40.467185 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:40.468429 kubelet[2558]: E0515 00:08:40.467303 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:40.468429 kubelet[2558]: W0515 00:08:40.467310 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:40.468622 kubelet[2558]: E0515 00:08:40.467319 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:40.468622 kubelet[2558]: E0515 00:08:40.467453 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:40.468622 kubelet[2558]: W0515 00:08:40.467460 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:40.468622 kubelet[2558]: E0515 00:08:40.467468 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:08:40.468622 kubelet[2558]: E0515 00:08:40.467602 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:40.468622 kubelet[2558]: W0515 00:08:40.467609 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:40.468622 kubelet[2558]: E0515 00:08:40.467616 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:40.468906 kubelet[2558]: E0515 00:08:40.468838 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:40.468906 kubelet[2558]: W0515 00:08:40.468852 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:40.468906 kubelet[2558]: E0515 00:08:40.468866 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:40.477961 kubelet[2558]: E0515 00:08:40.477932 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:40.477961 kubelet[2558]: W0515 00:08:40.477951 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:40.477961 kubelet[2558]: E0515 00:08:40.477964 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:40.482068 kubelet[2558]: E0515 00:08:40.481745 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:40.482852 containerd[1443]: time="2025-05-15T00:08:40.482803835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5bd9bc9dd8-s9z7g,Uid:6bccf4a6-921a-4a70-9f8e-6ee957a042b7,Namespace:calico-system,Attempt:0,}" May 15 00:08:40.504303 containerd[1443]: time="2025-05-15T00:08:40.504207542Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:08:40.504303 containerd[1443]: time="2025-05-15T00:08:40.504260548Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:08:40.504303 containerd[1443]: time="2025-05-15T00:08:40.504271669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:08:40.505218 containerd[1443]: time="2025-05-15T00:08:40.504356637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:08:40.524213 systemd[1]: Started cri-containerd-4b50184a697dc77aa3802702121137f0f547a720bf493e74c516903ae0fdf304.scope - libcontainer container 4b50184a697dc77aa3802702121137f0f547a720bf493e74c516903ae0fdf304. May 15 00:08:40.529166 kubelet[2558]: E0515 00:08:40.529135 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:40.529924 containerd[1443]: time="2025-05-15T00:08:40.529564086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kc4h2,Uid:cc434b6d-44eb-4eb0-b7d2-1711df6ad36f,Namespace:calico-system,Attempt:0,}" May 15 00:08:40.551516 containerd[1443]: time="2025-05-15T00:08:40.551426919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5bd9bc9dd8-s9z7g,Uid:6bccf4a6-921a-4a70-9f8e-6ee957a042b7,Namespace:calico-system,Attempt:0,} returns sandbox id \"4b50184a697dc77aa3802702121137f0f547a720bf493e74c516903ae0fdf304\"" May 15 00:08:40.552149 kubelet[2558]: E0515 00:08:40.552126 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:40.553223 containerd[1443]: time="2025-05-15T00:08:40.553190576Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 15 00:08:40.567288 kubelet[2558]: E0515 00:08:40.567266 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:40.567288 kubelet[2558]: W0515 00:08:40.567286 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:40.567516 kubelet[2558]: E0515 00:08:40.567301 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:40.567566 kubelet[2558]: E0515 00:08:40.567545 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:40.567566 kubelet[2558]: W0515 00:08:40.567555 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:40.567613 kubelet[2558]: E0515 00:08:40.567578 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:40.570759 kubelet[2558]: E0515 00:08:40.570599 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:40.570759 kubelet[2558]: W0515 00:08:40.570650 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:40.570759 kubelet[2558]: E0515 00:08:40.570697 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:08:40.577562 kubelet[2558]: E0515 00:08:40.571075 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:40.577562 kubelet[2558]: W0515 00:08:40.571087 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:40.577562 kubelet[2558]: E0515 00:08:40.571185 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:40.577562 kubelet[2558]: E0515 00:08:40.571325 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:40.577562 kubelet[2558]: W0515 00:08:40.571335 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:40.577562 kubelet[2558]: E0515 00:08:40.571505 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:40.577562 kubelet[2558]: W0515 00:08:40.571515 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:40.577562 kubelet[2558]: E0515 00:08:40.571659 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:40.577562 kubelet[2558]: W0515 00:08:40.571674 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:40.577562 kubelet[2558]: E0515 00:08:40.571684 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:40.577562 kubelet[2558]: E0515 00:08:40.571973 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:40.578991 kubelet[2558]: E0515 00:08:40.571982 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:40.578991 kubelet[2558]: E0515 00:08:40.571999 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:40.578991 kubelet[2558]: W0515 00:08:40.571989 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:40.578991 kubelet[2558]: E0515 00:08:40.572042 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:08:40.578991 kubelet[2558]: E0515 00:08:40.572270 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:40.578991 kubelet[2558]: W0515 00:08:40.572297 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:40.578991 kubelet[2558]: E0515 00:08:40.572337 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:40.578991 kubelet[2558]: E0515 00:08:40.572573 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:40.578991 kubelet[2558]: W0515 00:08:40.572586 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:40.578991 kubelet[2558]: E0515 00:08:40.572599 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:40.579555 kubelet[2558]: E0515 00:08:40.572816 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:40.579555 kubelet[2558]: W0515 00:08:40.572825 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:40.579555 kubelet[2558]: E0515 00:08:40.572837 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:40.579555 kubelet[2558]: E0515 00:08:40.572972 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:40.579555 kubelet[2558]: W0515 00:08:40.572980 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:40.579555 kubelet[2558]: E0515 00:08:40.572988 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:40.579555 kubelet[2558]: E0515 00:08:40.573126 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:40.579555 kubelet[2558]: W0515 00:08:40.573134 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:40.579555 kubelet[2558]: E0515 00:08:40.573142 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:08:40.579555 kubelet[2558]: E0515 00:08:40.574193 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:40.579761 kubelet[2558]: W0515 00:08:40.574210 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:40.579761 kubelet[2558]: E0515 00:08:40.574230 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:40.579761 kubelet[2558]: E0515 00:08:40.575149 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:40.579761 kubelet[2558]: W0515 00:08:40.575163 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:40.579761 kubelet[2558]: E0515 00:08:40.575185 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:40.579761 kubelet[2558]: E0515 00:08:40.575415 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:40.579761 kubelet[2558]: W0515 00:08:40.575425 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:40.579761 kubelet[2558]: E0515 00:08:40.575441 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:40.579761 kubelet[2558]: E0515 00:08:40.575917 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:40.579761 kubelet[2558]: W0515 00:08:40.575929 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:40.581338 kubelet[2558]: E0515 00:08:40.576018 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:40.581338 kubelet[2558]: E0515 00:08:40.576275 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:40.581338 kubelet[2558]: W0515 00:08:40.576301 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:40.581338 kubelet[2558]: E0515 00:08:40.576326 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:08:40.581338 kubelet[2558]: E0515 00:08:40.577358 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:40.581338 kubelet[2558]: W0515 00:08:40.577371 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:40.581338 kubelet[2558]: E0515 00:08:40.577408 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:40.581338 kubelet[2558]: E0515 00:08:40.577844 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:40.581338 kubelet[2558]: W0515 00:08:40.577863 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:40.581338 kubelet[2558]: E0515 00:08:40.577964 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:40.581589 kubelet[2558]: E0515 00:08:40.578421 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:40.581589 kubelet[2558]: W0515 00:08:40.578436 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:40.581589 kubelet[2558]: E0515 00:08:40.578475 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:40.581589 kubelet[2558]: E0515 00:08:40.579109 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:40.581589 kubelet[2558]: W0515 00:08:40.579125 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:40.581589 kubelet[2558]: E0515 00:08:40.579338 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:40.581589 kubelet[2558]: E0515 00:08:40.580083 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:40.581589 kubelet[2558]: W0515 00:08:40.580164 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:40.581589 kubelet[2558]: E0515 00:08:40.580185 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:08:40.581589 kubelet[2558]: E0515 00:08:40.580755 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:40.581805 kubelet[2558]: W0515 00:08:40.580947 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:40.581805 kubelet[2558]: E0515 00:08:40.580987 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:40.581805 kubelet[2558]: E0515 00:08:40.581530 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:40.581805 kubelet[2558]: W0515 00:08:40.581543 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:40.581805 kubelet[2558]: E0515 00:08:40.581575 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:40.587284 kubelet[2558]: E0515 00:08:40.587252 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:40.587284 kubelet[2558]: W0515 00:08:40.587275 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:40.587389 kubelet[2558]: E0515 00:08:40.587291 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:40.605856 containerd[1443]: time="2025-05-15T00:08:40.605754490Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:08:40.605856 containerd[1443]: time="2025-05-15T00:08:40.605823937Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:08:40.606229 containerd[1443]: time="2025-05-15T00:08:40.606061641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:08:40.606229 containerd[1443]: time="2025-05-15T00:08:40.606178612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:08:40.628943 systemd[1]: Started cri-containerd-268a63eaab7b9f654663dba7facbac664b2b092250d57f08f3c491db5134b4fc.scope - libcontainer container 268a63eaab7b9f654663dba7facbac664b2b092250d57f08f3c491db5134b4fc. 
May 15 00:08:40.646284 containerd[1443]: time="2025-05-15T00:08:40.646243952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kc4h2,Uid:cc434b6d-44eb-4eb0-b7d2-1711df6ad36f,Namespace:calico-system,Attempt:0,} returns sandbox id \"268a63eaab7b9f654663dba7facbac664b2b092250d57f08f3c491db5134b4fc\"" May 15 00:08:40.647029 kubelet[2558]: E0515 00:08:40.646990 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:41.737330 kubelet[2558]: E0515 00:08:41.736966 2558 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nrvq4" podUID="262bbf25-d43c-443e-a611-5ff6be2347dc" May 15 00:08:41.810850 containerd[1443]: time="2025-05-15T00:08:41.810796872Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:08:41.811559 containerd[1443]: time="2025-05-15T00:08:41.811531143Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571" May 15 00:08:41.812494 containerd[1443]: time="2025-05-15T00:08:41.812449632Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:08:41.815073 containerd[1443]: time="2025-05-15T00:08:41.815034400Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:08:41.816026 containerd[1443]: time="2025-05-15T00:08:41.815896723Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 1.262669743s" May 15 00:08:41.816026 containerd[1443]: time="2025-05-15T00:08:41.815932247Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" May 15 00:08:41.818815 containerd[1443]: time="2025-05-15T00:08:41.818000846Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 15 00:08:41.830906 containerd[1443]: time="2025-05-15T00:08:41.830752994Z" level=info msg="CreateContainer within sandbox \"4b50184a697dc77aa3802702121137f0f547a720bf493e74c516903ae0fdf304\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 15 00:08:41.849721 containerd[1443]: time="2025-05-15T00:08:41.849630731Z" level=info msg="CreateContainer within sandbox \"4b50184a697dc77aa3802702121137f0f547a720bf493e74c516903ae0fdf304\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7f3ac8067b5cc095a944859d80804a4a8a06233f454fd3817b4614ffd7b4bfe2\"" May 15 00:08:41.850551 containerd[1443]: time="2025-05-15T00:08:41.850479973Z" level=info msg="StartContainer for \"7f3ac8067b5cc095a944859d80804a4a8a06233f454fd3817b4614ffd7b4bfe2\"" May 15 00:08:41.875027 
systemd[1]: Started cri-containerd-7f3ac8067b5cc095a944859d80804a4a8a06233f454fd3817b4614ffd7b4bfe2.scope - libcontainer container 7f3ac8067b5cc095a944859d80804a4a8a06233f454fd3817b4614ffd7b4bfe2. May 15 00:08:41.903673 containerd[1443]: time="2025-05-15T00:08:41.903634571Z" level=info msg="StartContainer for \"7f3ac8067b5cc095a944859d80804a4a8a06233f454fd3817b4614ffd7b4bfe2\" returns successfully" May 15 00:08:42.823158 kubelet[2558]: E0515 00:08:42.823118 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:42.832852 kubelet[2558]: I0515 00:08:42.832759 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5bd9bc9dd8-s9z7g" podStartSLOduration=1.5687602040000002 podStartE2EDuration="2.832745996s" podCreationTimestamp="2025-05-15 00:08:40 +0000 UTC" firstStartedPulling="2025-05-15 00:08:40.552811098 +0000 UTC m=+21.894670894" lastFinishedPulling="2025-05-15 00:08:41.81679689 +0000 UTC m=+23.158656686" observedRunningTime="2025-05-15 00:08:42.832435327 +0000 UTC m=+24.174295123" watchObservedRunningTime="2025-05-15 00:08:42.832745996 +0000 UTC m=+24.174605792" May 15 00:08:42.884847 kubelet[2558]: E0515 00:08:42.884818 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:42.884847 kubelet[2558]: W0515 00:08:42.884839 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:42.884847 kubelet[2558]: E0515 00:08:42.884857 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:42.885174 kubelet[2558]: E0515 00:08:42.884997 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:42.885174 kubelet[2558]: W0515 00:08:42.885004 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:42.885174 kubelet[2558]: E0515 00:08:42.885012 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:42.885341 kubelet[2558]: E0515 00:08:42.885232 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:42.885341 kubelet[2558]: W0515 00:08:42.885240 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:42.885341 kubelet[2558]: E0515 00:08:42.885248 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:08:42.885662 kubelet[2558]: E0515 00:08:42.885397 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:42.885662 kubelet[2558]: W0515 00:08:42.885404 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:42.885662 kubelet[2558]: E0515 00:08:42.885411 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:42.885662 kubelet[2558]: E0515 00:08:42.885545 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:42.885662 kubelet[2558]: W0515 00:08:42.885552 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:42.885662 kubelet[2558]: E0515 00:08:42.885559 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:42.885978 kubelet[2558]: E0515 00:08:42.885701 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:42.885978 kubelet[2558]: W0515 00:08:42.885708 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:42.885978 kubelet[2558]: E0515 00:08:42.885714 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:42.885978 kubelet[2558]: E0515 00:08:42.885854 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:42.885978 kubelet[2558]: W0515 00:08:42.885862 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:42.885978 kubelet[2558]: E0515 00:08:42.885870 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:42.886208 kubelet[2558]: E0515 00:08:42.886038 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:42.886208 kubelet[2558]: W0515 00:08:42.886045 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:42.886208 kubelet[2558]: E0515 00:08:42.886052 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:08:42.886208 kubelet[2558]: E0515 00:08:42.886199 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:42.886208 kubelet[2558]: W0515 00:08:42.886207 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:42.886330 kubelet[2558]: E0515 00:08:42.886215 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:42.886374 kubelet[2558]: E0515 00:08:42.886351 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:42.886374 kubelet[2558]: W0515 00:08:42.886362 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:42.886374 kubelet[2558]: E0515 00:08:42.886369 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:42.886585 kubelet[2558]: E0515 00:08:42.886496 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:42.886585 kubelet[2558]: W0515 00:08:42.886502 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:42.886585 kubelet[2558]: E0515 00:08:42.886508 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:42.886757 kubelet[2558]: E0515 00:08:42.886631 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:42.886757 kubelet[2558]: W0515 00:08:42.886636 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:42.886757 kubelet[2558]: E0515 00:08:42.886643 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:42.886920 kubelet[2558]: E0515 00:08:42.886834 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:42.886920 kubelet[2558]: W0515 00:08:42.886842 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:42.886920 kubelet[2558]: E0515 00:08:42.886851 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:08:42.887014 kubelet[2558]: E0515 00:08:42.886986 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:42.887014 kubelet[2558]: W0515 00:08:42.886993 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:42.887014 kubelet[2558]: E0515 00:08:42.887001 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:42.887131 kubelet[2558]: E0515 00:08:42.887119 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:42.887131 kubelet[2558]: W0515 00:08:42.887128 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:42.887182 kubelet[2558]: E0515 00:08:42.887136 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:42.889499 kubelet[2558]: E0515 00:08:42.889439 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:42.889499 kubelet[2558]: W0515 00:08:42.889453 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:42.889499 kubelet[2558]: E0515 00:08:42.889465 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:42.889790 kubelet[2558]: E0515 00:08:42.889766 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:42.889833 kubelet[2558]: W0515 00:08:42.889781 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:42.889833 kubelet[2558]: E0515 00:08:42.889812 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:42.890007 kubelet[2558]: E0515 00:08:42.889994 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:42.890007 kubelet[2558]: W0515 00:08:42.890006 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:42.890068 kubelet[2558]: E0515 00:08:42.890020 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:08:42.890190 kubelet[2558]: E0515 00:08:42.890176 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:42.890231 kubelet[2558]: W0515 00:08:42.890191 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:42.890231 kubelet[2558]: E0515 00:08:42.890204 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:42.890379 kubelet[2558]: E0515 00:08:42.890365 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:42.890464 kubelet[2558]: W0515 00:08:42.890449 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:42.890517 kubelet[2558]: E0515 00:08:42.890504 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:42.890678 kubelet[2558]: E0515 00:08:42.890666 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:42.890678 kubelet[2558]: W0515 00:08:42.890678 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:42.890732 kubelet[2558]: E0515 00:08:42.890692 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:42.890867 kubelet[2558]: E0515 00:08:42.890858 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:42.890867 kubelet[2558]: W0515 00:08:42.890867 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:42.890931 kubelet[2558]: E0515 00:08:42.890892 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:42.891032 kubelet[2558]: E0515 00:08:42.891023 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:42.891032 kubelet[2558]: W0515 00:08:42.891032 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:42.891089 kubelet[2558]: E0515 00:08:42.891065 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:08:42.891276 kubelet[2558]: E0515 00:08:42.891261 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:42.891306 kubelet[2558]: W0515 00:08:42.891275 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:42.891306 kubelet[2558]: E0515 00:08:42.891290 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:42.891483 kubelet[2558]: E0515 00:08:42.891471 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:42.891510 kubelet[2558]: W0515 00:08:42.891483 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:42.891510 kubelet[2558]: E0515 00:08:42.891498 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:42.891678 kubelet[2558]: E0515 00:08:42.891667 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:42.891705 kubelet[2558]: W0515 00:08:42.891678 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:42.891705 kubelet[2558]: E0515 00:08:42.891691 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:42.891874 kubelet[2558]: E0515 00:08:42.891863 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:42.891917 kubelet[2558]: W0515 00:08:42.891874 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:42.891917 kubelet[2558]: E0515 00:08:42.891887 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:42.892254 kubelet[2558]: E0515 00:08:42.892239 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:42.892254 kubelet[2558]: W0515 00:08:42.892253 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:42.892309 kubelet[2558]: E0515 00:08:42.892268 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:08:42.892431 kubelet[2558]: E0515 00:08:42.892420 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:42.892463 kubelet[2558]: W0515 00:08:42.892431 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:42.892463 kubelet[2558]: E0515 00:08:42.892442 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:42.892585 kubelet[2558]: E0515 00:08:42.892576 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:42.892585 kubelet[2558]: W0515 00:08:42.892585 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:42.892643 kubelet[2558]: E0515 00:08:42.892596 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:42.892904 kubelet[2558]: E0515 00:08:42.892890 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:42.892943 kubelet[2558]: W0515 00:08:42.892904 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:42.892943 kubelet[2558]: E0515 00:08:42.892919 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:42.893329 kubelet[2558]: E0515 00:08:42.893215 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:42.893329 kubelet[2558]: W0515 00:08:42.893239 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:42.893329 kubelet[2558]: E0515 00:08:42.893256 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:08:42.893496 kubelet[2558]: E0515 00:08:42.893482 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:08:42.893586 kubelet[2558]: W0515 00:08:42.893541 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:08:42.893586 kubelet[2558]: E0515 00:08:42.893562 2558 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:08:43.127952 containerd[1443]: time="2025-05-15T00:08:43.127846801Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:08:43.128698 containerd[1443]: time="2025-05-15T00:08:43.128522021Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" May 15 00:08:43.129890 containerd[1443]: time="2025-05-15T00:08:43.129852819Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:08:43.132152 containerd[1443]: time="2025-05-15T00:08:43.132116021Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:08:43.132844 containerd[1443]: time="2025-05-15T00:08:43.132812683Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 1.314773393s" May 15 00:08:43.132913 containerd[1443]: time="2025-05-15T00:08:43.132845845Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 15 00:08:43.135618 containerd[1443]: time="2025-05-15T00:08:43.135591210Z" level=info msg="CreateContainer within sandbox \"268a63eaab7b9f654663dba7facbac664b2b092250d57f08f3c491db5134b4fc\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 15 00:08:43.146662 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount505553610.mount: Deactivated successfully. May 15 00:08:43.147862 containerd[1443]: time="2025-05-15T00:08:43.147826498Z" level=info msg="CreateContainer within sandbox \"268a63eaab7b9f654663dba7facbac664b2b092250d57f08f3c491db5134b4fc\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"14e28344cfa64490cd155d3fde28a706249d9909f8c8cbc321bd3f16cb770069\"" May 15 00:08:43.148950 containerd[1443]: time="2025-05-15T00:08:43.148922595Z" level=info msg="StartContainer for \"14e28344cfa64490cd155d3fde28a706249d9909f8c8cbc321bd3f16cb770069\"" May 15 00:08:43.176963 systemd[1]: Started cri-containerd-14e28344cfa64490cd155d3fde28a706249d9909f8c8cbc321bd3f16cb770069.scope - libcontainer container 14e28344cfa64490cd155d3fde28a706249d9909f8c8cbc321bd3f16cb770069. May 15 00:08:43.221468 containerd[1443]: time="2025-05-15T00:08:43.221406882Z" level=info msg="StartContainer for \"14e28344cfa64490cd155d3fde28a706249d9909f8c8cbc321bd3f16cb770069\" returns successfully" May 15 00:08:43.230733 systemd[1]: cri-containerd-14e28344cfa64490cd155d3fde28a706249d9909f8c8cbc321bd3f16cb770069.scope: Deactivated successfully. 
May 15 00:08:43.308825 containerd[1443]: time="2025-05-15T00:08:43.304425426Z" level=info msg="shim disconnected" id=14e28344cfa64490cd155d3fde28a706249d9909f8c8cbc321bd3f16cb770069 namespace=k8s.io May 15 00:08:43.308825 containerd[1443]: time="2025-05-15T00:08:43.308824377Z" level=warning msg="cleaning up after shim disconnected" id=14e28344cfa64490cd155d3fde28a706249d9909f8c8cbc321bd3f16cb770069 namespace=k8s.io May 15 00:08:43.309029 containerd[1443]: time="2025-05-15T00:08:43.308837499Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 00:08:43.369694 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-14e28344cfa64490cd155d3fde28a706249d9909f8c8cbc321bd3f16cb770069-rootfs.mount: Deactivated successfully. May 15 00:08:43.736878 kubelet[2558]: E0515 00:08:43.736816 2558 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nrvq4" podUID="262bbf25-d43c-443e-a611-5ff6be2347dc" May 15 00:08:43.826101 kubelet[2558]: E0515 00:08:43.825899 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:43.827459 containerd[1443]: time="2025-05-15T00:08:43.826868173Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 15 00:08:43.828294 kubelet[2558]: I0515 00:08:43.828270 2558 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 00:08:43.829574 kubelet[2558]: E0515 00:08:43.828935 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:45.736830 kubelet[2558]: E0515 00:08:45.736768 2558 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nrvq4" podUID="262bbf25-d43c-443e-a611-5ff6be2347dc" May 15 00:08:47.163973 containerd[1443]: time="2025-05-15T00:08:47.163927529Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:08:47.164568 containerd[1443]: time="2025-05-15T00:08:47.164493372Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" May 15 00:08:47.165125 containerd[1443]: time="2025-05-15T00:08:47.165100259Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:08:47.168005 containerd[1443]: time="2025-05-15T00:08:47.167120094Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:08:47.168005 containerd[1443]: time="2025-05-15T00:08:47.167880552Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 3.340972935s" May 15 00:08:47.168005 containerd[1443]: time="2025-05-15T00:08:47.167911915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" May 15 00:08:47.170102 containerd[1443]: time="2025-05-15T00:08:47.170070600Z" level=info msg="CreateContainer within sandbox \"268a63eaab7b9f654663dba7facbac664b2b092250d57f08f3c491db5134b4fc\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 15 00:08:47.187917 containerd[1443]: time="2025-05-15T00:08:47.187877208Z" level=info msg="CreateContainer within sandbox \"268a63eaab7b9f654663dba7facbac664b2b092250d57f08f3c491db5134b4fc\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f2c8acd4fe92e0caca5589fc4ab2f6dab3a145db70eef269f5cb4bba93075d1d\"" May 15 00:08:47.188762 containerd[1443]: time="2025-05-15T00:08:47.188528058Z" level=info msg="StartContainer for \"f2c8acd4fe92e0caca5589fc4ab2f6dab3a145db70eef269f5cb4bba93075d1d\"" May 15 00:08:47.221982 systemd[1]: Started cri-containerd-f2c8acd4fe92e0caca5589fc4ab2f6dab3a145db70eef269f5cb4bba93075d1d.scope - libcontainer container f2c8acd4fe92e0caca5589fc4ab2f6dab3a145db70eef269f5cb4bba93075d1d. May 15 00:08:47.244281 containerd[1443]: time="2025-05-15T00:08:47.244239018Z" level=info msg="StartContainer for \"f2c8acd4fe92e0caca5589fc4ab2f6dab3a145db70eef269f5cb4bba93075d1d\" returns successfully" May 15 00:08:47.736664 kubelet[2558]: E0515 00:08:47.736585 2558 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nrvq4" podUID="262bbf25-d43c-443e-a611-5ff6be2347dc" May 15 00:08:47.839316 kubelet[2558]: E0515 00:08:47.838538 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:47.986897 systemd[1]: cri-containerd-f2c8acd4fe92e0caca5589fc4ab2f6dab3a145db70eef269f5cb4bba93075d1d.scope: Deactivated successfully. May 15 00:08:48.006521 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2c8acd4fe92e0caca5589fc4ab2f6dab3a145db70eef269f5cb4bba93075d1d-rootfs.mount: Deactivated successfully. 
May 15 00:08:48.008605 kubelet[2558]: I0515 00:08:48.008045 2558 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 15 00:08:48.042983 containerd[1443]: time="2025-05-15T00:08:48.042920305Z" level=info msg="shim disconnected" id=f2c8acd4fe92e0caca5589fc4ab2f6dab3a145db70eef269f5cb4bba93075d1d namespace=k8s.io May 15 00:08:48.042983 containerd[1443]: time="2025-05-15T00:08:48.042975829Z" level=warning msg="cleaning up after shim disconnected" id=f2c8acd4fe92e0caca5589fc4ab2f6dab3a145db70eef269f5cb4bba93075d1d namespace=k8s.io May 15 00:08:48.042983 containerd[1443]: time="2025-05-15T00:08:48.042985190Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 00:08:48.057849 kubelet[2558]: I0515 00:08:48.057280 2558 topology_manager.go:215] "Topology Admit Handler" podUID="8f6d68a1-7bcc-4d68-9390-ac560659ee14" podNamespace="kube-system" podName="coredns-7db6d8ff4d-f6wff" May 15 00:08:48.058086 kubelet[2558]: I0515 00:08:48.058058 2558 topology_manager.go:215] "Topology Admit Handler" podUID="2b7a008e-9f04-4b23-afda-16075b376325" podNamespace="kube-system" podName="coredns-7db6d8ff4d-jxzt2" May 15 00:08:48.060643 kubelet[2558]: I0515 00:08:48.060126 2558 topology_manager.go:215] "Topology Admit Handler" podUID="5c1a3ff2-9a98-4275-87ea-6992a522449a" podNamespace="calico-apiserver" podName="calico-apiserver-6876c9b49-s24wt" May 15 00:08:48.062428 kubelet[2558]: I0515 00:08:48.062159 2558 topology_manager.go:215] "Topology Admit Handler" podUID="b55a08bc-9d10-4d47-ac0a-de72fa4ab1ce" podNamespace="calico-system" podName="calico-kube-controllers-57d8fbfc9f-6zbq8" May 15 00:08:48.062428 kubelet[2558]: I0515 00:08:48.062308 2558 topology_manager.go:215] "Topology Admit Handler" podUID="da1e193e-d1e2-4c51-883a-4a7628d9c3e9" podNamespace="calico-apiserver" podName="calico-apiserver-6876c9b49-8jp5s" May 15 00:08:48.080059 systemd[1]: Created slice kubepods-burstable-pod8f6d68a1_7bcc_4d68_9390_ac560659ee14.slice - libcontainer container kubepods-burstable-pod8f6d68a1_7bcc_4d68_9390_ac560659ee14.slice. May 15 00:08:48.093867 systemd[1]: Created slice kubepods-besteffort-pod5c1a3ff2_9a98_4275_87ea_6992a522449a.slice - libcontainer container kubepods-besteffort-pod5c1a3ff2_9a98_4275_87ea_6992a522449a.slice. May 15 00:08:48.110363 systemd[1]: Created slice kubepods-besteffort-podb55a08bc_9d10_4d47_ac0a_de72fa4ab1ce.slice - libcontainer container kubepods-besteffort-podb55a08bc_9d10_4d47_ac0a_de72fa4ab1ce.slice. May 15 00:08:48.117601 systemd[1]: Created slice kubepods-burstable-pod2b7a008e_9f04_4b23_afda_16075b376325.slice - libcontainer container kubepods-burstable-pod2b7a008e_9f04_4b23_afda_16075b376325.slice. May 15 00:08:48.127927 systemd[1]: Created slice kubepods-besteffort-podda1e193e_d1e2_4c51_883a_4a7628d9c3e9.slice - libcontainer container kubepods-besteffort-podda1e193e_d1e2_4c51_883a_4a7628d9c3e9.slice. 
May 15 00:08:48.224455 kubelet[2558]: I0515 00:08:48.224258 2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b7a008e-9f04-4b23-afda-16075b376325-config-volume\") pod \"coredns-7db6d8ff4d-jxzt2\" (UID: \"2b7a008e-9f04-4b23-afda-16075b376325\") " pod="kube-system/coredns-7db6d8ff4d-jxzt2" May 15 00:08:48.224455 kubelet[2558]: I0515 00:08:48.224314 2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b55a08bc-9d10-4d47-ac0a-de72fa4ab1ce-tigera-ca-bundle\") pod \"calico-kube-controllers-57d8fbfc9f-6zbq8\" (UID: \"b55a08bc-9d10-4d47-ac0a-de72fa4ab1ce\") " pod="calico-system/calico-kube-controllers-57d8fbfc9f-6zbq8" May 15 00:08:48.224455 kubelet[2558]: I0515 00:08:48.224338 2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqhvs\" (UniqueName: \"kubernetes.io/projected/8f6d68a1-7bcc-4d68-9390-ac560659ee14-kube-api-access-bqhvs\") pod \"coredns-7db6d8ff4d-f6wff\" (UID: \"8f6d68a1-7bcc-4d68-9390-ac560659ee14\") " pod="kube-system/coredns-7db6d8ff4d-f6wff" May 15 00:08:48.224455 kubelet[2558]: I0515 00:08:48.224361 2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7cbc\" (UniqueName: \"kubernetes.io/projected/b55a08bc-9d10-4d47-ac0a-de72fa4ab1ce-kube-api-access-h7cbc\") pod \"calico-kube-controllers-57d8fbfc9f-6zbq8\" (UID: \"b55a08bc-9d10-4d47-ac0a-de72fa4ab1ce\") " pod="calico-system/calico-kube-controllers-57d8fbfc9f-6zbq8" May 15 00:08:48.224455 kubelet[2558]: I0515 00:08:48.224384 2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f6d68a1-7bcc-4d68-9390-ac560659ee14-config-volume\") pod \"coredns-7db6d8ff4d-f6wff\" (UID: \"8f6d68a1-7bcc-4d68-9390-ac560659ee14\") " pod="kube-system/coredns-7db6d8ff4d-f6wff" May 15 00:08:48.224706 kubelet[2558]: I0515 00:08:48.224401 2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jlcx\" (UniqueName: \"kubernetes.io/projected/5c1a3ff2-9a98-4275-87ea-6992a522449a-kube-api-access-7jlcx\") pod \"calico-apiserver-6876c9b49-s24wt\" (UID: \"5c1a3ff2-9a98-4275-87ea-6992a522449a\") " pod="calico-apiserver/calico-apiserver-6876c9b49-s24wt" May 15 00:08:48.224706 kubelet[2558]: I0515 00:08:48.224419 2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/da1e193e-d1e2-4c51-883a-4a7628d9c3e9-calico-apiserver-certs\") pod \"calico-apiserver-6876c9b49-8jp5s\" (UID: \"da1e193e-d1e2-4c51-883a-4a7628d9c3e9\") " pod="calico-apiserver/calico-apiserver-6876c9b49-8jp5s" May 15 00:08:48.224706 kubelet[2558]: I0515 00:08:48.224441 2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkqvw\" (UniqueName: \"kubernetes.io/projected/2b7a008e-9f04-4b23-afda-16075b376325-kube-api-access-pkqvw\") pod \"coredns-7db6d8ff4d-jxzt2\" (UID: \"2b7a008e-9f04-4b23-afda-16075b376325\") " pod="kube-system/coredns-7db6d8ff4d-jxzt2" May 15 00:08:48.224706 kubelet[2558]: I0515 00:08:48.224460 2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5c1a3ff2-9a98-4275-87ea-6992a522449a-calico-apiserver-certs\") pod \"calico-apiserver-6876c9b49-s24wt\" (UID: \"5c1a3ff2-9a98-4275-87ea-6992a522449a\") " pod="calico-apiserver/calico-apiserver-6876c9b49-s24wt" May 15 00:08:48.224706 kubelet[2558]: I0515 00:08:48.224480 2558 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fpkv\" (UniqueName: \"kubernetes.io/projected/da1e193e-d1e2-4c51-883a-4a7628d9c3e9-kube-api-access-2fpkv\") pod \"calico-apiserver-6876c9b49-8jp5s\" (UID: \"da1e193e-d1e2-4c51-883a-4a7628d9c3e9\") " pod="calico-apiserver/calico-apiserver-6876c9b49-8jp5s" May 15 00:08:48.387219 kubelet[2558]: E0515 00:08:48.387185 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:48.388170 containerd[1443]: time="2025-05-15T00:08:48.388113895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-f6wff,Uid:8f6d68a1-7bcc-4d68-9390-ac560659ee14,Namespace:kube-system,Attempt:0,}" May 15 00:08:48.399234 containerd[1443]: time="2025-05-15T00:08:48.399186037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6876c9b49-s24wt,Uid:5c1a3ff2-9a98-4275-87ea-6992a522449a,Namespace:calico-apiserver,Attempt:0,}" May 15 00:08:48.431491 kubelet[2558]: E0515 00:08:48.427491 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:48.436246 containerd[1443]: time="2025-05-15T00:08:48.435352883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jxzt2,Uid:2b7a008e-9f04-4b23-afda-16075b376325,Namespace:kube-system,Attempt:0,}" May 15 00:08:48.436246 containerd[1443]: time="2025-05-15T00:08:48.435412087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6876c9b49-8jp5s,Uid:da1e193e-d1e2-4c51-883a-4a7628d9c3e9,Namespace:calico-apiserver,Attempt:0,}" May 15 00:08:48.436246 containerd[1443]: time="2025-05-15T00:08:48.435758833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57d8fbfc9f-6zbq8,Uid:b55a08bc-9d10-4d47-ac0a-de72fa4ab1ce,Namespace:calico-system,Attempt:0,}" May 15 00:08:48.721076 containerd[1443]: time="2025-05-15T00:08:48.720945567Z" level=error msg="Failed to destroy network for sandbox \"cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:08:48.721521 containerd[1443]: time="2025-05-15T00:08:48.721479167Z" level=error msg="encountered an error cleaning up failed sandbox \"cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:08:48.721597 containerd[1443]: time="2025-05-15T00:08:48.721536131Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-f6wff,Uid:8f6d68a1-7bcc-4d68-9390-ac560659ee14,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:08:48.721983 containerd[1443]: time="2025-05-15T00:08:48.721946962Z" level=error msg="Failed to destroy network for sandbox \"42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:08:48.722381 containerd[1443]: time="2025-05-15T00:08:48.722269266Z" level=error msg="encountered an error cleaning up failed sandbox \"42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:08:48.722381 containerd[1443]: time="2025-05-15T00:08:48.722334110Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6876c9b49-8jp5s,Uid:da1e193e-d1e2-4c51-883a-4a7628d9c3e9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:08:48.723456 kubelet[2558]: E0515 00:08:48.723305 2558 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:08:48.723456 kubelet[2558]: E0515 00:08:48.723382 2558 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-f6wff" May 15 00:08:48.723456 kubelet[2558]: E0515 00:08:48.723403 2558 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-f6wff" May 15 00:08:48.723611 kubelet[2558]: E0515 00:08:48.723448 2558 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-f6wff_kube-system(8f6d68a1-7bcc-4d68-9390-ac560659ee14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-f6wff_kube-system(8f6d68a1-7bcc-4d68-9390-ac560659ee14)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-f6wff" podUID="8f6d68a1-7bcc-4d68-9390-ac560659ee14" May 15 00:08:48.725111 kubelet[2558]: E0515 00:08:48.724897 2558 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:08:48.725111 kubelet[2558]: E0515 00:08:48.724966 2558 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6876c9b49-8jp5s" May 15 00:08:48.725111 kubelet[2558]: E0515 00:08:48.724984 2558 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6876c9b49-8jp5s" May 15 00:08:48.725427 kubelet[2558]: E0515 00:08:48.725034 2558 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6876c9b49-8jp5s_calico-apiserver(da1e193e-d1e2-4c51-883a-4a7628d9c3e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6876c9b49-8jp5s_calico-apiserver(da1e193e-d1e2-4c51-883a-4a7628d9c3e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6876c9b49-8jp5s" podUID="da1e193e-d1e2-4c51-883a-4a7628d9c3e9" May 15 00:08:48.728523 containerd[1443]: time="2025-05-15T00:08:48.728481807Z" level=error msg="Failed to destroy network for sandbox \"a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:08:48.729304 containerd[1443]: time="2025-05-15T00:08:48.729269185Z" level=error msg="encountered an error cleaning up failed sandbox \"a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:08:48.730134 containerd[1443]: 
time="2025-05-15T00:08:48.730091806Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6876c9b49-s24wt,Uid:5c1a3ff2-9a98-4275-87ea-6992a522449a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:08:48.730550 kubelet[2558]: E0515 00:08:48.730450 2558 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:08:48.730550 kubelet[2558]: E0515 00:08:48.730494 2558 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6876c9b49-s24wt" May 15 00:08:48.730550 kubelet[2558]: E0515 00:08:48.730516 2558 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6876c9b49-s24wt" May 15 00:08:48.730832 kubelet[2558]: E0515 00:08:48.730660 2558 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6876c9b49-s24wt_calico-apiserver(5c1a3ff2-9a98-4275-87ea-6992a522449a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6876c9b49-s24wt_calico-apiserver(5c1a3ff2-9a98-4275-87ea-6992a522449a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6876c9b49-s24wt" podUID="5c1a3ff2-9a98-4275-87ea-6992a522449a" May 15 00:08:48.732287 containerd[1443]: time="2025-05-15T00:08:48.732226205Z" level=error msg="Failed to destroy network for sandbox \"62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:08:48.732838 containerd[1443]: time="2025-05-15T00:08:48.732559470Z" level=error msg="encountered an error cleaning up failed sandbox \"62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:08:48.732838 containerd[1443]: time="2025-05-15T00:08:48.732661157Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jxzt2,Uid:2b7a008e-9f04-4b23-afda-16075b376325,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:08:48.732933 kubelet[2558]: E0515 00:08:48.732887 2558 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:08:48.732963 kubelet[2558]: E0515 00:08:48.732949 2558 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-jxzt2" May 15 00:08:48.732994 kubelet[2558]: E0515 00:08:48.732969 2558 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-jxzt2" May 15 00:08:48.733994 kubelet[2558]: E0515 00:08:48.733023 2558 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-jxzt2_kube-system(2b7a008e-9f04-4b23-afda-16075b376325)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-jxzt2_kube-system(2b7a008e-9f04-4b23-afda-16075b376325)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-jxzt2" podUID="2b7a008e-9f04-4b23-afda-16075b376325" May 15 00:08:48.734315 containerd[1443]: time="2025-05-15T00:08:48.734280597Z" level=error msg="Failed to destroy network for sandbox \"f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:08:48.734704 containerd[1443]: time="2025-05-15T00:08:48.734677147Z" level=error msg="encountered an error cleaning up failed sandbox \"f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:08:48.734766 containerd[1443]: time="2025-05-15T00:08:48.734738431Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57d8fbfc9f-6zbq8,Uid:b55a08bc-9d10-4d47-ac0a-de72fa4ab1ce,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:08:48.735091 kubelet[2558]: E0515 00:08:48.734965 2558 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:08:48.735091 kubelet[2558]: E0515 00:08:48.735026 2558 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57d8fbfc9f-6zbq8" May 15 00:08:48.735091 kubelet[2558]: E0515 00:08:48.735042 2558 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57d8fbfc9f-6zbq8" May 15 00:08:48.735202 kubelet[2558]: E0515 00:08:48.735127 2558 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-57d8fbfc9f-6zbq8_calico-system(b55a08bc-9d10-4d47-ac0a-de72fa4ab1ce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-57d8fbfc9f-6zbq8_calico-system(b55a08bc-9d10-4d47-ac0a-de72fa4ab1ce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-57d8fbfc9f-6zbq8" podUID="b55a08bc-9d10-4d47-ac0a-de72fa4ab1ce" May 15 00:08:48.842852 kubelet[2558]: I0515 00:08:48.842820 2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a" May 15 00:08:48.845259 containerd[1443]: time="2025-05-15T00:08:48.844638231Z" level=info msg="StopPodSandbox for \"42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a\"" May 15 00:08:48.845259 containerd[1443]: time="2025-05-15T00:08:48.844843367Z" level=info msg="Ensure that sandbox 
42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a in task-service has been cleanup successfully" May 15 00:08:48.847836 kubelet[2558]: E0515 00:08:48.847804 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:48.849758 containerd[1443]: time="2025-05-15T00:08:48.849724009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 15 00:08:48.850650 kubelet[2558]: I0515 00:08:48.850240 2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba" May 15 00:08:48.851419 containerd[1443]: time="2025-05-15T00:08:48.850653998Z" level=info msg="StopPodSandbox for \"f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba\"" May 15 00:08:48.851535 kubelet[2558]: I0515 00:08:48.851512 2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4" May 15 00:08:48.851593 containerd[1443]: time="2025-05-15T00:08:48.851550825Z" level=info msg="Ensure that sandbox f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba in task-service has been cleanup successfully" May 15 00:08:48.852545 containerd[1443]: time="2025-05-15T00:08:48.852504695Z" level=info msg="StopPodSandbox for \"62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4\"" May 15 00:08:48.853068 containerd[1443]: time="2025-05-15T00:08:48.852849361Z" level=info msg="Ensure that sandbox 62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4 in task-service has been cleanup successfully" May 15 00:08:48.859194 kubelet[2558]: I0515 00:08:48.859159 2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6" May 15 00:08:48.860446 containerd[1443]: time="2025-05-15T00:08:48.860402922Z" level=info msg="StopPodSandbox for \"a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6\"" May 15 00:08:48.861338 containerd[1443]: time="2025-05-15T00:08:48.861220663Z" level=info msg="Ensure that sandbox a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6 in task-service has been cleanup successfully" May 15 00:08:48.861423 kubelet[2558]: I0515 00:08:48.861396 2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961" May 15 00:08:48.862337 containerd[1443]: time="2025-05-15T00:08:48.862305423Z" level=info msg="StopPodSandbox for \"cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961\"" May 15 00:08:48.862720 containerd[1443]: time="2025-05-15T00:08:48.862689292Z" level=info msg="Ensure that sandbox cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961 in task-service has been cleanup successfully" May 15 00:08:48.913145 containerd[1443]: time="2025-05-15T00:08:48.913084353Z" level=error msg="StopPodSandbox for \"f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba\" failed" error="failed to destroy network for sandbox \"f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:08:48.916702 kubelet[2558]: E0515 00:08:48.916589 2558 
remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba" May 15 00:08:48.916875 kubelet[2558]: E0515 00:08:48.916673 2558 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba"} May 15 00:08:48.916942 kubelet[2558]: E0515 00:08:48.916905 2558 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b55a08bc-9d10-4d47-ac0a-de72fa4ab1ce\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 15 00:08:48.917136 kubelet[2558]: E0515 00:08:48.917102 2558 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b55a08bc-9d10-4d47-ac0a-de72fa4ab1ce\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-57d8fbfc9f-6zbq8" podUID="b55a08bc-9d10-4d47-ac0a-de72fa4ab1ce" May 15 00:08:48.931051 containerd[1443]: time="2025-05-15T00:08:48.931001484Z" level=error msg="StopPodSandbox for \"62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4\" failed" error="failed to destroy network for sandbox \"62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:08:48.931418 kubelet[2558]: E0515 00:08:48.931207 2558 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4" May 15 00:08:48.931418 kubelet[2558]: E0515 00:08:48.931253 2558 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4"} May 15 00:08:48.931418 kubelet[2558]: E0515 00:08:48.931284 2558 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2b7a008e-9f04-4b23-afda-16075b376325\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4\\\": 
plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 15 00:08:48.931418 kubelet[2558]: E0515 00:08:48.931303 2558 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2b7a008e-9f04-4b23-afda-16075b376325\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-jxzt2" podUID="2b7a008e-9f04-4b23-afda-16075b376325" May 15 00:08:48.941915 containerd[1443]: time="2025-05-15T00:08:48.941770523Z" level=error msg="StopPodSandbox for \"a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6\" failed" error="failed to destroy network for sandbox \"a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:08:48.941915 containerd[1443]: time="2025-05-15T00:08:48.941888692Z" level=error msg="StopPodSandbox for \"42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a\" failed" error="failed to destroy network for sandbox \"42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:08:48.944249 kubelet[2558]: E0515 00:08:48.944143 2558 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6" May 15 00:08:48.944249 kubelet[2558]: E0515 00:08:48.944206 2558 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6"} May 15 00:08:48.944570 kubelet[2558]: E0515 00:08:48.944427 2558 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5c1a3ff2-9a98-4275-87ea-6992a522449a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 15 00:08:48.944570 kubelet[2558]: E0515 00:08:48.944474 2558 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5c1a3ff2-9a98-4275-87ea-6992a522449a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6876c9b49-s24wt" podUID="5c1a3ff2-9a98-4275-87ea-6992a522449a" May 15 00:08:48.944570 kubelet[2558]: E0515 00:08:48.944203 2558 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a" May 15 00:08:48.944570 kubelet[2558]: E0515 00:08:48.944510 2558 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a"} May 15 00:08:48.944794 kubelet[2558]: E0515 00:08:48.944528 2558 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"da1e193e-d1e2-4c51-883a-4a7628d9c3e9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 15 00:08:48.944794 kubelet[2558]: E0515 00:08:48.944553 2558 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"da1e193e-d1e2-4c51-883a-4a7628d9c3e9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6876c9b49-8jp5s" podUID="da1e193e-d1e2-4c51-883a-4a7628d9c3e9" May 15 00:08:48.962006 containerd[1443]: time="2025-05-15T00:08:48.961951222Z" level=error msg="StopPodSandbox for \"cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961\" failed" error="failed to destroy network for sandbox \"cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:08:48.962343 kubelet[2558]: E0515 00:08:48.962290 2558 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961" May 15 00:08:48.962442 kubelet[2558]: E0515 00:08:48.962353 2558 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961"} May 15 00:08:48.962442 kubelet[2558]: E0515 00:08:48.962388 2558 
kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8f6d68a1-7bcc-4d68-9390-ac560659ee14\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 15 00:08:48.962442 kubelet[2558]: E0515 00:08:48.962410 2558 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8f6d68a1-7bcc-4d68-9390-ac560659ee14\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-f6wff" podUID="8f6d68a1-7bcc-4d68-9390-ac560659ee14" May 15 00:08:49.042085 systemd[1]: Started sshd@7-10.0.0.17:22-10.0.0.1:38988.service - OpenSSH per-connection server daemon (10.0.0.1:38988). May 15 00:08:49.085305 sshd[3622]: Accepted publickey for core from 10.0.0.1 port 38988 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:08:49.087020 sshd[3622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:08:49.091798 systemd-logind[1429]: New session 8 of user core. May 15 00:08:49.098995 systemd[1]: Started session-8.scope - Session 8 of User core. May 15 00:08:49.218262 sshd[3622]: pam_unix(sshd:session): session closed for user core May 15 00:08:49.221797 systemd-logind[1429]: Session 8 logged out. Waiting for processes to exit. May 15 00:08:49.222150 systemd[1]: sshd@7-10.0.0.17:22-10.0.0.1:38988.service: Deactivated successfully. May 15 00:08:49.223989 systemd[1]: session-8.scope: Deactivated successfully. May 15 00:08:49.225619 systemd-logind[1429]: Removed session 8. May 15 00:08:49.741448 systemd[1]: Created slice kubepods-besteffort-pod262bbf25_d43c_443e_a611_5ff6be2347dc.slice - libcontainer container kubepods-besteffort-pod262bbf25_d43c_443e_a611_5ff6be2347dc.slice. 
May 15 00:08:49.748541 containerd[1443]: time="2025-05-15T00:08:49.748070188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nrvq4,Uid:262bbf25-d43c-443e-a611-5ff6be2347dc,Namespace:calico-system,Attempt:0,}" May 15 00:08:49.821349 containerd[1443]: time="2025-05-15T00:08:49.821139397Z" level=error msg="Failed to destroy network for sandbox \"c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:08:49.821578 containerd[1443]: time="2025-05-15T00:08:49.821488622Z" level=error msg="encountered an error cleaning up failed sandbox \"c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:08:49.821628 containerd[1443]: time="2025-05-15T00:08:49.821570908Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nrvq4,Uid:262bbf25-d43c-443e-a611-5ff6be2347dc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:08:49.822301 kubelet[2558]: E0515 00:08:49.821820 2558 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:08:49.822301 kubelet[2558]: E0515 00:08:49.821896 2558 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nrvq4" May 15 00:08:49.822301 kubelet[2558]: E0515 00:08:49.821919 2558 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nrvq4" May 15 00:08:49.822579 kubelet[2558]: E0515 00:08:49.821966 2558 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nrvq4_calico-system(262bbf25-d43c-443e-a611-5ff6be2347dc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nrvq4_calico-system(262bbf25-d43c-443e-a611-5ff6be2347dc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nrvq4" podUID="262bbf25-d43c-443e-a611-5ff6be2347dc" May 15 00:08:49.824011 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6-shm.mount: Deactivated successfully. May 15 00:08:49.865894 kubelet[2558]: I0515 00:08:49.865693 2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6" May 15 00:08:49.867750 containerd[1443]: time="2025-05-15T00:08:49.866409769Z" level=info msg="StopPodSandbox for \"c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6\"" May 15 00:08:49.867750 containerd[1443]: time="2025-05-15T00:08:49.866576741Z" level=info msg="Ensure that sandbox c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6 in task-service has been cleanup successfully" May 15 00:08:49.943527 containerd[1443]: time="2025-05-15T00:08:49.941955676Z" level=error msg="StopPodSandbox for \"c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6\" failed" error="failed to destroy network for sandbox \"c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:08:49.947703 kubelet[2558]: E0515 00:08:49.942991 2558 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6" May 15 00:08:49.947703 kubelet[2558]: E0515 00:08:49.943044 2558 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6"} May 15 00:08:49.947703 kubelet[2558]: E0515 00:08:49.943082 2558 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"262bbf25-d43c-443e-a611-5ff6be2347dc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 15 00:08:49.947703 kubelet[2558]: E0515 00:08:49.943105 2558 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"262bbf25-d43c-443e-a611-5ff6be2347dc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-nrvq4" podUID="262bbf25-d43c-443e-a611-5ff6be2347dc" May 15 00:08:52.035480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3648451408.mount: Deactivated successfully. May 15 00:08:52.261032 containerd[1443]: time="2025-05-15T00:08:52.260871803Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:08:52.262176 containerd[1443]: time="2025-05-15T00:08:52.261442721Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" May 15 00:08:52.262482 containerd[1443]: time="2025-05-15T00:08:52.262451027Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:08:52.265854 containerd[1443]: time="2025-05-15T00:08:52.265812447Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:08:52.272230 containerd[1443]: time="2025-05-15T00:08:52.266567136Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 3.416801164s" May 15 00:08:52.272548 containerd[1443]: time="2025-05-15T00:08:52.272404598Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" May 15 00:08:52.285590 containerd[1443]: time="2025-05-15T00:08:52.285400129Z" level=info msg="CreateContainer within sandbox \"268a63eaab7b9f654663dba7facbac664b2b092250d57f08f3c491db5134b4fc\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 15 00:08:52.313513 containerd[1443]: time="2025-05-15T00:08:52.313451766Z" level=info msg="CreateContainer within sandbox \"268a63eaab7b9f654663dba7facbac664b2b092250d57f08f3c491db5134b4fc\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"48a3279db5085ecce309975258a317c0b91ecad38593b0e9a0d44dcfdb55c8fb\"" May 15 00:08:52.314076 containerd[1443]: time="2025-05-15T00:08:52.314033644Z" level=info msg="StartContainer for \"48a3279db5085ecce309975258a317c0b91ecad38593b0e9a0d44dcfdb55c8fb\"" May 15 00:08:52.372008 systemd[1]: Started cri-containerd-48a3279db5085ecce309975258a317c0b91ecad38593b0e9a0d44dcfdb55c8fb.scope - libcontainer container 48a3279db5085ecce309975258a317c0b91ecad38593b0e9a0d44dcfdb55c8fb. May 15 00:08:52.408936 containerd[1443]: time="2025-05-15T00:08:52.408872813Z" level=info msg="StartContainer for \"48a3279db5085ecce309975258a317c0b91ecad38593b0e9a0d44dcfdb55c8fb\" returns successfully" May 15 00:08:52.621669 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 15 00:08:52.621843 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
May 15 00:08:52.876916 kubelet[2558]: E0515 00:08:52.876651 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:52.896842 kubelet[2558]: I0515 00:08:52.896756 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-kc4h2" podStartSLOduration=1.2710249 podStartE2EDuration="12.896739114s" podCreationTimestamp="2025-05-15 00:08:40 +0000 UTC" firstStartedPulling="2025-05-15 00:08:40.647487557 +0000 UTC m=+21.989347353" lastFinishedPulling="2025-05-15 00:08:52.273201771 +0000 UTC m=+33.615061567" observedRunningTime="2025-05-15 00:08:52.895605959 +0000 UTC m=+34.237465755" watchObservedRunningTime="2025-05-15 00:08:52.896739114 +0000 UTC m=+34.238598870" May 15 00:08:53.878943 kubelet[2558]: E0515 00:08:53.878496 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:54.232771 systemd[1]: Started sshd@8-10.0.0.17:22-10.0.0.1:42676.service - OpenSSH per-connection server daemon (10.0.0.1:42676). May 15 00:08:54.274652 sshd[3946]: Accepted publickey for core from 10.0.0.1 port 42676 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:08:54.276400 sshd[3946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:08:54.280382 systemd-logind[1429]: New session 9 of user core. May 15 00:08:54.286992 systemd[1]: Started session-9.scope - Session 9 of User core. May 15 00:08:54.410519 sshd[3946]: pam_unix(sshd:session): session closed for user core May 15 00:08:54.414689 systemd[1]: sshd@8-10.0.0.17:22-10.0.0.1:42676.service: Deactivated successfully. May 15 00:08:54.417132 systemd[1]: session-9.scope: Deactivated successfully. May 15 00:08:54.419993 systemd-logind[1429]: Session 9 logged out. Waiting for processes to exit. May 15 00:08:54.420992 systemd-logind[1429]: Removed session 9. May 15 00:08:54.879947 kubelet[2558]: E0515 00:08:54.879910 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:59.422753 systemd[1]: Started sshd@9-10.0.0.17:22-10.0.0.1:42690.service - OpenSSH per-connection server daemon (10.0.0.1:42690). May 15 00:08:59.461541 sshd[4108]: Accepted publickey for core from 10.0.0.1 port 42690 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:08:59.462928 sshd[4108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:08:59.469185 systemd-logind[1429]: New session 10 of user core. May 15 00:08:59.474994 systemd[1]: Started session-10.scope - Session 10 of User core. May 15 00:08:59.595481 sshd[4108]: pam_unix(sshd:session): session closed for user core May 15 00:08:59.603508 systemd[1]: sshd@9-10.0.0.17:22-10.0.0.1:42690.service: Deactivated successfully. May 15 00:08:59.605096 systemd[1]: session-10.scope: Deactivated successfully. May 15 00:08:59.606957 systemd-logind[1429]: Session 10 logged out. Waiting for processes to exit. May 15 00:08:59.613104 systemd[1]: Started sshd@10-10.0.0.17:22-10.0.0.1:42692.service - OpenSSH per-connection server daemon (10.0.0.1:42692). May 15 00:08:59.615462 systemd-logind[1429]: Removed session 10. 
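The recurring dns.go:153 warning is the kubelet noting that the host resolv.conf lists more nameservers than it will pass through, so only the first entries are applied (here 1.1.1.1 1.0.0.1 8.8.8.8). A rough sketch of that truncation, assuming the conventional limit of three resolvers; illustrative, not kubelet code:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // assumed limit; matches the three servers kept in the log above

// appliedNameservers reads a resolv.conf-style file and keeps only the first
// maxNameservers entries, which is the effect the kubelet warning describes.
func appliedNameservers(path string) ([]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		return nil, err
	}
	if len(servers) > maxNameservers {
		servers = servers[:maxNameservers] // extra resolvers are dropped, hence the warning
	}
	return servers, nil
}

func main() {
	servers, err := appliedNameservers("/etc/resolv.conf")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(strings.Join(servers, " "))
}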
May 15 00:08:59.655132 sshd[4123]: Accepted publickey for core from 10.0.0.1 port 42692 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:08:59.656639 sshd[4123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:08:59.660546 systemd-logind[1429]: New session 11 of user core. May 15 00:08:59.670010 systemd[1]: Started session-11.scope - Session 11 of User core. May 15 00:08:59.815941 sshd[4123]: pam_unix(sshd:session): session closed for user core May 15 00:08:59.824910 systemd[1]: sshd@10-10.0.0.17:22-10.0.0.1:42692.service: Deactivated successfully. May 15 00:08:59.828207 systemd[1]: session-11.scope: Deactivated successfully. May 15 00:08:59.830734 systemd-logind[1429]: Session 11 logged out. Waiting for processes to exit. May 15 00:08:59.838289 systemd[1]: Started sshd@11-10.0.0.17:22-10.0.0.1:42696.service - OpenSSH per-connection server daemon (10.0.0.1:42696). May 15 00:08:59.839985 systemd-logind[1429]: Removed session 11. May 15 00:08:59.877469 sshd[4135]: Accepted publickey for core from 10.0.0.1 port 42696 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:08:59.879025 sshd[4135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:08:59.883439 systemd-logind[1429]: New session 12 of user core. May 15 00:08:59.893999 systemd[1]: Started session-12.scope - Session 12 of User core. May 15 00:09:00.001881 sshd[4135]: pam_unix(sshd:session): session closed for user core May 15 00:09:00.005414 systemd[1]: sshd@11-10.0.0.17:22-10.0.0.1:42696.service: Deactivated successfully. May 15 00:09:00.007203 systemd[1]: session-12.scope: Deactivated successfully. May 15 00:09:00.007958 systemd-logind[1429]: Session 12 logged out. Waiting for processes to exit. May 15 00:09:00.008850 systemd-logind[1429]: Removed session 12. May 15 00:09:00.737506 containerd[1443]: time="2025-05-15T00:09:00.737265621Z" level=info msg="StopPodSandbox for \"f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba\"" May 15 00:09:00.928291 containerd[1443]: 2025-05-15 00:09:00.832 [INFO][4190] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba" May 15 00:09:00.928291 containerd[1443]: 2025-05-15 00:09:00.834 [INFO][4190] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba" iface="eth0" netns="/var/run/netns/cni-5348f9c6-9c87-a7e2-abac-801762a11805" May 15 00:09:00.928291 containerd[1443]: 2025-05-15 00:09:00.834 [INFO][4190] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba" iface="eth0" netns="/var/run/netns/cni-5348f9c6-9c87-a7e2-abac-801762a11805" May 15 00:09:00.928291 containerd[1443]: 2025-05-15 00:09:00.837 [INFO][4190] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba" iface="eth0" netns="/var/run/netns/cni-5348f9c6-9c87-a7e2-abac-801762a11805" May 15 00:09:00.928291 containerd[1443]: 2025-05-15 00:09:00.837 [INFO][4190] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba" May 15 00:09:00.928291 containerd[1443]: 2025-05-15 00:09:00.837 [INFO][4190] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba" May 15 00:09:00.928291 containerd[1443]: 2025-05-15 00:09:00.914 [INFO][4198] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba" HandleID="k8s-pod-network.f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba" Workload="localhost-k8s-calico--kube--controllers--57d8fbfc9f--6zbq8-eth0" May 15 00:09:00.928291 containerd[1443]: 2025-05-15 00:09:00.914 [INFO][4198] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:09:00.928291 containerd[1443]: 2025-05-15 00:09:00.914 [INFO][4198] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:09:00.928291 containerd[1443]: 2025-05-15 00:09:00.923 [WARNING][4198] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba" HandleID="k8s-pod-network.f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba" Workload="localhost-k8s-calico--kube--controllers--57d8fbfc9f--6zbq8-eth0" May 15 00:09:00.928291 containerd[1443]: 2025-05-15 00:09:00.923 [INFO][4198] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba" HandleID="k8s-pod-network.f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba" Workload="localhost-k8s-calico--kube--controllers--57d8fbfc9f--6zbq8-eth0" May 15 00:09:00.928291 containerd[1443]: 2025-05-15 00:09:00.924 [INFO][4198] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:09:00.928291 containerd[1443]: 2025-05-15 00:09:00.926 [INFO][4190] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba" May 15 00:09:00.928723 containerd[1443]: time="2025-05-15T00:09:00.928444759Z" level=info msg="TearDown network for sandbox \"f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba\" successfully" May 15 00:09:00.928723 containerd[1443]: time="2025-05-15T00:09:00.928472241Z" level=info msg="StopPodSandbox for \"f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba\" returns successfully" May 15 00:09:00.930436 systemd[1]: run-netns-cni\x2d5348f9c6\x2d9c87\x2da7e2\x2dabac\x2d801762a11805.mount: Deactivated successfully. 
May 15 00:09:00.931265 containerd[1443]: time="2025-05-15T00:09:00.931234148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57d8fbfc9f-6zbq8,Uid:b55a08bc-9d10-4d47-ac0a-de72fa4ab1ce,Namespace:calico-system,Attempt:1,}" May 15 00:09:01.111918 systemd-networkd[1384]: cali497f76a107a: Link UP May 15 00:09:01.113186 systemd-networkd[1384]: cali497f76a107a: Gained carrier May 15 00:09:01.149187 containerd[1443]: 2025-05-15 00:09:01.022 [INFO][4207] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 15 00:09:01.149187 containerd[1443]: 2025-05-15 00:09:01.035 [INFO][4207] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--57d8fbfc9f--6zbq8-eth0 calico-kube-controllers-57d8fbfc9f- calico-system b55a08bc-9d10-4d47-ac0a-de72fa4ab1ce 911 0 2025-05-15 00:08:40 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:57d8fbfc9f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-57d8fbfc9f-6zbq8 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali497f76a107a [] []}} ContainerID="cd566e9fac0cc75c5c0e49e0b36cf54d28af6625f6a152a7b9b7c3f566b8ab69" Namespace="calico-system" Pod="calico-kube-controllers-57d8fbfc9f-6zbq8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57d8fbfc9f--6zbq8-" May 15 00:09:01.149187 containerd[1443]: 2025-05-15 00:09:01.036 [INFO][4207] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cd566e9fac0cc75c5c0e49e0b36cf54d28af6625f6a152a7b9b7c3f566b8ab69" Namespace="calico-system" Pod="calico-kube-controllers-57d8fbfc9f-6zbq8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57d8fbfc9f--6zbq8-eth0" May 15 00:09:01.149187 containerd[1443]: 2025-05-15 00:09:01.064 [INFO][4221] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cd566e9fac0cc75c5c0e49e0b36cf54d28af6625f6a152a7b9b7c3f566b8ab69" HandleID="k8s-pod-network.cd566e9fac0cc75c5c0e49e0b36cf54d28af6625f6a152a7b9b7c3f566b8ab69" Workload="localhost-k8s-calico--kube--controllers--57d8fbfc9f--6zbq8-eth0" May 15 00:09:01.149187 containerd[1443]: 2025-05-15 00:09:01.077 [INFO][4221] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cd566e9fac0cc75c5c0e49e0b36cf54d28af6625f6a152a7b9b7c3f566b8ab69" HandleID="k8s-pod-network.cd566e9fac0cc75c5c0e49e0b36cf54d28af6625f6a152a7b9b7c3f566b8ab69" Workload="localhost-k8s-calico--kube--controllers--57d8fbfc9f--6zbq8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004eae80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-57d8fbfc9f-6zbq8", "timestamp":"2025-05-15 00:09:01.064821499 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 00:09:01.149187 containerd[1443]: 2025-05-15 00:09:01.077 [INFO][4221] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:09:01.149187 containerd[1443]: 2025-05-15 00:09:01.077 [INFO][4221] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 00:09:01.149187 containerd[1443]: 2025-05-15 00:09:01.078 [INFO][4221] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 00:09:01.149187 containerd[1443]: 2025-05-15 00:09:01.079 [INFO][4221] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cd566e9fac0cc75c5c0e49e0b36cf54d28af6625f6a152a7b9b7c3f566b8ab69" host="localhost" May 15 00:09:01.149187 containerd[1443]: 2025-05-15 00:09:01.084 [INFO][4221] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 00:09:01.149187 containerd[1443]: 2025-05-15 00:09:01.088 [INFO][4221] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 00:09:01.149187 containerd[1443]: 2025-05-15 00:09:01.090 [INFO][4221] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 00:09:01.149187 containerd[1443]: 2025-05-15 00:09:01.092 [INFO][4221] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 00:09:01.149187 containerd[1443]: 2025-05-15 00:09:01.092 [INFO][4221] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cd566e9fac0cc75c5c0e49e0b36cf54d28af6625f6a152a7b9b7c3f566b8ab69" host="localhost" May 15 00:09:01.149187 containerd[1443]: 2025-05-15 00:09:01.093 [INFO][4221] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.cd566e9fac0cc75c5c0e49e0b36cf54d28af6625f6a152a7b9b7c3f566b8ab69 May 15 00:09:01.149187 containerd[1443]: 2025-05-15 00:09:01.096 [INFO][4221] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cd566e9fac0cc75c5c0e49e0b36cf54d28af6625f6a152a7b9b7c3f566b8ab69" host="localhost" May 15 00:09:01.149187 containerd[1443]: 2025-05-15 00:09:01.101 [INFO][4221] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.cd566e9fac0cc75c5c0e49e0b36cf54d28af6625f6a152a7b9b7c3f566b8ab69" host="localhost" May 15 00:09:01.149187 containerd[1443]: 2025-05-15 00:09:01.101 [INFO][4221] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.cd566e9fac0cc75c5c0e49e0b36cf54d28af6625f6a152a7b9b7c3f566b8ab69" host="localhost" May 15 00:09:01.149187 containerd[1443]: 2025-05-15 00:09:01.101 [INFO][4221] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 00:09:01.149187 containerd[1443]: 2025-05-15 00:09:01.101 [INFO][4221] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="cd566e9fac0cc75c5c0e49e0b36cf54d28af6625f6a152a7b9b7c3f566b8ab69" HandleID="k8s-pod-network.cd566e9fac0cc75c5c0e49e0b36cf54d28af6625f6a152a7b9b7c3f566b8ab69" Workload="localhost-k8s-calico--kube--controllers--57d8fbfc9f--6zbq8-eth0" May 15 00:09:01.149851 containerd[1443]: 2025-05-15 00:09:01.103 [INFO][4207] cni-plugin/k8s.go 386: Populated endpoint ContainerID="cd566e9fac0cc75c5c0e49e0b36cf54d28af6625f6a152a7b9b7c3f566b8ab69" Namespace="calico-system" Pod="calico-kube-controllers-57d8fbfc9f-6zbq8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57d8fbfc9f--6zbq8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--57d8fbfc9f--6zbq8-eth0", GenerateName:"calico-kube-controllers-57d8fbfc9f-", Namespace:"calico-system", SelfLink:"", UID:"b55a08bc-9d10-4d47-ac0a-de72fa4ab1ce", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 8, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57d8fbfc9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-57d8fbfc9f-6zbq8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali497f76a107a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:09:01.149851 containerd[1443]: 2025-05-15 00:09:01.103 [INFO][4207] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="cd566e9fac0cc75c5c0e49e0b36cf54d28af6625f6a152a7b9b7c3f566b8ab69" Namespace="calico-system" Pod="calico-kube-controllers-57d8fbfc9f-6zbq8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57d8fbfc9f--6zbq8-eth0" May 15 00:09:01.149851 containerd[1443]: 2025-05-15 00:09:01.103 [INFO][4207] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali497f76a107a ContainerID="cd566e9fac0cc75c5c0e49e0b36cf54d28af6625f6a152a7b9b7c3f566b8ab69" Namespace="calico-system" Pod="calico-kube-controllers-57d8fbfc9f-6zbq8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57d8fbfc9f--6zbq8-eth0" May 15 00:09:01.149851 containerd[1443]: 2025-05-15 00:09:01.112 [INFO][4207] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cd566e9fac0cc75c5c0e49e0b36cf54d28af6625f6a152a7b9b7c3f566b8ab69" Namespace="calico-system" Pod="calico-kube-controllers-57d8fbfc9f-6zbq8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57d8fbfc9f--6zbq8-eth0" May 15 00:09:01.149851 containerd[1443]: 2025-05-15 00:09:01.112 [INFO][4207] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="cd566e9fac0cc75c5c0e49e0b36cf54d28af6625f6a152a7b9b7c3f566b8ab69" Namespace="calico-system" Pod="calico-kube-controllers-57d8fbfc9f-6zbq8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57d8fbfc9f--6zbq8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--57d8fbfc9f--6zbq8-eth0", GenerateName:"calico-kube-controllers-57d8fbfc9f-", Namespace:"calico-system", SelfLink:"", UID:"b55a08bc-9d10-4d47-ac0a-de72fa4ab1ce", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 8, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57d8fbfc9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cd566e9fac0cc75c5c0e49e0b36cf54d28af6625f6a152a7b9b7c3f566b8ab69", Pod:"calico-kube-controllers-57d8fbfc9f-6zbq8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali497f76a107a", MAC:"9a:1a:5b:a0:27:40", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:09:01.149851 containerd[1443]: 2025-05-15 00:09:01.143 [INFO][4207] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="cd566e9fac0cc75c5c0e49e0b36cf54d28af6625f6a152a7b9b7c3f566b8ab69" Namespace="calico-system" Pod="calico-kube-controllers-57d8fbfc9f-6zbq8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57d8fbfc9f--6zbq8-eth0" May 15 00:09:01.184893 containerd[1443]: time="2025-05-15T00:09:01.184629729Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:09:01.184893 containerd[1443]: time="2025-05-15T00:09:01.184705213Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:09:01.184893 containerd[1443]: time="2025-05-15T00:09:01.184717014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:09:01.184893 containerd[1443]: time="2025-05-15T00:09:01.184841620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:09:01.207000 systemd[1]: Started cri-containerd-cd566e9fac0cc75c5c0e49e0b36cf54d28af6625f6a152a7b9b7c3f566b8ab69.scope - libcontainer container cd566e9fac0cc75c5c0e49e0b36cf54d28af6625f6a152a7b9b7c3f566b8ab69. 
May 15 00:09:01.216946 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 00:09:01.235580 containerd[1443]: time="2025-05-15T00:09:01.235503712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57d8fbfc9f-6zbq8,Uid:b55a08bc-9d10-4d47-ac0a-de72fa4ab1ce,Namespace:calico-system,Attempt:1,} returns sandbox id \"cd566e9fac0cc75c5c0e49e0b36cf54d28af6625f6a152a7b9b7c3f566b8ab69\"" May 15 00:09:01.242049 containerd[1443]: time="2025-05-15T00:09:01.242007492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 15 00:09:01.738174 containerd[1443]: time="2025-05-15T00:09:01.737829043Z" level=info msg="StopPodSandbox for \"42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a\"" May 15 00:09:01.738174 containerd[1443]: time="2025-05-15T00:09:01.737885446Z" level=info msg="StopPodSandbox for \"a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6\"" May 15 00:09:01.839837 containerd[1443]: 2025-05-15 00:09:01.790 [INFO][4336] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a" May 15 00:09:01.839837 containerd[1443]: 2025-05-15 00:09:01.791 [INFO][4336] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a" iface="eth0" netns="/var/run/netns/cni-8cebd61b-5bee-806c-65f0-bc628530f6bb" May 15 00:09:01.839837 containerd[1443]: 2025-05-15 00:09:01.792 [INFO][4336] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a" iface="eth0" netns="/var/run/netns/cni-8cebd61b-5bee-806c-65f0-bc628530f6bb" May 15 00:09:01.839837 containerd[1443]: 2025-05-15 00:09:01.792 [INFO][4336] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a" iface="eth0" netns="/var/run/netns/cni-8cebd61b-5bee-806c-65f0-bc628530f6bb" May 15 00:09:01.839837 containerd[1443]: 2025-05-15 00:09:01.792 [INFO][4336] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a" May 15 00:09:01.839837 containerd[1443]: 2025-05-15 00:09:01.792 [INFO][4336] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a" May 15 00:09:01.839837 containerd[1443]: 2025-05-15 00:09:01.824 [INFO][4353] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a" HandleID="k8s-pod-network.42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a" Workload="localhost-k8s-calico--apiserver--6876c9b49--8jp5s-eth0" May 15 00:09:01.839837 containerd[1443]: 2025-05-15 00:09:01.824 [INFO][4353] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:09:01.839837 containerd[1443]: 2025-05-15 00:09:01.825 [INFO][4353] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:09:01.839837 containerd[1443]: 2025-05-15 00:09:01.834 [WARNING][4353] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a" HandleID="k8s-pod-network.42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a" Workload="localhost-k8s-calico--apiserver--6876c9b49--8jp5s-eth0" May 15 00:09:01.839837 containerd[1443]: 2025-05-15 00:09:01.834 [INFO][4353] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a" HandleID="k8s-pod-network.42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a" Workload="localhost-k8s-calico--apiserver--6876c9b49--8jp5s-eth0" May 15 00:09:01.839837 containerd[1443]: 2025-05-15 00:09:01.836 [INFO][4353] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:09:01.839837 containerd[1443]: 2025-05-15 00:09:01.838 [INFO][4336] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a" May 15 00:09:01.840392 containerd[1443]: time="2025-05-15T00:09:01.839977749Z" level=info msg="TearDown network for sandbox \"42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a\" successfully" May 15 00:09:01.840392 containerd[1443]: time="2025-05-15T00:09:01.840004071Z" level=info msg="StopPodSandbox for \"42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a\" returns successfully" May 15 00:09:01.841730 containerd[1443]: time="2025-05-15T00:09:01.841608035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6876c9b49-8jp5s,Uid:da1e193e-d1e2-4c51-883a-4a7628d9c3e9,Namespace:calico-apiserver,Attempt:1,}" May 15 00:09:01.880803 containerd[1443]: 2025-05-15 00:09:01.818 [INFO][4337] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6" May 15 00:09:01.880803 containerd[1443]: 2025-05-15 00:09:01.818 [INFO][4337] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6" iface="eth0" netns="/var/run/netns/cni-a1c32af4-32d2-69b4-82fd-9ad03a20435c" May 15 00:09:01.880803 containerd[1443]: 2025-05-15 00:09:01.818 [INFO][4337] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6" iface="eth0" netns="/var/run/netns/cni-a1c32af4-32d2-69b4-82fd-9ad03a20435c" May 15 00:09:01.880803 containerd[1443]: 2025-05-15 00:09:01.819 [INFO][4337] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6" iface="eth0" netns="/var/run/netns/cni-a1c32af4-32d2-69b4-82fd-9ad03a20435c" May 15 00:09:01.880803 containerd[1443]: 2025-05-15 00:09:01.819 [INFO][4337] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6" May 15 00:09:01.880803 containerd[1443]: 2025-05-15 00:09:01.819 [INFO][4337] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6" May 15 00:09:01.880803 containerd[1443]: 2025-05-15 00:09:01.861 [INFO][4360] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6" HandleID="k8s-pod-network.a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6" Workload="localhost-k8s-calico--apiserver--6876c9b49--s24wt-eth0" May 15 00:09:01.880803 containerd[1443]: 2025-05-15 00:09:01.862 [INFO][4360] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:09:01.880803 containerd[1443]: 2025-05-15 00:09:01.862 [INFO][4360] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:09:01.880803 containerd[1443]: 2025-05-15 00:09:01.874 [WARNING][4360] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6" HandleID="k8s-pod-network.a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6" Workload="localhost-k8s-calico--apiserver--6876c9b49--s24wt-eth0" May 15 00:09:01.880803 containerd[1443]: 2025-05-15 00:09:01.874 [INFO][4360] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6" HandleID="k8s-pod-network.a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6" Workload="localhost-k8s-calico--apiserver--6876c9b49--s24wt-eth0" May 15 00:09:01.880803 containerd[1443]: 2025-05-15 00:09:01.876 [INFO][4360] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:09:01.880803 containerd[1443]: 2025-05-15 00:09:01.878 [INFO][4337] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6" May 15 00:09:01.881277 containerd[1443]: time="2025-05-15T00:09:01.881224428Z" level=info msg="TearDown network for sandbox \"a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6\" successfully" May 15 00:09:01.881277 containerd[1443]: time="2025-05-15T00:09:01.881266630Z" level=info msg="StopPodSandbox for \"a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6\" returns successfully" May 15 00:09:01.882530 containerd[1443]: time="2025-05-15T00:09:01.882000389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6876c9b49-s24wt,Uid:5c1a3ff2-9a98-4275-87ea-6992a522449a,Namespace:calico-apiserver,Attempt:1,}" May 15 00:09:01.933199 systemd[1]: run-netns-cni\x2d8cebd61b\x2d5bee\x2d806c\x2d65f0\x2dbc628530f6bb.mount: Deactivated successfully. May 15 00:09:01.933295 systemd[1]: run-netns-cni\x2da1c32af4\x2d32d2\x2d69b4\x2d82fd\x2d9ad03a20435c.mount: Deactivated successfully. 
May 15 00:09:02.002316 systemd-networkd[1384]: calia7d5d680593: Link UP May 15 00:09:02.003276 systemd-networkd[1384]: calia7d5d680593: Gained carrier May 15 00:09:02.019466 containerd[1443]: 2025-05-15 00:09:01.886 [INFO][4372] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 15 00:09:02.019466 containerd[1443]: 2025-05-15 00:09:01.903 [INFO][4372] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6876c9b49--8jp5s-eth0 calico-apiserver-6876c9b49- calico-apiserver da1e193e-d1e2-4c51-883a-4a7628d9c3e9 925 0 2025-05-15 00:08:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6876c9b49 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6876c9b49-8jp5s eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia7d5d680593 [] []}} ContainerID="f6b7a358098dc44c785cc0ea662f731d0e7f4ef6d80a70f17dfcd551931e0dfd" Namespace="calico-apiserver" Pod="calico-apiserver-6876c9b49-8jp5s" WorkloadEndpoint="localhost-k8s-calico--apiserver--6876c9b49--8jp5s-" May 15 00:09:02.019466 containerd[1443]: 2025-05-15 00:09:01.903 [INFO][4372] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f6b7a358098dc44c785cc0ea662f731d0e7f4ef6d80a70f17dfcd551931e0dfd" Namespace="calico-apiserver" Pod="calico-apiserver-6876c9b49-8jp5s" WorkloadEndpoint="localhost-k8s-calico--apiserver--6876c9b49--8jp5s-eth0" May 15 00:09:02.019466 containerd[1443]: 2025-05-15 00:09:01.944 [INFO][4398] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f6b7a358098dc44c785cc0ea662f731d0e7f4ef6d80a70f17dfcd551931e0dfd" HandleID="k8s-pod-network.f6b7a358098dc44c785cc0ea662f731d0e7f4ef6d80a70f17dfcd551931e0dfd" Workload="localhost-k8s-calico--apiserver--6876c9b49--8jp5s-eth0" May 15 00:09:02.019466 containerd[1443]: 2025-05-15 00:09:01.957 [INFO][4398] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f6b7a358098dc44c785cc0ea662f731d0e7f4ef6d80a70f17dfcd551931e0dfd" HandleID="k8s-pod-network.f6b7a358098dc44c785cc0ea662f731d0e7f4ef6d80a70f17dfcd551931e0dfd" Workload="localhost-k8s-calico--apiserver--6876c9b49--8jp5s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002db200), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6876c9b49-8jp5s", "timestamp":"2025-05-15 00:09:01.944199044 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 00:09:02.019466 containerd[1443]: 2025-05-15 00:09:01.957 [INFO][4398] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:09:02.019466 containerd[1443]: 2025-05-15 00:09:01.957 [INFO][4398] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 00:09:02.019466 containerd[1443]: 2025-05-15 00:09:01.957 [INFO][4398] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 00:09:02.019466 containerd[1443]: 2025-05-15 00:09:01.960 [INFO][4398] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f6b7a358098dc44c785cc0ea662f731d0e7f4ef6d80a70f17dfcd551931e0dfd" host="localhost" May 15 00:09:02.019466 containerd[1443]: 2025-05-15 00:09:01.968 [INFO][4398] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 00:09:02.019466 containerd[1443]: 2025-05-15 00:09:01.975 [INFO][4398] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 00:09:02.019466 containerd[1443]: 2025-05-15 00:09:01.979 [INFO][4398] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 00:09:02.019466 containerd[1443]: 2025-05-15 00:09:01.982 [INFO][4398] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 00:09:02.019466 containerd[1443]: 2025-05-15 00:09:01.982 [INFO][4398] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f6b7a358098dc44c785cc0ea662f731d0e7f4ef6d80a70f17dfcd551931e0dfd" host="localhost" May 15 00:09:02.019466 containerd[1443]: 2025-05-15 00:09:01.985 [INFO][4398] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f6b7a358098dc44c785cc0ea662f731d0e7f4ef6d80a70f17dfcd551931e0dfd May 15 00:09:02.019466 containerd[1443]: 2025-05-15 00:09:01.991 [INFO][4398] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f6b7a358098dc44c785cc0ea662f731d0e7f4ef6d80a70f17dfcd551931e0dfd" host="localhost" May 15 00:09:02.019466 containerd[1443]: 2025-05-15 00:09:01.996 [INFO][4398] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.f6b7a358098dc44c785cc0ea662f731d0e7f4ef6d80a70f17dfcd551931e0dfd" host="localhost" May 15 00:09:02.019466 containerd[1443]: 2025-05-15 00:09:01.996 [INFO][4398] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.f6b7a358098dc44c785cc0ea662f731d0e7f4ef6d80a70f17dfcd551931e0dfd" host="localhost" May 15 00:09:02.019466 containerd[1443]: 2025-05-15 00:09:01.996 [INFO][4398] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 00:09:02.019466 containerd[1443]: 2025-05-15 00:09:01.996 [INFO][4398] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="f6b7a358098dc44c785cc0ea662f731d0e7f4ef6d80a70f17dfcd551931e0dfd" HandleID="k8s-pod-network.f6b7a358098dc44c785cc0ea662f731d0e7f4ef6d80a70f17dfcd551931e0dfd" Workload="localhost-k8s-calico--apiserver--6876c9b49--8jp5s-eth0" May 15 00:09:02.021519 containerd[1443]: 2025-05-15 00:09:01.999 [INFO][4372] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f6b7a358098dc44c785cc0ea662f731d0e7f4ef6d80a70f17dfcd551931e0dfd" Namespace="calico-apiserver" Pod="calico-apiserver-6876c9b49-8jp5s" WorkloadEndpoint="localhost-k8s-calico--apiserver--6876c9b49--8jp5s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6876c9b49--8jp5s-eth0", GenerateName:"calico-apiserver-6876c9b49-", Namespace:"calico-apiserver", SelfLink:"", UID:"da1e193e-d1e2-4c51-883a-4a7628d9c3e9", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 8, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6876c9b49", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6876c9b49-8jp5s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia7d5d680593", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:09:02.021519 containerd[1443]: 2025-05-15 00:09:01.999 [INFO][4372] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="f6b7a358098dc44c785cc0ea662f731d0e7f4ef6d80a70f17dfcd551931e0dfd" Namespace="calico-apiserver" Pod="calico-apiserver-6876c9b49-8jp5s" WorkloadEndpoint="localhost-k8s-calico--apiserver--6876c9b49--8jp5s-eth0" May 15 00:09:02.021519 containerd[1443]: 2025-05-15 00:09:01.999 [INFO][4372] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia7d5d680593 ContainerID="f6b7a358098dc44c785cc0ea662f731d0e7f4ef6d80a70f17dfcd551931e0dfd" Namespace="calico-apiserver" Pod="calico-apiserver-6876c9b49-8jp5s" WorkloadEndpoint="localhost-k8s-calico--apiserver--6876c9b49--8jp5s-eth0" May 15 00:09:02.021519 containerd[1443]: 2025-05-15 00:09:02.002 [INFO][4372] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f6b7a358098dc44c785cc0ea662f731d0e7f4ef6d80a70f17dfcd551931e0dfd" Namespace="calico-apiserver" Pod="calico-apiserver-6876c9b49-8jp5s" WorkloadEndpoint="localhost-k8s-calico--apiserver--6876c9b49--8jp5s-eth0" May 15 00:09:02.021519 containerd[1443]: 2025-05-15 00:09:02.003 [INFO][4372] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f6b7a358098dc44c785cc0ea662f731d0e7f4ef6d80a70f17dfcd551931e0dfd" 
Namespace="calico-apiserver" Pod="calico-apiserver-6876c9b49-8jp5s" WorkloadEndpoint="localhost-k8s-calico--apiserver--6876c9b49--8jp5s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6876c9b49--8jp5s-eth0", GenerateName:"calico-apiserver-6876c9b49-", Namespace:"calico-apiserver", SelfLink:"", UID:"da1e193e-d1e2-4c51-883a-4a7628d9c3e9", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 8, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6876c9b49", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f6b7a358098dc44c785cc0ea662f731d0e7f4ef6d80a70f17dfcd551931e0dfd", Pod:"calico-apiserver-6876c9b49-8jp5s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia7d5d680593", MAC:"1e:60:af:06:0e:ff", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:09:02.021519 containerd[1443]: 2025-05-15 00:09:02.015 [INFO][4372] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f6b7a358098dc44c785cc0ea662f731d0e7f4ef6d80a70f17dfcd551931e0dfd" Namespace="calico-apiserver" Pod="calico-apiserver-6876c9b49-8jp5s" WorkloadEndpoint="localhost-k8s-calico--apiserver--6876c9b49--8jp5s-eth0" May 15 00:09:02.040184 systemd-networkd[1384]: calif4e03c90393: Link UP May 15 00:09:02.042923 systemd-networkd[1384]: calif4e03c90393: Gained carrier May 15 00:09:02.045922 containerd[1443]: time="2025-05-15T00:09:02.045679829Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:09:02.045922 containerd[1443]: time="2025-05-15T00:09:02.045743112Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:09:02.045922 containerd[1443]: time="2025-05-15T00:09:02.045758033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:09:02.045922 containerd[1443]: time="2025-05-15T00:09:02.045872119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:09:02.054907 containerd[1443]: 2025-05-15 00:09:01.916 [INFO][4386] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 15 00:09:02.054907 containerd[1443]: 2025-05-15 00:09:01.936 [INFO][4386] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6876c9b49--s24wt-eth0 calico-apiserver-6876c9b49- calico-apiserver 5c1a3ff2-9a98-4275-87ea-6992a522449a 927 0 2025-05-15 00:08:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6876c9b49 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6876c9b49-s24wt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif4e03c90393 [] []}} ContainerID="826d974a7c9a56c00030dd23900e0d09370247277e2708e3438cc05c5e6400b2" Namespace="calico-apiserver" Pod="calico-apiserver-6876c9b49-s24wt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6876c9b49--s24wt-" May 15 00:09:02.054907 containerd[1443]: 2025-05-15 00:09:01.936 [INFO][4386] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="826d974a7c9a56c00030dd23900e0d09370247277e2708e3438cc05c5e6400b2" Namespace="calico-apiserver" Pod="calico-apiserver-6876c9b49-s24wt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6876c9b49--s24wt-eth0" May 15 00:09:02.054907 containerd[1443]: 2025-05-15 00:09:01.967 [INFO][4408] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="826d974a7c9a56c00030dd23900e0d09370247277e2708e3438cc05c5e6400b2" HandleID="k8s-pod-network.826d974a7c9a56c00030dd23900e0d09370247277e2708e3438cc05c5e6400b2" Workload="localhost-k8s-calico--apiserver--6876c9b49--s24wt-eth0" May 15 00:09:02.054907 containerd[1443]: 2025-05-15 00:09:01.982 [INFO][4408] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="826d974a7c9a56c00030dd23900e0d09370247277e2708e3438cc05c5e6400b2" HandleID="k8s-pod-network.826d974a7c9a56c00030dd23900e0d09370247277e2708e3438cc05c5e6400b2" Workload="localhost-k8s-calico--apiserver--6876c9b49--s24wt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f34e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6876c9b49-s24wt", "timestamp":"2025-05-15 00:09:01.966981717 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 00:09:02.054907 containerd[1443]: 2025-05-15 00:09:01.982 [INFO][4408] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:09:02.054907 containerd[1443]: 2025-05-15 00:09:01.996 [INFO][4408] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 00:09:02.054907 containerd[1443]: 2025-05-15 00:09:01.996 [INFO][4408] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 00:09:02.054907 containerd[1443]: 2025-05-15 00:09:01.998 [INFO][4408] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.826d974a7c9a56c00030dd23900e0d09370247277e2708e3438cc05c5e6400b2" host="localhost" May 15 00:09:02.054907 containerd[1443]: 2025-05-15 00:09:02.005 [INFO][4408] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 00:09:02.054907 containerd[1443]: 2025-05-15 00:09:02.017 [INFO][4408] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 00:09:02.054907 containerd[1443]: 2025-05-15 00:09:02.020 [INFO][4408] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 00:09:02.054907 containerd[1443]: 2025-05-15 00:09:02.024 [INFO][4408] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 00:09:02.054907 containerd[1443]: 2025-05-15 00:09:02.024 [INFO][4408] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.826d974a7c9a56c00030dd23900e0d09370247277e2708e3438cc05c5e6400b2" host="localhost" May 15 00:09:02.054907 containerd[1443]: 2025-05-15 00:09:02.026 [INFO][4408] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.826d974a7c9a56c00030dd23900e0d09370247277e2708e3438cc05c5e6400b2 May 15 00:09:02.054907 containerd[1443]: 2025-05-15 00:09:02.029 [INFO][4408] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.826d974a7c9a56c00030dd23900e0d09370247277e2708e3438cc05c5e6400b2" host="localhost" May 15 00:09:02.054907 containerd[1443]: 2025-05-15 00:09:02.035 [INFO][4408] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.826d974a7c9a56c00030dd23900e0d09370247277e2708e3438cc05c5e6400b2" host="localhost" May 15 00:09:02.054907 containerd[1443]: 2025-05-15 00:09:02.035 [INFO][4408] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.826d974a7c9a56c00030dd23900e0d09370247277e2708e3438cc05c5e6400b2" host="localhost" May 15 00:09:02.054907 containerd[1443]: 2025-05-15 00:09:02.035 [INFO][4408] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 00:09:02.054907 containerd[1443]: 2025-05-15 00:09:02.035 [INFO][4408] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="826d974a7c9a56c00030dd23900e0d09370247277e2708e3438cc05c5e6400b2" HandleID="k8s-pod-network.826d974a7c9a56c00030dd23900e0d09370247277e2708e3438cc05c5e6400b2" Workload="localhost-k8s-calico--apiserver--6876c9b49--s24wt-eth0" May 15 00:09:02.055617 containerd[1443]: 2025-05-15 00:09:02.037 [INFO][4386] cni-plugin/k8s.go 386: Populated endpoint ContainerID="826d974a7c9a56c00030dd23900e0d09370247277e2708e3438cc05c5e6400b2" Namespace="calico-apiserver" Pod="calico-apiserver-6876c9b49-s24wt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6876c9b49--s24wt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6876c9b49--s24wt-eth0", GenerateName:"calico-apiserver-6876c9b49-", Namespace:"calico-apiserver", SelfLink:"", UID:"5c1a3ff2-9a98-4275-87ea-6992a522449a", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 8, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6876c9b49", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6876c9b49-s24wt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif4e03c90393", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:09:02.055617 containerd[1443]: 2025-05-15 00:09:02.037 [INFO][4386] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="826d974a7c9a56c00030dd23900e0d09370247277e2708e3438cc05c5e6400b2" Namespace="calico-apiserver" Pod="calico-apiserver-6876c9b49-s24wt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6876c9b49--s24wt-eth0" May 15 00:09:02.055617 containerd[1443]: 2025-05-15 00:09:02.037 [INFO][4386] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif4e03c90393 ContainerID="826d974a7c9a56c00030dd23900e0d09370247277e2708e3438cc05c5e6400b2" Namespace="calico-apiserver" Pod="calico-apiserver-6876c9b49-s24wt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6876c9b49--s24wt-eth0" May 15 00:09:02.055617 containerd[1443]: 2025-05-15 00:09:02.041 [INFO][4386] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="826d974a7c9a56c00030dd23900e0d09370247277e2708e3438cc05c5e6400b2" Namespace="calico-apiserver" Pod="calico-apiserver-6876c9b49-s24wt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6876c9b49--s24wt-eth0" May 15 00:09:02.055617 containerd[1443]: 2025-05-15 00:09:02.041 [INFO][4386] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="826d974a7c9a56c00030dd23900e0d09370247277e2708e3438cc05c5e6400b2" 
Namespace="calico-apiserver" Pod="calico-apiserver-6876c9b49-s24wt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6876c9b49--s24wt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6876c9b49--s24wt-eth0", GenerateName:"calico-apiserver-6876c9b49-", Namespace:"calico-apiserver", SelfLink:"", UID:"5c1a3ff2-9a98-4275-87ea-6992a522449a", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 8, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6876c9b49", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"826d974a7c9a56c00030dd23900e0d09370247277e2708e3438cc05c5e6400b2", Pod:"calico-apiserver-6876c9b49-s24wt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif4e03c90393", MAC:"16:0c:8f:7a:b7:7b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:09:02.055617 containerd[1443]: 2025-05-15 00:09:02.051 [INFO][4386] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="826d974a7c9a56c00030dd23900e0d09370247277e2708e3438cc05c5e6400b2" Namespace="calico-apiserver" Pod="calico-apiserver-6876c9b49-s24wt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6876c9b49--s24wt-eth0" May 15 00:09:02.072010 systemd[1]: Started cri-containerd-f6b7a358098dc44c785cc0ea662f731d0e7f4ef6d80a70f17dfcd551931e0dfd.scope - libcontainer container f6b7a358098dc44c785cc0ea662f731d0e7f4ef6d80a70f17dfcd551931e0dfd. May 15 00:09:02.079954 containerd[1443]: time="2025-05-15T00:09:02.079825020Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:09:02.080109 containerd[1443]: time="2025-05-15T00:09:02.079936306Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:09:02.080229 containerd[1443]: time="2025-05-15T00:09:02.080161038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:09:02.080442 containerd[1443]: time="2025-05-15T00:09:02.080366488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:09:02.088187 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 00:09:02.105963 systemd[1]: Started cri-containerd-826d974a7c9a56c00030dd23900e0d09370247277e2708e3438cc05c5e6400b2.scope - libcontainer container 826d974a7c9a56c00030dd23900e0d09370247277e2708e3438cc05c5e6400b2. 
May 15 00:09:02.115327 containerd[1443]: time="2025-05-15T00:09:02.115289159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6876c9b49-8jp5s,Uid:da1e193e-d1e2-4c51-883a-4a7628d9c3e9,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f6b7a358098dc44c785cc0ea662f731d0e7f4ef6d80a70f17dfcd551931e0dfd\"" May 15 00:09:02.120115 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 00:09:02.146310 containerd[1443]: time="2025-05-15T00:09:02.146246187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6876c9b49-s24wt,Uid:5c1a3ff2-9a98-4275-87ea-6992a522449a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"826d974a7c9a56c00030dd23900e0d09370247277e2708e3438cc05c5e6400b2\"" May 15 00:09:02.738664 containerd[1443]: time="2025-05-15T00:09:02.737926459Z" level=info msg="StopPodSandbox for \"62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4\"" May 15 00:09:02.771877 containerd[1443]: time="2025-05-15T00:09:02.771834038Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:09:02.772840 containerd[1443]: time="2025-05-15T00:09:02.772636639Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116" May 15 00:09:02.774398 containerd[1443]: time="2025-05-15T00:09:02.774354327Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:09:02.777012 containerd[1443]: time="2025-05-15T00:09:02.776969901Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:09:02.778084 containerd[1443]: time="2025-05-15T00:09:02.778049757Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 1.535991301s" May 15 00:09:02.778130 containerd[1443]: time="2025-05-15T00:09:02.778084679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" May 15 00:09:02.780329 containerd[1443]: time="2025-05-15T00:09:02.779604797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 15 00:09:02.799003 containerd[1443]: time="2025-05-15T00:09:02.798954989Z" level=info msg="CreateContainer within sandbox \"cd566e9fac0cc75c5c0e49e0b36cf54d28af6625f6a152a7b9b7c3f566b8ab69\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 15 00:09:02.810477 containerd[1443]: time="2025-05-15T00:09:02.810417737Z" level=info msg="CreateContainer within sandbox \"cd566e9fac0cc75c5c0e49e0b36cf54d28af6625f6a152a7b9b7c3f566b8ab69\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"0940bf8195454aad4ea2f37b969b5215768280a3bc502ff49bc805d3a07d320c\"" May 15 00:09:02.811054 containerd[1443]: 
time="2025-05-15T00:09:02.811030089Z" level=info msg="StartContainer for \"0940bf8195454aad4ea2f37b969b5215768280a3bc502ff49bc805d3a07d320c\"" May 15 00:09:02.823928 containerd[1443]: 2025-05-15 00:09:02.785 [INFO][4561] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4" May 15 00:09:02.823928 containerd[1443]: 2025-05-15 00:09:02.785 [INFO][4561] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4" iface="eth0" netns="/var/run/netns/cni-e97ca681-b3e9-96f5-8712-5befda56af72" May 15 00:09:02.823928 containerd[1443]: 2025-05-15 00:09:02.785 [INFO][4561] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4" iface="eth0" netns="/var/run/netns/cni-e97ca681-b3e9-96f5-8712-5befda56af72" May 15 00:09:02.823928 containerd[1443]: 2025-05-15 00:09:02.785 [INFO][4561] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4" iface="eth0" netns="/var/run/netns/cni-e97ca681-b3e9-96f5-8712-5befda56af72" May 15 00:09:02.823928 containerd[1443]: 2025-05-15 00:09:02.786 [INFO][4561] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4" May 15 00:09:02.823928 containerd[1443]: 2025-05-15 00:09:02.786 [INFO][4561] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4" May 15 00:09:02.823928 containerd[1443]: 2025-05-15 00:09:02.809 [INFO][4570] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4" HandleID="k8s-pod-network.62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4" Workload="localhost-k8s-coredns--7db6d8ff4d--jxzt2-eth0" May 15 00:09:02.823928 containerd[1443]: 2025-05-15 00:09:02.809 [INFO][4570] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:09:02.823928 containerd[1443]: 2025-05-15 00:09:02.809 [INFO][4570] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:09:02.823928 containerd[1443]: 2025-05-15 00:09:02.817 [WARNING][4570] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4" HandleID="k8s-pod-network.62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4" Workload="localhost-k8s-coredns--7db6d8ff4d--jxzt2-eth0" May 15 00:09:02.823928 containerd[1443]: 2025-05-15 00:09:02.817 [INFO][4570] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4" HandleID="k8s-pod-network.62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4" Workload="localhost-k8s-coredns--7db6d8ff4d--jxzt2-eth0" May 15 00:09:02.823928 containerd[1443]: 2025-05-15 00:09:02.818 [INFO][4570] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:09:02.823928 containerd[1443]: 2025-05-15 00:09:02.822 [INFO][4561] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4" May 15 00:09:02.824644 containerd[1443]: time="2025-05-15T00:09:02.824445377Z" level=info msg="TearDown network for sandbox \"62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4\" successfully" May 15 00:09:02.824644 containerd[1443]: time="2025-05-15T00:09:02.824479819Z" level=info msg="StopPodSandbox for \"62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4\" returns successfully" May 15 00:09:02.825190 kubelet[2558]: E0515 00:09:02.824907 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:09:02.825978 containerd[1443]: time="2025-05-15T00:09:02.825924933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jxzt2,Uid:2b7a008e-9f04-4b23-afda-16075b376325,Namespace:kube-system,Attempt:1,}" May 15 00:09:02.837958 systemd[1]: Started cri-containerd-0940bf8195454aad4ea2f37b969b5215768280a3bc502ff49bc805d3a07d320c.scope - libcontainer container 0940bf8195454aad4ea2f37b969b5215768280a3bc502ff49bc805d3a07d320c. May 15 00:09:02.876221 containerd[1443]: time="2025-05-15T00:09:02.876157949Z" level=info msg="StartContainer for \"0940bf8195454aad4ea2f37b969b5215768280a3bc502ff49bc805d3a07d320c\" returns successfully" May 15 00:09:02.935233 systemd[1]: run-netns-cni\x2de97ca681\x2db3e9\x2d96f5\x2d8712\x2d5befda56af72.mount: Deactivated successfully. May 15 00:09:03.007747 systemd-networkd[1384]: cali0b82fd6ea96: Link UP May 15 00:09:03.008006 systemd-networkd[1384]: cali0b82fd6ea96: Gained carrier May 15 00:09:03.021259 kubelet[2558]: I0515 00:09:03.018164 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-57d8fbfc9f-6zbq8" podStartSLOduration=21.480486948 podStartE2EDuration="23.018143016s" podCreationTimestamp="2025-05-15 00:08:40 +0000 UTC" firstStartedPulling="2025-05-15 00:09:01.241550828 +0000 UTC m=+42.583410624" lastFinishedPulling="2025-05-15 00:09:02.779206896 +0000 UTC m=+44.121066692" observedRunningTime="2025-05-15 00:09:02.92100537 +0000 UTC m=+44.262865166" watchObservedRunningTime="2025-05-15 00:09:03.018143016 +0000 UTC m=+44.360002772" May 15 00:09:03.027696 containerd[1443]: 2025-05-15 00:09:02.871 [INFO][4603] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 15 00:09:03.027696 containerd[1443]: 2025-05-15 00:09:02.886 [INFO][4603] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--jxzt2-eth0 coredns-7db6d8ff4d- kube-system 2b7a008e-9f04-4b23-afda-16075b376325 947 0 2025-05-15 00:08:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-jxzt2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0b82fd6ea96 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="38f61b4389251d06edf38f89498115af1d11ae5584c76a4a1be9256d10727a99" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jxzt2" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--jxzt2-" May 15 00:09:03.027696 containerd[1443]: 2025-05-15 00:09:02.886 [INFO][4603] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="38f61b4389251d06edf38f89498115af1d11ae5584c76a4a1be9256d10727a99" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jxzt2" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--jxzt2-eth0" May 15 00:09:03.027696 containerd[1443]: 2025-05-15 00:09:02.928 [INFO][4628] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="38f61b4389251d06edf38f89498115af1d11ae5584c76a4a1be9256d10727a99" HandleID="k8s-pod-network.38f61b4389251d06edf38f89498115af1d11ae5584c76a4a1be9256d10727a99" Workload="localhost-k8s-coredns--7db6d8ff4d--jxzt2-eth0" May 15 00:09:03.027696 containerd[1443]: 2025-05-15 00:09:02.953 [INFO][4628] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="38f61b4389251d06edf38f89498115af1d11ae5584c76a4a1be9256d10727a99" HandleID="k8s-pod-network.38f61b4389251d06edf38f89498115af1d11ae5584c76a4a1be9256d10727a99" Workload="localhost-k8s-coredns--7db6d8ff4d--jxzt2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003ad1c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-jxzt2", "timestamp":"2025-05-15 00:09:02.928383748 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 00:09:03.027696 containerd[1443]: 2025-05-15 00:09:02.953 [INFO][4628] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:09:03.027696 containerd[1443]: 2025-05-15 00:09:02.953 [INFO][4628] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:09:03.027696 containerd[1443]: 2025-05-15 00:09:02.953 [INFO][4628] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 00:09:03.027696 containerd[1443]: 2025-05-15 00:09:02.956 [INFO][4628] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.38f61b4389251d06edf38f89498115af1d11ae5584c76a4a1be9256d10727a99" host="localhost" May 15 00:09:03.027696 containerd[1443]: 2025-05-15 00:09:02.968 [INFO][4628] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 00:09:03.027696 containerd[1443]: 2025-05-15 00:09:02.975 [INFO][4628] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 00:09:03.027696 containerd[1443]: 2025-05-15 00:09:02.978 [INFO][4628] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 00:09:03.027696 containerd[1443]: 2025-05-15 00:09:02.980 [INFO][4628] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 00:09:03.027696 containerd[1443]: 2025-05-15 00:09:02.980 [INFO][4628] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.38f61b4389251d06edf38f89498115af1d11ae5584c76a4a1be9256d10727a99" host="localhost" May 15 00:09:03.027696 containerd[1443]: 2025-05-15 00:09:02.984 [INFO][4628] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.38f61b4389251d06edf38f89498115af1d11ae5584c76a4a1be9256d10727a99 May 15 00:09:03.027696 containerd[1443]: 2025-05-15 00:09:02.995 [INFO][4628] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.38f61b4389251d06edf38f89498115af1d11ae5584c76a4a1be9256d10727a99" host="localhost" May 15 00:09:03.027696 containerd[1443]: 2025-05-15 00:09:03.002 [INFO][4628] ipam/ipam.go 1216: Successfully 
claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.38f61b4389251d06edf38f89498115af1d11ae5584c76a4a1be9256d10727a99" host="localhost" May 15 00:09:03.027696 containerd[1443]: 2025-05-15 00:09:03.003 [INFO][4628] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.38f61b4389251d06edf38f89498115af1d11ae5584c76a4a1be9256d10727a99" host="localhost" May 15 00:09:03.027696 containerd[1443]: 2025-05-15 00:09:03.003 [INFO][4628] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:09:03.027696 containerd[1443]: 2025-05-15 00:09:03.003 [INFO][4628] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="38f61b4389251d06edf38f89498115af1d11ae5584c76a4a1be9256d10727a99" HandleID="k8s-pod-network.38f61b4389251d06edf38f89498115af1d11ae5584c76a4a1be9256d10727a99" Workload="localhost-k8s-coredns--7db6d8ff4d--jxzt2-eth0" May 15 00:09:03.028293 containerd[1443]: 2025-05-15 00:09:03.004 [INFO][4603] cni-plugin/k8s.go 386: Populated endpoint ContainerID="38f61b4389251d06edf38f89498115af1d11ae5584c76a4a1be9256d10727a99" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jxzt2" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--jxzt2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--jxzt2-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2b7a008e-9f04-4b23-afda-16075b376325", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 8, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-jxzt2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0b82fd6ea96", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:09:03.028293 containerd[1443]: 2025-05-15 00:09:03.005 [INFO][4603] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="38f61b4389251d06edf38f89498115af1d11ae5584c76a4a1be9256d10727a99" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jxzt2" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--jxzt2-eth0" May 15 00:09:03.028293 containerd[1443]: 2025-05-15 00:09:03.005 [INFO][4603] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0b82fd6ea96 
ContainerID="38f61b4389251d06edf38f89498115af1d11ae5584c76a4a1be9256d10727a99" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jxzt2" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--jxzt2-eth0" May 15 00:09:03.028293 containerd[1443]: 2025-05-15 00:09:03.007 [INFO][4603] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="38f61b4389251d06edf38f89498115af1d11ae5584c76a4a1be9256d10727a99" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jxzt2" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--jxzt2-eth0" May 15 00:09:03.028293 containerd[1443]: 2025-05-15 00:09:03.008 [INFO][4603] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="38f61b4389251d06edf38f89498115af1d11ae5584c76a4a1be9256d10727a99" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jxzt2" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--jxzt2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--jxzt2-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2b7a008e-9f04-4b23-afda-16075b376325", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 8, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"38f61b4389251d06edf38f89498115af1d11ae5584c76a4a1be9256d10727a99", Pod:"coredns-7db6d8ff4d-jxzt2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0b82fd6ea96", MAC:"26:69:72:71:af:ca", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:09:03.028293 containerd[1443]: 2025-05-15 00:09:03.016 [INFO][4603] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="38f61b4389251d06edf38f89498115af1d11ae5584c76a4a1be9256d10727a99" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jxzt2" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--jxzt2-eth0" May 15 00:09:03.056187 containerd[1443]: time="2025-05-15T00:09:03.055468294Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:09:03.056187 containerd[1443]: time="2025-05-15T00:09:03.056016562Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:09:03.056187 containerd[1443]: time="2025-05-15T00:09:03.056030082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:09:03.056187 containerd[1443]: time="2025-05-15T00:09:03.056115167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:09:03.076965 systemd[1]: Started cri-containerd-38f61b4389251d06edf38f89498115af1d11ae5584c76a4a1be9256d10727a99.scope - libcontainer container 38f61b4389251d06edf38f89498115af1d11ae5584c76a4a1be9256d10727a99. May 15 00:09:03.094005 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 00:09:03.123607 containerd[1443]: time="2025-05-15T00:09:03.123565041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jxzt2,Uid:2b7a008e-9f04-4b23-afda-16075b376325,Namespace:kube-system,Attempt:1,} returns sandbox id \"38f61b4389251d06edf38f89498115af1d11ae5584c76a4a1be9256d10727a99\"" May 15 00:09:03.124497 kubelet[2558]: E0515 00:09:03.124470 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:09:03.129137 containerd[1443]: time="2025-05-15T00:09:03.129094879Z" level=info msg="CreateContainer within sandbox \"38f61b4389251d06edf38f89498115af1d11ae5584c76a4a1be9256d10727a99\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 00:09:03.144054 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount406899499.mount: Deactivated successfully. May 15 00:09:03.145049 containerd[1443]: time="2025-05-15T00:09:03.144625700Z" level=info msg="CreateContainer within sandbox \"38f61b4389251d06edf38f89498115af1d11ae5584c76a4a1be9256d10727a99\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1ab47d39b5629aae956246187c15b95877eb924f9dc6ff5466c07308d4903d9e\"" May 15 00:09:03.145440 containerd[1443]: time="2025-05-15T00:09:03.145398139Z" level=info msg="StartContainer for \"1ab47d39b5629aae956246187c15b95877eb924f9dc6ff5466c07308d4903d9e\"" May 15 00:09:03.160696 systemd-networkd[1384]: cali497f76a107a: Gained IPv6LL May 15 00:09:03.160959 systemd-networkd[1384]: calia7d5d680593: Gained IPv6LL May 15 00:09:03.170964 systemd[1]: Started cri-containerd-1ab47d39b5629aae956246187c15b95877eb924f9dc6ff5466c07308d4903d9e.scope - libcontainer container 1ab47d39b5629aae956246187c15b95877eb924f9dc6ff5466c07308d4903d9e. 
May 15 00:09:03.192924 containerd[1443]: time="2025-05-15T00:09:03.192770563Z" level=info msg="StartContainer for \"1ab47d39b5629aae956246187c15b95877eb924f9dc6ff5466c07308d4903d9e\" returns successfully" May 15 00:09:03.222916 systemd-networkd[1384]: calif4e03c90393: Gained IPv6LL May 15 00:09:03.741262 containerd[1443]: time="2025-05-15T00:09:03.741188839Z" level=info msg="StopPodSandbox for \"c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6\"" May 15 00:09:03.742160 containerd[1443]: time="2025-05-15T00:09:03.741529456Z" level=info msg="StopPodSandbox for \"cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961\"" May 15 00:09:03.876196 containerd[1443]: 2025-05-15 00:09:03.801 [INFO][4788] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6" May 15 00:09:03.876196 containerd[1443]: 2025-05-15 00:09:03.801 [INFO][4788] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6" iface="eth0" netns="/var/run/netns/cni-59f99eed-491d-b85d-918a-86a7e047418d" May 15 00:09:03.876196 containerd[1443]: 2025-05-15 00:09:03.801 [INFO][4788] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6" iface="eth0" netns="/var/run/netns/cni-59f99eed-491d-b85d-918a-86a7e047418d" May 15 00:09:03.876196 containerd[1443]: 2025-05-15 00:09:03.804 [INFO][4788] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6" iface="eth0" netns="/var/run/netns/cni-59f99eed-491d-b85d-918a-86a7e047418d" May 15 00:09:03.876196 containerd[1443]: 2025-05-15 00:09:03.804 [INFO][4788] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6" May 15 00:09:03.876196 containerd[1443]: 2025-05-15 00:09:03.804 [INFO][4788] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6" May 15 00:09:03.876196 containerd[1443]: 2025-05-15 00:09:03.861 [INFO][4799] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6" HandleID="k8s-pod-network.c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6" Workload="localhost-k8s-csi--node--driver--nrvq4-eth0" May 15 00:09:03.876196 containerd[1443]: 2025-05-15 00:09:03.861 [INFO][4799] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:09:03.876196 containerd[1443]: 2025-05-15 00:09:03.861 [INFO][4799] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:09:03.876196 containerd[1443]: 2025-05-15 00:09:03.872 [WARNING][4799] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6" HandleID="k8s-pod-network.c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6" Workload="localhost-k8s-csi--node--driver--nrvq4-eth0" May 15 00:09:03.876196 containerd[1443]: 2025-05-15 00:09:03.872 [INFO][4799] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6" HandleID="k8s-pod-network.c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6" Workload="localhost-k8s-csi--node--driver--nrvq4-eth0" May 15 00:09:03.876196 containerd[1443]: 2025-05-15 00:09:03.873 [INFO][4799] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:09:03.876196 containerd[1443]: 2025-05-15 00:09:03.874 [INFO][4788] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6" May 15 00:09:03.876902 containerd[1443]: time="2025-05-15T00:09:03.876872147Z" level=info msg="TearDown network for sandbox \"c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6\" successfully" May 15 00:09:03.876942 containerd[1443]: time="2025-05-15T00:09:03.876903348Z" level=info msg="StopPodSandbox for \"c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6\" returns successfully" May 15 00:09:03.878740 containerd[1443]: time="2025-05-15T00:09:03.878376342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nrvq4,Uid:262bbf25-d43c-443e-a611-5ff6be2347dc,Namespace:calico-system,Attempt:1,}" May 15 00:09:03.889182 containerd[1443]: 2025-05-15 00:09:03.805 [INFO][4778] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961" May 15 00:09:03.889182 containerd[1443]: 2025-05-15 00:09:03.805 [INFO][4778] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961" iface="eth0" netns="/var/run/netns/cni-b4cf1a3e-a7ab-8fa1-f07a-282b1d4dcf7a" May 15 00:09:03.889182 containerd[1443]: 2025-05-15 00:09:03.805 [INFO][4778] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961" iface="eth0" netns="/var/run/netns/cni-b4cf1a3e-a7ab-8fa1-f07a-282b1d4dcf7a" May 15 00:09:03.889182 containerd[1443]: 2025-05-15 00:09:03.805 [INFO][4778] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961" iface="eth0" netns="/var/run/netns/cni-b4cf1a3e-a7ab-8fa1-f07a-282b1d4dcf7a" May 15 00:09:03.889182 containerd[1443]: 2025-05-15 00:09:03.805 [INFO][4778] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961" May 15 00:09:03.889182 containerd[1443]: 2025-05-15 00:09:03.805 [INFO][4778] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961" May 15 00:09:03.889182 containerd[1443]: 2025-05-15 00:09:03.861 [INFO][4801] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961" HandleID="k8s-pod-network.cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961" Workload="localhost-k8s-coredns--7db6d8ff4d--f6wff-eth0" May 15 00:09:03.889182 containerd[1443]: 2025-05-15 00:09:03.861 [INFO][4801] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:09:03.889182 containerd[1443]: 2025-05-15 00:09:03.873 [INFO][4801] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:09:03.889182 containerd[1443]: 2025-05-15 00:09:03.883 [WARNING][4801] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961" HandleID="k8s-pod-network.cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961" Workload="localhost-k8s-coredns--7db6d8ff4d--f6wff-eth0" May 15 00:09:03.889182 containerd[1443]: 2025-05-15 00:09:03.883 [INFO][4801] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961" HandleID="k8s-pod-network.cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961" Workload="localhost-k8s-coredns--7db6d8ff4d--f6wff-eth0" May 15 00:09:03.889182 containerd[1443]: 2025-05-15 00:09:03.885 [INFO][4801] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:09:03.889182 containerd[1443]: 2025-05-15 00:09:03.886 [INFO][4778] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961" May 15 00:09:03.889695 containerd[1443]: time="2025-05-15T00:09:03.889314453Z" level=info msg="TearDown network for sandbox \"cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961\" successfully" May 15 00:09:03.889695 containerd[1443]: time="2025-05-15T00:09:03.889338214Z" level=info msg="StopPodSandbox for \"cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961\" returns successfully" May 15 00:09:03.889749 kubelet[2558]: E0515 00:09:03.889685 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:09:03.891221 containerd[1443]: time="2025-05-15T00:09:03.891194747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-f6wff,Uid:8f6d68a1-7bcc-4d68-9390-ac560659ee14,Namespace:kube-system,Attempt:1,}" May 15 00:09:03.918501 kubelet[2558]: I0515 00:09:03.917237 2558 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 00:09:03.920518 kubelet[2558]: E0515 00:09:03.919513 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:09:03.943523 systemd[1]: run-netns-cni\x2d59f99eed\x2d491d\x2db85d\x2d918a\x2d86a7e047418d.mount: Deactivated successfully. May 15 00:09:03.943734 systemd[1]: run-netns-cni\x2db4cf1a3e\x2da7ab\x2d8fa1\x2df07a\x2d282b1d4dcf7a.mount: Deactivated successfully. May 15 00:09:03.957347 kubelet[2558]: I0515 00:09:03.957286 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-jxzt2" podStartSLOduration=30.957267392 podStartE2EDuration="30.957267392s" podCreationTimestamp="2025-05-15 00:08:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:09:03.932404061 +0000 UTC m=+45.274263857" watchObservedRunningTime="2025-05-15 00:09:03.957267392 +0000 UTC m=+45.299127188" May 15 00:09:04.160624 systemd-networkd[1384]: cali89a5c225f25: Link UP May 15 00:09:04.160854 systemd-networkd[1384]: cali89a5c225f25: Gained carrier May 15 00:09:04.178106 containerd[1443]: 2025-05-15 00:09:03.904 [INFO][4818] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 15 00:09:04.178106 containerd[1443]: 2025-05-15 00:09:03.923 [INFO][4818] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--nrvq4-eth0 csi-node-driver- calico-system 262bbf25-d43c-443e-a611-5ff6be2347dc 966 0 2025-05-15 00:08:40 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-nrvq4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali89a5c225f25 [] []}} ContainerID="fd57114b02956f814addae92d3ff93eeeab3ad92d0a6e10bc0f53c96f863a370" Namespace="calico-system" Pod="csi-node-driver-nrvq4" WorkloadEndpoint="localhost-k8s-csi--node--driver--nrvq4-" May 15 00:09:04.178106 containerd[1443]: 2025-05-15 00:09:03.923 [INFO][4818] cni-plugin/k8s.go 77: Extracted identifiers for 
CmdAddK8s ContainerID="fd57114b02956f814addae92d3ff93eeeab3ad92d0a6e10bc0f53c96f863a370" Namespace="calico-system" Pod="csi-node-driver-nrvq4" WorkloadEndpoint="localhost-k8s-csi--node--driver--nrvq4-eth0" May 15 00:09:04.178106 containerd[1443]: 2025-05-15 00:09:03.986 [INFO][4848] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fd57114b02956f814addae92d3ff93eeeab3ad92d0a6e10bc0f53c96f863a370" HandleID="k8s-pod-network.fd57114b02956f814addae92d3ff93eeeab3ad92d0a6e10bc0f53c96f863a370" Workload="localhost-k8s-csi--node--driver--nrvq4-eth0" May 15 00:09:04.178106 containerd[1443]: 2025-05-15 00:09:04.009 [INFO][4848] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fd57114b02956f814addae92d3ff93eeeab3ad92d0a6e10bc0f53c96f863a370" HandleID="k8s-pod-network.fd57114b02956f814addae92d3ff93eeeab3ad92d0a6e10bc0f53c96f863a370" Workload="localhost-k8s-csi--node--driver--nrvq4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000132110), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-nrvq4", "timestamp":"2025-05-15 00:09:03.986466221 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 00:09:04.178106 containerd[1443]: 2025-05-15 00:09:04.009 [INFO][4848] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:09:04.178106 containerd[1443]: 2025-05-15 00:09:04.009 [INFO][4848] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:09:04.178106 containerd[1443]: 2025-05-15 00:09:04.009 [INFO][4848] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 00:09:04.178106 containerd[1443]: 2025-05-15 00:09:04.011 [INFO][4848] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fd57114b02956f814addae92d3ff93eeeab3ad92d0a6e10bc0f53c96f863a370" host="localhost" May 15 00:09:04.178106 containerd[1443]: 2025-05-15 00:09:04.015 [INFO][4848] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 00:09:04.178106 containerd[1443]: 2025-05-15 00:09:04.019 [INFO][4848] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 00:09:04.178106 containerd[1443]: 2025-05-15 00:09:04.021 [INFO][4848] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 00:09:04.178106 containerd[1443]: 2025-05-15 00:09:04.023 [INFO][4848] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 00:09:04.178106 containerd[1443]: 2025-05-15 00:09:04.023 [INFO][4848] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fd57114b02956f814addae92d3ff93eeeab3ad92d0a6e10bc0f53c96f863a370" host="localhost" May 15 00:09:04.178106 containerd[1443]: 2025-05-15 00:09:04.025 [INFO][4848] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fd57114b02956f814addae92d3ff93eeeab3ad92d0a6e10bc0f53c96f863a370 May 15 00:09:04.178106 containerd[1443]: 2025-05-15 00:09:04.103 [INFO][4848] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fd57114b02956f814addae92d3ff93eeeab3ad92d0a6e10bc0f53c96f863a370" host="localhost" May 15 00:09:04.178106 containerd[1443]: 2025-05-15 00:09:04.155 [INFO][4848] ipam/ipam.go 1216: Successfully 
claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.fd57114b02956f814addae92d3ff93eeeab3ad92d0a6e10bc0f53c96f863a370" host="localhost" May 15 00:09:04.178106 containerd[1443]: 2025-05-15 00:09:04.155 [INFO][4848] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.fd57114b02956f814addae92d3ff93eeeab3ad92d0a6e10bc0f53c96f863a370" host="localhost" May 15 00:09:04.178106 containerd[1443]: 2025-05-15 00:09:04.155 [INFO][4848] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:09:04.178106 containerd[1443]: 2025-05-15 00:09:04.155 [INFO][4848] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="fd57114b02956f814addae92d3ff93eeeab3ad92d0a6e10bc0f53c96f863a370" HandleID="k8s-pod-network.fd57114b02956f814addae92d3ff93eeeab3ad92d0a6e10bc0f53c96f863a370" Workload="localhost-k8s-csi--node--driver--nrvq4-eth0" May 15 00:09:04.178710 containerd[1443]: 2025-05-15 00:09:04.158 [INFO][4818] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fd57114b02956f814addae92d3ff93eeeab3ad92d0a6e10bc0f53c96f863a370" Namespace="calico-system" Pod="csi-node-driver-nrvq4" WorkloadEndpoint="localhost-k8s-csi--node--driver--nrvq4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--nrvq4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"262bbf25-d43c-443e-a611-5ff6be2347dc", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 8, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-nrvq4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali89a5c225f25", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:09:04.178710 containerd[1443]: 2025-05-15 00:09:04.158 [INFO][4818] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="fd57114b02956f814addae92d3ff93eeeab3ad92d0a6e10bc0f53c96f863a370" Namespace="calico-system" Pod="csi-node-driver-nrvq4" WorkloadEndpoint="localhost-k8s-csi--node--driver--nrvq4-eth0" May 15 00:09:04.178710 containerd[1443]: 2025-05-15 00:09:04.158 [INFO][4818] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali89a5c225f25 ContainerID="fd57114b02956f814addae92d3ff93eeeab3ad92d0a6e10bc0f53c96f863a370" Namespace="calico-system" Pod="csi-node-driver-nrvq4" WorkloadEndpoint="localhost-k8s-csi--node--driver--nrvq4-eth0" May 15 00:09:04.178710 containerd[1443]: 2025-05-15 00:09:04.160 [INFO][4818] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="fd57114b02956f814addae92d3ff93eeeab3ad92d0a6e10bc0f53c96f863a370" Namespace="calico-system" Pod="csi-node-driver-nrvq4" WorkloadEndpoint="localhost-k8s-csi--node--driver--nrvq4-eth0" May 15 00:09:04.178710 containerd[1443]: 2025-05-15 00:09:04.161 [INFO][4818] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fd57114b02956f814addae92d3ff93eeeab3ad92d0a6e10bc0f53c96f863a370" Namespace="calico-system" Pod="csi-node-driver-nrvq4" WorkloadEndpoint="localhost-k8s-csi--node--driver--nrvq4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--nrvq4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"262bbf25-d43c-443e-a611-5ff6be2347dc", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 8, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fd57114b02956f814addae92d3ff93eeeab3ad92d0a6e10bc0f53c96f863a370", Pod:"csi-node-driver-nrvq4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali89a5c225f25", MAC:"96:f9:b4:af:05:c0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:09:04.178710 containerd[1443]: 2025-05-15 00:09:04.173 [INFO][4818] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="fd57114b02956f814addae92d3ff93eeeab3ad92d0a6e10bc0f53c96f863a370" Namespace="calico-system" Pod="csi-node-driver-nrvq4" WorkloadEndpoint="localhost-k8s-csi--node--driver--nrvq4-eth0" May 15 00:09:04.219036 containerd[1443]: time="2025-05-15T00:09:04.218918519Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:09:04.219036 containerd[1443]: time="2025-05-15T00:09:04.218983882Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:09:04.219488 containerd[1443]: time="2025-05-15T00:09:04.218995522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:09:04.219685 containerd[1443]: time="2025-05-15T00:09:04.219591872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:09:04.230817 systemd-networkd[1384]: calif6db1f071e4: Link UP May 15 00:09:04.230968 systemd-networkd[1384]: calif6db1f071e4: Gained carrier May 15 00:09:04.243547 containerd[1443]: 2025-05-15 00:09:03.938 [INFO][4830] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 15 00:09:04.243547 containerd[1443]: 2025-05-15 00:09:03.964 [INFO][4830] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--f6wff-eth0 coredns-7db6d8ff4d- kube-system 8f6d68a1-7bcc-4d68-9390-ac560659ee14 967 0 2025-05-15 00:08:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-f6wff eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif6db1f071e4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c3d6d83b6a25dac4f67323673a04dc5c89f61b2b66a6f71c69c1be7d6e32c9b1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-f6wff" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--f6wff-" May 15 00:09:04.243547 containerd[1443]: 2025-05-15 00:09:03.964 [INFO][4830] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c3d6d83b6a25dac4f67323673a04dc5c89f61b2b66a6f71c69c1be7d6e32c9b1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-f6wff" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--f6wff-eth0" May 15 00:09:04.243547 containerd[1443]: 2025-05-15 00:09:03.998 [INFO][4855] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c3d6d83b6a25dac4f67323673a04dc5c89f61b2b66a6f71c69c1be7d6e32c9b1" HandleID="k8s-pod-network.c3d6d83b6a25dac4f67323673a04dc5c89f61b2b66a6f71c69c1be7d6e32c9b1" Workload="localhost-k8s-coredns--7db6d8ff4d--f6wff-eth0" May 15 00:09:04.243547 containerd[1443]: 2025-05-15 00:09:04.012 [INFO][4855] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c3d6d83b6a25dac4f67323673a04dc5c89f61b2b66a6f71c69c1be7d6e32c9b1" HandleID="k8s-pod-network.c3d6d83b6a25dac4f67323673a04dc5c89f61b2b66a6f71c69c1be7d6e32c9b1" Workload="localhost-k8s-coredns--7db6d8ff4d--f6wff-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400027b890), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-f6wff", "timestamp":"2025-05-15 00:09:03.998279096 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 00:09:04.243547 containerd[1443]: 2025-05-15 00:09:04.012 [INFO][4855] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:09:04.243547 containerd[1443]: 2025-05-15 00:09:04.155 [INFO][4855] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 00:09:04.243547 containerd[1443]: 2025-05-15 00:09:04.155 [INFO][4855] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 00:09:04.243547 containerd[1443]: 2025-05-15 00:09:04.164 [INFO][4855] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c3d6d83b6a25dac4f67323673a04dc5c89f61b2b66a6f71c69c1be7d6e32c9b1" host="localhost" May 15 00:09:04.243547 containerd[1443]: 2025-05-15 00:09:04.174 [INFO][4855] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 00:09:04.243547 containerd[1443]: 2025-05-15 00:09:04.193 [INFO][4855] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 00:09:04.243547 containerd[1443]: 2025-05-15 00:09:04.202 [INFO][4855] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 00:09:04.243547 containerd[1443]: 2025-05-15 00:09:04.205 [INFO][4855] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 00:09:04.243547 containerd[1443]: 2025-05-15 00:09:04.206 [INFO][4855] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c3d6d83b6a25dac4f67323673a04dc5c89f61b2b66a6f71c69c1be7d6e32c9b1" host="localhost" May 15 00:09:04.243547 containerd[1443]: 2025-05-15 00:09:04.208 [INFO][4855] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c3d6d83b6a25dac4f67323673a04dc5c89f61b2b66a6f71c69c1be7d6e32c9b1 May 15 00:09:04.243547 containerd[1443]: 2025-05-15 00:09:04.215 [INFO][4855] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c3d6d83b6a25dac4f67323673a04dc5c89f61b2b66a6f71c69c1be7d6e32c9b1" host="localhost" May 15 00:09:04.243547 containerd[1443]: 2025-05-15 00:09:04.226 [INFO][4855] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.c3d6d83b6a25dac4f67323673a04dc5c89f61b2b66a6f71c69c1be7d6e32c9b1" host="localhost" May 15 00:09:04.243547 containerd[1443]: 2025-05-15 00:09:04.226 [INFO][4855] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.c3d6d83b6a25dac4f67323673a04dc5c89f61b2b66a6f71c69c1be7d6e32c9b1" host="localhost" May 15 00:09:04.243547 containerd[1443]: 2025-05-15 00:09:04.226 [INFO][4855] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 00:09:04.243547 containerd[1443]: 2025-05-15 00:09:04.226 [INFO][4855] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="c3d6d83b6a25dac4f67323673a04dc5c89f61b2b66a6f71c69c1be7d6e32c9b1" HandleID="k8s-pod-network.c3d6d83b6a25dac4f67323673a04dc5c89f61b2b66a6f71c69c1be7d6e32c9b1" Workload="localhost-k8s-coredns--7db6d8ff4d--f6wff-eth0" May 15 00:09:04.244707 containerd[1443]: 2025-05-15 00:09:04.229 [INFO][4830] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c3d6d83b6a25dac4f67323673a04dc5c89f61b2b66a6f71c69c1be7d6e32c9b1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-f6wff" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--f6wff-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--f6wff-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8f6d68a1-7bcc-4d68-9390-ac560659ee14", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 8, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-f6wff", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif6db1f071e4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:09:04.244707 containerd[1443]: 2025-05-15 00:09:04.229 [INFO][4830] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="c3d6d83b6a25dac4f67323673a04dc5c89f61b2b66a6f71c69c1be7d6e32c9b1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-f6wff" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--f6wff-eth0" May 15 00:09:04.244707 containerd[1443]: 2025-05-15 00:09:04.229 [INFO][4830] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif6db1f071e4 ContainerID="c3d6d83b6a25dac4f67323673a04dc5c89f61b2b66a6f71c69c1be7d6e32c9b1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-f6wff" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--f6wff-eth0" May 15 00:09:04.244707 containerd[1443]: 2025-05-15 00:09:04.230 [INFO][4830] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c3d6d83b6a25dac4f67323673a04dc5c89f61b2b66a6f71c69c1be7d6e32c9b1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-f6wff" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--f6wff-eth0" May 15 00:09:04.244707 containerd[1443]: 2025-05-15 00:09:04.230 
[INFO][4830] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c3d6d83b6a25dac4f67323673a04dc5c89f61b2b66a6f71c69c1be7d6e32c9b1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-f6wff" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--f6wff-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--f6wff-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8f6d68a1-7bcc-4d68-9390-ac560659ee14", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 8, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c3d6d83b6a25dac4f67323673a04dc5c89f61b2b66a6f71c69c1be7d6e32c9b1", Pod:"coredns-7db6d8ff4d-f6wff", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif6db1f071e4", MAC:"a6:42:95:c1:91:7b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:09:04.244707 containerd[1443]: 2025-05-15 00:09:04.241 [INFO][4830] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c3d6d83b6a25dac4f67323673a04dc5c89f61b2b66a6f71c69c1be7d6e32c9b1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-f6wff" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--f6wff-eth0" May 15 00:09:04.265988 systemd[1]: Started cri-containerd-fd57114b02956f814addae92d3ff93eeeab3ad92d0a6e10bc0f53c96f863a370.scope - libcontainer container fd57114b02956f814addae92d3ff93eeeab3ad92d0a6e10bc0f53c96f863a370. May 15 00:09:04.280075 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 00:09:04.291069 containerd[1443]: time="2025-05-15T00:09:04.289552608Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:09:04.291069 containerd[1443]: time="2025-05-15T00:09:04.290124876Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:09:04.291069 containerd[1443]: time="2025-05-15T00:09:04.290137397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:09:04.291928 containerd[1443]: time="2025-05-15T00:09:04.290224881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:09:04.314982 systemd[1]: Started cri-containerd-c3d6d83b6a25dac4f67323673a04dc5c89f61b2b66a6f71c69c1be7d6e32c9b1.scope - libcontainer container c3d6d83b6a25dac4f67323673a04dc5c89f61b2b66a6f71c69c1be7d6e32c9b1. May 15 00:09:04.327086 containerd[1443]: time="2025-05-15T00:09:04.327045580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nrvq4,Uid:262bbf25-d43c-443e-a611-5ff6be2347dc,Namespace:calico-system,Attempt:1,} returns sandbox id \"fd57114b02956f814addae92d3ff93eeeab3ad92d0a6e10bc0f53c96f863a370\"" May 15 00:09:04.330821 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 00:09:04.349564 containerd[1443]: time="2025-05-15T00:09:04.349437807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-f6wff,Uid:8f6d68a1-7bcc-4d68-9390-ac560659ee14,Namespace:kube-system,Attempt:1,} returns sandbox id \"c3d6d83b6a25dac4f67323673a04dc5c89f61b2b66a6f71c69c1be7d6e32c9b1\"" May 15 00:09:04.350229 kubelet[2558]: E0515 00:09:04.350206 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:09:04.352562 containerd[1443]: time="2025-05-15T00:09:04.352526799Z" level=info msg="CreateContainer within sandbox \"c3d6d83b6a25dac4f67323673a04dc5c89f61b2b66a6f71c69c1be7d6e32c9b1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 00:09:04.364722 containerd[1443]: time="2025-05-15T00:09:04.364620957Z" level=info msg="CreateContainer within sandbox \"c3d6d83b6a25dac4f67323673a04dc5c89f61b2b66a6f71c69c1be7d6e32c9b1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8a3ce0e07a8c4f5a0150ba06b71afa158096dbfcd00281eed16d0975980564d3\"" May 15 00:09:04.365849 containerd[1443]: time="2025-05-15T00:09:04.365079459Z" level=info msg="StartContainer for \"8a3ce0e07a8c4f5a0150ba06b71afa158096dbfcd00281eed16d0975980564d3\"" May 15 00:09:04.398980 systemd[1]: Started cri-containerd-8a3ce0e07a8c4f5a0150ba06b71afa158096dbfcd00281eed16d0975980564d3.scope - libcontainer container 8a3ce0e07a8c4f5a0150ba06b71afa158096dbfcd00281eed16d0975980564d3. 
May 15 00:09:04.427684 containerd[1443]: time="2025-05-15T00:09:04.427568747Z" level=info msg="StartContainer for \"8a3ce0e07a8c4f5a0150ba06b71afa158096dbfcd00281eed16d0975980564d3\" returns successfully" May 15 00:09:04.759986 systemd-networkd[1384]: cali0b82fd6ea96: Gained IPv6LL May 15 00:09:04.856632 containerd[1443]: time="2025-05-15T00:09:04.856142200Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:09:04.856632 containerd[1443]: time="2025-05-15T00:09:04.856621663Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" May 15 00:09:04.857629 containerd[1443]: time="2025-05-15T00:09:04.857597352Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:09:04.859950 containerd[1443]: time="2025-05-15T00:09:04.859913026Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:09:04.860707 containerd[1443]: time="2025-05-15T00:09:04.860669703Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 2.081015984s" May 15 00:09:04.860757 containerd[1443]: time="2025-05-15T00:09:04.860706385Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 15 00:09:04.862055 containerd[1443]: time="2025-05-15T00:09:04.861848842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 15 00:09:04.864148 containerd[1443]: time="2025-05-15T00:09:04.864115954Z" level=info msg="CreateContainer within sandbox \"f6b7a358098dc44c785cc0ea662f731d0e7f4ef6d80a70f17dfcd551931e0dfd\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 15 00:09:04.872809 containerd[1443]: time="2025-05-15T00:09:04.872679857Z" level=info msg="CreateContainer within sandbox \"f6b7a358098dc44c785cc0ea662f731d0e7f4ef6d80a70f17dfcd551931e0dfd\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4aeaae24e6e0a68ace0bbd4386ef6664eb3faf71d6a4a36dd2e3c7b3c1acbc16\"" May 15 00:09:04.873197 containerd[1443]: time="2025-05-15T00:09:04.873165281Z" level=info msg="StartContainer for \"4aeaae24e6e0a68ace0bbd4386ef6664eb3faf71d6a4a36dd2e3c7b3c1acbc16\"" May 15 00:09:04.912956 systemd[1]: Started cri-containerd-4aeaae24e6e0a68ace0bbd4386ef6664eb3faf71d6a4a36dd2e3c7b3c1acbc16.scope - libcontainer container 4aeaae24e6e0a68ace0bbd4386ef6664eb3faf71d6a4a36dd2e3c7b3c1acbc16. 
May 15 00:09:04.924301 kubelet[2558]: E0515 00:09:04.923878 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:09:04.924301 kubelet[2558]: E0515 00:09:04.924151 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:09:04.937531 kubelet[2558]: I0515 00:09:04.937393 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-f6wff" podStartSLOduration=31.937377373 podStartE2EDuration="31.937377373s" podCreationTimestamp="2025-05-15 00:08:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:09:04.936859428 +0000 UTC m=+46.278719224" watchObservedRunningTime="2025-05-15 00:09:04.937377373 +0000 UTC m=+46.279237129" May 15 00:09:04.971608 containerd[1443]: time="2025-05-15T00:09:04.971564782Z" level=info msg="StartContainer for \"4aeaae24e6e0a68ace0bbd4386ef6664eb3faf71d6a4a36dd2e3c7b3c1acbc16\" returns successfully" May 15 00:09:05.022723 systemd[1]: Started sshd@12-10.0.0.17:22-10.0.0.1:50808.service - OpenSSH per-connection server daemon (10.0.0.1:50808). May 15 00:09:05.080114 sshd[5083]: Accepted publickey for core from 10.0.0.1 port 50808 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:09:05.082026 sshd[5083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:09:05.087396 systemd-logind[1429]: New session 13 of user core. May 15 00:09:05.092963 systemd[1]: Started session-13.scope - Session 13 of User core. May 15 00:09:05.134027 containerd[1443]: time="2025-05-15T00:09:05.133973651Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:09:05.134468 containerd[1443]: time="2025-05-15T00:09:05.134421273Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 15 00:09:05.137102 containerd[1443]: time="2025-05-15T00:09:05.137064281Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 275.180718ms" May 15 00:09:05.137102 containerd[1443]: time="2025-05-15T00:09:05.137102043Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 15 00:09:05.138305 containerd[1443]: time="2025-05-15T00:09:05.138034248Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 15 00:09:05.139899 containerd[1443]: time="2025-05-15T00:09:05.139864017Z" level=info msg="CreateContainer within sandbox \"826d974a7c9a56c00030dd23900e0d09370247277e2708e3438cc05c5e6400b2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 15 00:09:05.158759 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount424524181.mount: Deactivated successfully. 
May 15 00:09:05.159075 containerd[1443]: time="2025-05-15T00:09:05.159026228Z" level=info msg="CreateContainer within sandbox \"826d974a7c9a56c00030dd23900e0d09370247277e2708e3438cc05c5e6400b2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1974b59a67767df0200ae12570118ee8b39d223bdc962b9fac8d9ae058fd19c3\"" May 15 00:09:05.159704 containerd[1443]: time="2025-05-15T00:09:05.159671059Z" level=info msg="StartContainer for \"1974b59a67767df0200ae12570118ee8b39d223bdc962b9fac8d9ae058fd19c3\"" May 15 00:09:05.201986 systemd[1]: Started cri-containerd-1974b59a67767df0200ae12570118ee8b39d223bdc962b9fac8d9ae058fd19c3.scope - libcontainer container 1974b59a67767df0200ae12570118ee8b39d223bdc962b9fac8d9ae058fd19c3. May 15 00:09:05.258766 containerd[1443]: time="2025-05-15T00:09:05.258726187Z" level=info msg="StartContainer for \"1974b59a67767df0200ae12570118ee8b39d223bdc962b9fac8d9ae058fd19c3\" returns successfully" May 15 00:09:05.297107 sshd[5083]: pam_unix(sshd:session): session closed for user core May 15 00:09:05.302290 systemd[1]: sshd@12-10.0.0.17:22-10.0.0.1:50808.service: Deactivated successfully. May 15 00:09:05.305193 systemd[1]: session-13.scope: Deactivated successfully. May 15 00:09:05.308544 systemd-logind[1429]: Session 13 logged out. Waiting for processes to exit. May 15 00:09:05.309535 systemd-logind[1429]: Removed session 13. May 15 00:09:05.327810 kubelet[2558]: I0515 00:09:05.327427 2558 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 00:09:05.333117 kubelet[2558]: E0515 00:09:05.332923 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:09:05.399888 systemd-networkd[1384]: calif6db1f071e4: Gained IPv6LL May 15 00:09:05.846885 systemd-networkd[1384]: cali89a5c225f25: Gained IPv6LL May 15 00:09:05.932896 kubelet[2558]: E0515 00:09:05.932141 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:09:05.932896 kubelet[2558]: E0515 00:09:05.932739 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:09:05.932896 kubelet[2558]: E0515 00:09:05.932814 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:09:05.944834 kubelet[2558]: I0515 00:09:05.944346 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6876c9b49-8jp5s" podStartSLOduration=24.199243543 podStartE2EDuration="26.94433247s" podCreationTimestamp="2025-05-15 00:08:39 +0000 UTC" firstStartedPulling="2025-05-15 00:09:02.116547864 +0000 UTC m=+43.458407660" lastFinishedPulling="2025-05-15 00:09:04.861636791 +0000 UTC m=+46.203496587" observedRunningTime="2025-05-15 00:09:05.944139821 +0000 UTC m=+47.285999617" watchObservedRunningTime="2025-05-15 00:09:05.94433247 +0000 UTC m=+47.286192266" May 15 00:09:05.960018 kubelet[2558]: I0515 00:09:05.959881 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6876c9b49-s24wt" podStartSLOduration=23.969026373 podStartE2EDuration="26.959440604s" podCreationTimestamp="2025-05-15 
00:08:39 +0000 UTC" firstStartedPulling="2025-05-15 00:09:02.147383766 +0000 UTC m=+43.489243562" lastFinishedPulling="2025-05-15 00:09:05.137797997 +0000 UTC m=+46.479657793" observedRunningTime="2025-05-15 00:09:05.959327198 +0000 UTC m=+47.301187034" watchObservedRunningTime="2025-05-15 00:09:05.959440604 +0000 UTC m=+47.301300440" May 15 00:09:06.163396 kubelet[2558]: I0515 00:09:06.163281 2558 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 00:09:06.490228 kernel: bpftool[5245]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 15 00:09:06.509559 containerd[1443]: time="2025-05-15T00:09:06.509508897Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:09:06.511034 containerd[1443]: time="2025-05-15T00:09:06.510994928Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" May 15 00:09:06.511672 containerd[1443]: time="2025-05-15T00:09:06.511646719Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:09:06.514254 containerd[1443]: time="2025-05-15T00:09:06.514200761Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:09:06.515707 containerd[1443]: time="2025-05-15T00:09:06.515667111Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 1.377599581s" May 15 00:09:06.515767 containerd[1443]: time="2025-05-15T00:09:06.515708633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" May 15 00:09:06.517848 containerd[1443]: time="2025-05-15T00:09:06.517816734Z" level=info msg="CreateContainer within sandbox \"fd57114b02956f814addae92d3ff93eeeab3ad92d0a6e10bc0f53c96f863a370\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 15 00:09:06.542911 containerd[1443]: time="2025-05-15T00:09:06.542861689Z" level=info msg="CreateContainer within sandbox \"fd57114b02956f814addae92d3ff93eeeab3ad92d0a6e10bc0f53c96f863a370\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b188892504a00f7816f046d18e10332ebbdf55b9c4a16f0f309f8e35a469130a\"" May 15 00:09:06.543608 containerd[1443]: time="2025-05-15T00:09:06.543580924Z" level=info msg="StartContainer for \"b188892504a00f7816f046d18e10332ebbdf55b9c4a16f0f309f8e35a469130a\"" May 15 00:09:06.575953 systemd[1]: Started cri-containerd-b188892504a00f7816f046d18e10332ebbdf55b9c4a16f0f309f8e35a469130a.scope - libcontainer container b188892504a00f7816f046d18e10332ebbdf55b9c4a16f0f309f8e35a469130a. 
May 15 00:09:06.603034 containerd[1443]: time="2025-05-15T00:09:06.602983960Z" level=info msg="StartContainer for \"b188892504a00f7816f046d18e10332ebbdf55b9c4a16f0f309f8e35a469130a\" returns successfully" May 15 00:09:06.606110 containerd[1443]: time="2025-05-15T00:09:06.606075947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 15 00:09:06.669288 systemd-networkd[1384]: vxlan.calico: Link UP May 15 00:09:06.669300 systemd-networkd[1384]: vxlan.calico: Gained carrier May 15 00:09:06.936254 kubelet[2558]: I0515 00:09:06.936221 2558 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 00:09:06.936751 kubelet[2558]: I0515 00:09:06.936390 2558 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 00:09:06.936828 kubelet[2558]: E0515 00:09:06.936805 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:09:07.740103 containerd[1443]: time="2025-05-15T00:09:07.740050965Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:09:07.741181 containerd[1443]: time="2025-05-15T00:09:07.741142617Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" May 15 00:09:07.742078 containerd[1443]: time="2025-05-15T00:09:07.742048379Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:09:07.744698 containerd[1443]: time="2025-05-15T00:09:07.744647061Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:09:07.745433 containerd[1443]: time="2025-05-15T00:09:07.745172606Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 1.139059377s" May 15 00:09:07.745433 containerd[1443]: time="2025-05-15T00:09:07.745203807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" May 15 00:09:07.748859 containerd[1443]: time="2025-05-15T00:09:07.748772575Z" level=info msg="CreateContainer within sandbox \"fd57114b02956f814addae92d3ff93eeeab3ad92d0a6e10bc0f53c96f863a370\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 15 00:09:07.760698 containerd[1443]: time="2025-05-15T00:09:07.760642213Z" level=info msg="CreateContainer within sandbox \"fd57114b02956f814addae92d3ff93eeeab3ad92d0a6e10bc0f53c96f863a370\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"37b7d5a4c6dc2929547d167f430d145cfbd6b1ad6bb941c37c14a65f28ab2c92\"" May 15 00:09:07.761201 containerd[1443]: time="2025-05-15T00:09:07.761166077Z" level=info msg="StartContainer for 
\"37b7d5a4c6dc2929547d167f430d145cfbd6b1ad6bb941c37c14a65f28ab2c92\"" May 15 00:09:07.787944 systemd[1]: Started cri-containerd-37b7d5a4c6dc2929547d167f430d145cfbd6b1ad6bb941c37c14a65f28ab2c92.scope - libcontainer container 37b7d5a4c6dc2929547d167f430d145cfbd6b1ad6bb941c37c14a65f28ab2c92. May 15 00:09:07.810795 containerd[1443]: time="2025-05-15T00:09:07.810732566Z" level=info msg="StartContainer for \"37b7d5a4c6dc2929547d167f430d145cfbd6b1ad6bb941c37c14a65f28ab2c92\" returns successfully" May 15 00:09:07.953479 kubelet[2558]: I0515 00:09:07.953414 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-nrvq4" podStartSLOduration=24.535623943 podStartE2EDuration="27.953392549s" podCreationTimestamp="2025-05-15 00:08:40 +0000 UTC" firstStartedPulling="2025-05-15 00:09:04.32825388 +0000 UTC m=+45.670113676" lastFinishedPulling="2025-05-15 00:09:07.746022486 +0000 UTC m=+49.087882282" observedRunningTime="2025-05-15 00:09:07.951406256 +0000 UTC m=+49.293266052" watchObservedRunningTime="2025-05-15 00:09:07.953392549 +0000 UTC m=+49.295252345" May 15 00:09:08.535009 systemd-networkd[1384]: vxlan.calico: Gained IPv6LL May 15 00:09:08.827314 kubelet[2558]: I0515 00:09:08.827213 2558 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 15 00:09:08.827314 kubelet[2558]: I0515 00:09:08.827255 2558 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 15 00:09:10.310347 systemd[1]: Started sshd@13-10.0.0.17:22-10.0.0.1:50816.service - OpenSSH per-connection server daemon (10.0.0.1:50816). May 15 00:09:10.379234 sshd[5402]: Accepted publickey for core from 10.0.0.1 port 50816 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:09:10.381106 sshd[5402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:09:10.385096 systemd-logind[1429]: New session 14 of user core. May 15 00:09:10.393973 systemd[1]: Started session-14.scope - Session 14 of User core. May 15 00:09:10.583155 sshd[5402]: pam_unix(sshd:session): session closed for user core May 15 00:09:10.587232 systemd-logind[1429]: Session 14 logged out. Waiting for processes to exit. May 15 00:09:10.587567 systemd[1]: sshd@13-10.0.0.17:22-10.0.0.1:50816.service: Deactivated successfully. May 15 00:09:10.589425 systemd[1]: session-14.scope: Deactivated successfully. May 15 00:09:10.590206 systemd-logind[1429]: Removed session 14. May 15 00:09:13.460955 kubelet[2558]: I0515 00:09:13.460856 2558 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 00:09:15.594880 systemd[1]: Started sshd@14-10.0.0.17:22-10.0.0.1:59026.service - OpenSSH per-connection server daemon (10.0.0.1:59026). May 15 00:09:15.666215 sshd[5428]: Accepted publickey for core from 10.0.0.1 port 59026 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:09:15.666750 sshd[5428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:09:15.671369 systemd-logind[1429]: New session 15 of user core. May 15 00:09:15.679079 systemd[1]: Started session-15.scope - Session 15 of User core. May 15 00:09:15.819582 sshd[5428]: pam_unix(sshd:session): session closed for user core May 15 00:09:15.825061 systemd[1]: sshd@14-10.0.0.17:22-10.0.0.1:59026.service: Deactivated successfully. 
May 15 00:09:15.827475 systemd[1]: session-15.scope: Deactivated successfully. May 15 00:09:15.828476 systemd-logind[1429]: Session 15 logged out. Waiting for processes to exit. May 15 00:09:15.832612 systemd-logind[1429]: Removed session 15. May 15 00:09:18.729176 containerd[1443]: time="2025-05-15T00:09:18.728824793Z" level=info msg="StopPodSandbox for \"cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961\"" May 15 00:09:18.827440 containerd[1443]: 2025-05-15 00:09:18.778 [WARNING][5465] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--f6wff-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8f6d68a1-7bcc-4d68-9390-ac560659ee14", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 8, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c3d6d83b6a25dac4f67323673a04dc5c89f61b2b66a6f71c69c1be7d6e32c9b1", Pod:"coredns-7db6d8ff4d-f6wff", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif6db1f071e4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:09:18.827440 containerd[1443]: 2025-05-15 00:09:18.778 [INFO][5465] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961" May 15 00:09:18.827440 containerd[1443]: 2025-05-15 00:09:18.778 [INFO][5465] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961" iface="eth0" netns="" May 15 00:09:18.827440 containerd[1443]: 2025-05-15 00:09:18.779 [INFO][5465] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961" May 15 00:09:18.827440 containerd[1443]: 2025-05-15 00:09:18.779 [INFO][5465] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961" May 15 00:09:18.827440 containerd[1443]: 2025-05-15 00:09:18.813 [INFO][5475] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961" HandleID="k8s-pod-network.cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961" Workload="localhost-k8s-coredns--7db6d8ff4d--f6wff-eth0" May 15 00:09:18.827440 containerd[1443]: 2025-05-15 00:09:18.813 [INFO][5475] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:09:18.827440 containerd[1443]: 2025-05-15 00:09:18.813 [INFO][5475] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:09:18.827440 containerd[1443]: 2025-05-15 00:09:18.821 [WARNING][5475] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961" HandleID="k8s-pod-network.cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961" Workload="localhost-k8s-coredns--7db6d8ff4d--f6wff-eth0" May 15 00:09:18.827440 containerd[1443]: 2025-05-15 00:09:18.821 [INFO][5475] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961" HandleID="k8s-pod-network.cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961" Workload="localhost-k8s-coredns--7db6d8ff4d--f6wff-eth0" May 15 00:09:18.827440 containerd[1443]: 2025-05-15 00:09:18.823 [INFO][5475] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:09:18.827440 containerd[1443]: 2025-05-15 00:09:18.825 [INFO][5465] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961" May 15 00:09:18.828007 containerd[1443]: time="2025-05-15T00:09:18.827476660Z" level=info msg="TearDown network for sandbox \"cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961\" successfully" May 15 00:09:18.828007 containerd[1443]: time="2025-05-15T00:09:18.827502701Z" level=info msg="StopPodSandbox for \"cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961\" returns successfully" May 15 00:09:18.828155 containerd[1443]: time="2025-05-15T00:09:18.828096646Z" level=info msg="RemovePodSandbox for \"cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961\"" May 15 00:09:18.828207 containerd[1443]: time="2025-05-15T00:09:18.828160368Z" level=info msg="Forcibly stopping sandbox \"cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961\"" May 15 00:09:18.908137 containerd[1443]: 2025-05-15 00:09:18.867 [WARNING][5498] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--f6wff-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8f6d68a1-7bcc-4d68-9390-ac560659ee14", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 8, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c3d6d83b6a25dac4f67323673a04dc5c89f61b2b66a6f71c69c1be7d6e32c9b1", Pod:"coredns-7db6d8ff4d-f6wff", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif6db1f071e4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:09:18.908137 containerd[1443]: 2025-05-15 00:09:18.867 [INFO][5498] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961" May 15 00:09:18.908137 containerd[1443]: 2025-05-15 00:09:18.867 [INFO][5498] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961" iface="eth0" netns="" May 15 00:09:18.908137 containerd[1443]: 2025-05-15 00:09:18.867 [INFO][5498] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961" May 15 00:09:18.908137 containerd[1443]: 2025-05-15 00:09:18.867 [INFO][5498] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961" May 15 00:09:18.908137 containerd[1443]: 2025-05-15 00:09:18.890 [INFO][5506] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961" HandleID="k8s-pod-network.cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961" Workload="localhost-k8s-coredns--7db6d8ff4d--f6wff-eth0" May 15 00:09:18.908137 containerd[1443]: 2025-05-15 00:09:18.890 [INFO][5506] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:09:18.908137 containerd[1443]: 2025-05-15 00:09:18.890 [INFO][5506] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:09:18.908137 containerd[1443]: 2025-05-15 00:09:18.899 [WARNING][5506] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961" HandleID="k8s-pod-network.cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961" Workload="localhost-k8s-coredns--7db6d8ff4d--f6wff-eth0" May 15 00:09:18.908137 containerd[1443]: 2025-05-15 00:09:18.899 [INFO][5506] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961" HandleID="k8s-pod-network.cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961" Workload="localhost-k8s-coredns--7db6d8ff4d--f6wff-eth0" May 15 00:09:18.908137 containerd[1443]: 2025-05-15 00:09:18.900 [INFO][5506] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:09:18.908137 containerd[1443]: 2025-05-15 00:09:18.902 [INFO][5498] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961" May 15 00:09:18.908137 containerd[1443]: time="2025-05-15T00:09:18.908097864Z" level=info msg="TearDown network for sandbox \"cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961\" successfully" May 15 00:09:18.919314 containerd[1443]: time="2025-05-15T00:09:18.919264404Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 15 00:09:18.919443 containerd[1443]: time="2025-05-15T00:09:18.919338088Z" level=info msg="RemovePodSandbox \"cf4fb1fa6de9c0a6c5433cdbb893e2182676239e029e911bc645543d3f9a5961\" returns successfully" May 15 00:09:18.919872 containerd[1443]: time="2025-05-15T00:09:18.919851149Z" level=info msg="StopPodSandbox for \"f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba\"" May 15 00:09:19.010097 containerd[1443]: 2025-05-15 00:09:18.969 [WARNING][5528] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--57d8fbfc9f--6zbq8-eth0", GenerateName:"calico-kube-controllers-57d8fbfc9f-", Namespace:"calico-system", SelfLink:"", UID:"b55a08bc-9d10-4d47-ac0a-de72fa4ab1ce", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 8, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57d8fbfc9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cd566e9fac0cc75c5c0e49e0b36cf54d28af6625f6a152a7b9b7c3f566b8ab69", Pod:"calico-kube-controllers-57d8fbfc9f-6zbq8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali497f76a107a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:09:19.010097 containerd[1443]: 2025-05-15 00:09:18.969 [INFO][5528] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba" May 15 00:09:19.010097 containerd[1443]: 2025-05-15 00:09:18.969 [INFO][5528] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba" iface="eth0" netns="" May 15 00:09:19.010097 containerd[1443]: 2025-05-15 00:09:18.969 [INFO][5528] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba" May 15 00:09:19.010097 containerd[1443]: 2025-05-15 00:09:18.969 [INFO][5528] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba" May 15 00:09:19.010097 containerd[1443]: 2025-05-15 00:09:18.994 [INFO][5538] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba" HandleID="k8s-pod-network.f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba" Workload="localhost-k8s-calico--kube--controllers--57d8fbfc9f--6zbq8-eth0" May 15 00:09:19.010097 containerd[1443]: 2025-05-15 00:09:18.995 [INFO][5538] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:09:19.010097 containerd[1443]: 2025-05-15 00:09:18.995 [INFO][5538] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:09:19.010097 containerd[1443]: 2025-05-15 00:09:19.003 [WARNING][5538] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba" HandleID="k8s-pod-network.f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba" Workload="localhost-k8s-calico--kube--controllers--57d8fbfc9f--6zbq8-eth0" May 15 00:09:19.010097 containerd[1443]: 2025-05-15 00:09:19.003 [INFO][5538] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba" HandleID="k8s-pod-network.f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba" Workload="localhost-k8s-calico--kube--controllers--57d8fbfc9f--6zbq8-eth0" May 15 00:09:19.010097 containerd[1443]: 2025-05-15 00:09:19.005 [INFO][5538] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:09:19.010097 containerd[1443]: 2025-05-15 00:09:19.008 [INFO][5528] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba" May 15 00:09:19.010097 containerd[1443]: time="2025-05-15T00:09:19.010047825Z" level=info msg="TearDown network for sandbox \"f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba\" successfully" May 15 00:09:19.010097 containerd[1443]: time="2025-05-15T00:09:19.010085466Z" level=info msg="StopPodSandbox for \"f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba\" returns successfully" May 15 00:09:19.010631 containerd[1443]: time="2025-05-15T00:09:19.010601407Z" level=info msg="RemovePodSandbox for \"f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba\"" May 15 00:09:19.010672 containerd[1443]: time="2025-05-15T00:09:19.010640289Z" level=info msg="Forcibly stopping sandbox \"f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba\"" May 15 00:09:19.087338 containerd[1443]: 2025-05-15 00:09:19.052 [WARNING][5560] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--57d8fbfc9f--6zbq8-eth0", GenerateName:"calico-kube-controllers-57d8fbfc9f-", Namespace:"calico-system", SelfLink:"", UID:"b55a08bc-9d10-4d47-ac0a-de72fa4ab1ce", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 8, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57d8fbfc9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cd566e9fac0cc75c5c0e49e0b36cf54d28af6625f6a152a7b9b7c3f566b8ab69", Pod:"calico-kube-controllers-57d8fbfc9f-6zbq8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali497f76a107a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:09:19.087338 containerd[1443]: 2025-05-15 00:09:19.053 [INFO][5560] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba" May 15 00:09:19.087338 containerd[1443]: 2025-05-15 00:09:19.053 [INFO][5560] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba" iface="eth0" netns="" May 15 00:09:19.087338 containerd[1443]: 2025-05-15 00:09:19.053 [INFO][5560] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba" May 15 00:09:19.087338 containerd[1443]: 2025-05-15 00:09:19.053 [INFO][5560] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba" May 15 00:09:19.087338 containerd[1443]: 2025-05-15 00:09:19.073 [INFO][5569] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba" HandleID="k8s-pod-network.f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba" Workload="localhost-k8s-calico--kube--controllers--57d8fbfc9f--6zbq8-eth0" May 15 00:09:19.087338 containerd[1443]: 2025-05-15 00:09:19.074 [INFO][5569] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:09:19.087338 containerd[1443]: 2025-05-15 00:09:19.074 [INFO][5569] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:09:19.087338 containerd[1443]: 2025-05-15 00:09:19.082 [WARNING][5569] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba" HandleID="k8s-pod-network.f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba" Workload="localhost-k8s-calico--kube--controllers--57d8fbfc9f--6zbq8-eth0" May 15 00:09:19.087338 containerd[1443]: 2025-05-15 00:09:19.082 [INFO][5569] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba" HandleID="k8s-pod-network.f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba" Workload="localhost-k8s-calico--kube--controllers--57d8fbfc9f--6zbq8-eth0" May 15 00:09:19.087338 containerd[1443]: 2025-05-15 00:09:19.084 [INFO][5569] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:09:19.087338 containerd[1443]: 2025-05-15 00:09:19.085 [INFO][5560] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba" May 15 00:09:19.087844 containerd[1443]: time="2025-05-15T00:09:19.087381226Z" level=info msg="TearDown network for sandbox \"f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba\" successfully" May 15 00:09:19.093250 containerd[1443]: time="2025-05-15T00:09:19.093187224Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 15 00:09:19.093368 containerd[1443]: time="2025-05-15T00:09:19.093301268Z" level=info msg="RemovePodSandbox \"f946f76c8d9549bd30773e6b0eb58f1eb386858e80bf89f7dcb4ea8f6c7295ba\" returns successfully" May 15 00:09:19.093772 containerd[1443]: time="2025-05-15T00:09:19.093749887Z" level=info msg="StopPodSandbox for \"42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a\"" May 15 00:09:19.164828 containerd[1443]: 2025-05-15 00:09:19.132 [WARNING][5592] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6876c9b49--8jp5s-eth0", GenerateName:"calico-apiserver-6876c9b49-", Namespace:"calico-apiserver", SelfLink:"", UID:"da1e193e-d1e2-4c51-883a-4a7628d9c3e9", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 8, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6876c9b49", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f6b7a358098dc44c785cc0ea662f731d0e7f4ef6d80a70f17dfcd551931e0dfd", Pod:"calico-apiserver-6876c9b49-8jp5s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia7d5d680593", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:09:19.164828 containerd[1443]: 2025-05-15 00:09:19.133 [INFO][5592] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a" May 15 00:09:19.164828 containerd[1443]: 2025-05-15 00:09:19.133 [INFO][5592] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a" iface="eth0" netns="" May 15 00:09:19.164828 containerd[1443]: 2025-05-15 00:09:19.133 [INFO][5592] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a" May 15 00:09:19.164828 containerd[1443]: 2025-05-15 00:09:19.133 [INFO][5592] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a" May 15 00:09:19.164828 containerd[1443]: 2025-05-15 00:09:19.152 [INFO][5600] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a" HandleID="k8s-pod-network.42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a" Workload="localhost-k8s-calico--apiserver--6876c9b49--8jp5s-eth0" May 15 00:09:19.164828 containerd[1443]: 2025-05-15 00:09:19.152 [INFO][5600] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:09:19.164828 containerd[1443]: 2025-05-15 00:09:19.152 [INFO][5600] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:09:19.164828 containerd[1443]: 2025-05-15 00:09:19.160 [WARNING][5600] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a" HandleID="k8s-pod-network.42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a" Workload="localhost-k8s-calico--apiserver--6876c9b49--8jp5s-eth0" May 15 00:09:19.164828 containerd[1443]: 2025-05-15 00:09:19.160 [INFO][5600] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a" HandleID="k8s-pod-network.42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a" Workload="localhost-k8s-calico--apiserver--6876c9b49--8jp5s-eth0" May 15 00:09:19.164828 containerd[1443]: 2025-05-15 00:09:19.162 [INFO][5600] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:09:19.164828 containerd[1443]: 2025-05-15 00:09:19.163 [INFO][5592] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a" May 15 00:09:19.164828 containerd[1443]: time="2025-05-15T00:09:19.164806832Z" level=info msg="TearDown network for sandbox \"42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a\" successfully" May 15 00:09:19.164828 containerd[1443]: time="2025-05-15T00:09:19.164834273Z" level=info msg="StopPodSandbox for \"42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a\" returns successfully" May 15 00:09:19.165362 containerd[1443]: time="2025-05-15T00:09:19.165313772Z" level=info msg="RemovePodSandbox for \"42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a\"" May 15 00:09:19.165362 containerd[1443]: time="2025-05-15T00:09:19.165349694Z" level=info msg="Forcibly stopping sandbox \"42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a\"" May 15 00:09:19.241455 containerd[1443]: 2025-05-15 00:09:19.200 [WARNING][5622] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6876c9b49--8jp5s-eth0", GenerateName:"calico-apiserver-6876c9b49-", Namespace:"calico-apiserver", SelfLink:"", UID:"da1e193e-d1e2-4c51-883a-4a7628d9c3e9", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 8, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6876c9b49", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f6b7a358098dc44c785cc0ea662f731d0e7f4ef6d80a70f17dfcd551931e0dfd", Pod:"calico-apiserver-6876c9b49-8jp5s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia7d5d680593", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:09:19.241455 containerd[1443]: 2025-05-15 00:09:19.201 [INFO][5622] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a" May 15 00:09:19.241455 containerd[1443]: 2025-05-15 00:09:19.201 [INFO][5622] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a" iface="eth0" netns="" May 15 00:09:19.241455 containerd[1443]: 2025-05-15 00:09:19.201 [INFO][5622] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a" May 15 00:09:19.241455 containerd[1443]: 2025-05-15 00:09:19.201 [INFO][5622] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a" May 15 00:09:19.241455 containerd[1443]: 2025-05-15 00:09:19.220 [INFO][5631] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a" HandleID="k8s-pod-network.42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a" Workload="localhost-k8s-calico--apiserver--6876c9b49--8jp5s-eth0" May 15 00:09:19.241455 containerd[1443]: 2025-05-15 00:09:19.220 [INFO][5631] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:09:19.241455 containerd[1443]: 2025-05-15 00:09:19.220 [INFO][5631] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:09:19.241455 containerd[1443]: 2025-05-15 00:09:19.232 [WARNING][5631] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a" HandleID="k8s-pod-network.42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a" Workload="localhost-k8s-calico--apiserver--6876c9b49--8jp5s-eth0" May 15 00:09:19.241455 containerd[1443]: 2025-05-15 00:09:19.232 [INFO][5631] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a" HandleID="k8s-pod-network.42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a" Workload="localhost-k8s-calico--apiserver--6876c9b49--8jp5s-eth0" May 15 00:09:19.241455 containerd[1443]: 2025-05-15 00:09:19.237 [INFO][5631] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:09:19.241455 containerd[1443]: 2025-05-15 00:09:19.238 [INFO][5622] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a" May 15 00:09:19.241915 containerd[1443]: time="2025-05-15T00:09:19.241493967Z" level=info msg="TearDown network for sandbox \"42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a\" successfully" May 15 00:09:19.250101 containerd[1443]: time="2025-05-15T00:09:19.250008955Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 15 00:09:19.250211 containerd[1443]: time="2025-05-15T00:09:19.250133840Z" level=info msg="RemovePodSandbox \"42c872658b7ed16a333db174a2dabab6d476135f97bd0e61ff2ebc91893a8d4a\" returns successfully" May 15 00:09:19.250702 containerd[1443]: time="2025-05-15T00:09:19.250657022Z" level=info msg="StopPodSandbox for \"62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4\"" May 15 00:09:19.320224 containerd[1443]: 2025-05-15 00:09:19.285 [WARNING][5653] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--jxzt2-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2b7a008e-9f04-4b23-afda-16075b376325", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 8, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"38f61b4389251d06edf38f89498115af1d11ae5584c76a4a1be9256d10727a99", Pod:"coredns-7db6d8ff4d-jxzt2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0b82fd6ea96", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:09:19.320224 containerd[1443]: 2025-05-15 00:09:19.285 [INFO][5653] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4" May 15 00:09:19.320224 containerd[1443]: 2025-05-15 00:09:19.286 [INFO][5653] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4" iface="eth0" netns="" May 15 00:09:19.320224 containerd[1443]: 2025-05-15 00:09:19.286 [INFO][5653] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4" May 15 00:09:19.320224 containerd[1443]: 2025-05-15 00:09:19.286 [INFO][5653] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4" May 15 00:09:19.320224 containerd[1443]: 2025-05-15 00:09:19.307 [INFO][5661] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4" HandleID="k8s-pod-network.62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4" Workload="localhost-k8s-coredns--7db6d8ff4d--jxzt2-eth0" May 15 00:09:19.320224 containerd[1443]: 2025-05-15 00:09:19.307 [INFO][5661] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:09:19.320224 containerd[1443]: 2025-05-15 00:09:19.307 [INFO][5661] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:09:19.320224 containerd[1443]: 2025-05-15 00:09:19.315 [WARNING][5661] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4" HandleID="k8s-pod-network.62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4" Workload="localhost-k8s-coredns--7db6d8ff4d--jxzt2-eth0" May 15 00:09:19.320224 containerd[1443]: 2025-05-15 00:09:19.315 [INFO][5661] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4" HandleID="k8s-pod-network.62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4" Workload="localhost-k8s-coredns--7db6d8ff4d--jxzt2-eth0" May 15 00:09:19.320224 containerd[1443]: 2025-05-15 00:09:19.317 [INFO][5661] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:09:19.320224 containerd[1443]: 2025-05-15 00:09:19.318 [INFO][5653] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4" May 15 00:09:19.320622 containerd[1443]: time="2025-05-15T00:09:19.320260307Z" level=info msg="TearDown network for sandbox \"62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4\" successfully" May 15 00:09:19.320622 containerd[1443]: time="2025-05-15T00:09:19.320287268Z" level=info msg="StopPodSandbox for \"62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4\" returns successfully" May 15 00:09:19.320810 containerd[1443]: time="2025-05-15T00:09:19.320759087Z" level=info msg="RemovePodSandbox for \"62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4\"" May 15 00:09:19.320810 containerd[1443]: time="2025-05-15T00:09:19.320806969Z" level=info msg="Forcibly stopping sandbox \"62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4\"" May 15 00:09:19.389921 containerd[1443]: 2025-05-15 00:09:19.357 [WARNING][5683] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--jxzt2-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2b7a008e-9f04-4b23-afda-16075b376325", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 8, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"38f61b4389251d06edf38f89498115af1d11ae5584c76a4a1be9256d10727a99", Pod:"coredns-7db6d8ff4d-jxzt2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0b82fd6ea96", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:09:19.389921 containerd[1443]: 2025-05-15 00:09:19.357 [INFO][5683] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4" May 15 00:09:19.389921 containerd[1443]: 2025-05-15 00:09:19.357 [INFO][5683] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4" iface="eth0" netns="" May 15 00:09:19.389921 containerd[1443]: 2025-05-15 00:09:19.357 [INFO][5683] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4" May 15 00:09:19.389921 containerd[1443]: 2025-05-15 00:09:19.357 [INFO][5683] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4" May 15 00:09:19.389921 containerd[1443]: 2025-05-15 00:09:19.376 [INFO][5692] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4" HandleID="k8s-pod-network.62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4" Workload="localhost-k8s-coredns--7db6d8ff4d--jxzt2-eth0" May 15 00:09:19.389921 containerd[1443]: 2025-05-15 00:09:19.376 [INFO][5692] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:09:19.389921 containerd[1443]: 2025-05-15 00:09:19.377 [INFO][5692] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:09:19.389921 containerd[1443]: 2025-05-15 00:09:19.385 [WARNING][5692] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4" HandleID="k8s-pod-network.62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4" Workload="localhost-k8s-coredns--7db6d8ff4d--jxzt2-eth0" May 15 00:09:19.389921 containerd[1443]: 2025-05-15 00:09:19.385 [INFO][5692] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4" HandleID="k8s-pod-network.62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4" Workload="localhost-k8s-coredns--7db6d8ff4d--jxzt2-eth0" May 15 00:09:19.389921 containerd[1443]: 2025-05-15 00:09:19.386 [INFO][5692] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:09:19.389921 containerd[1443]: 2025-05-15 00:09:19.388 [INFO][5683] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4" May 15 00:09:19.390342 containerd[1443]: time="2025-05-15T00:09:19.389977797Z" level=info msg="TearDown network for sandbox \"62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4\" successfully" May 15 00:09:19.393363 containerd[1443]: time="2025-05-15T00:09:19.393323494Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 15 00:09:19.393431 containerd[1443]: time="2025-05-15T00:09:19.393390497Z" level=info msg="RemovePodSandbox \"62775aaee39f310104718df5e05b10aa0a1651620cd787dfc5aa26bf2403b6b4\" returns successfully" May 15 00:09:19.393898 containerd[1443]: time="2025-05-15T00:09:19.393873797Z" level=info msg="StopPodSandbox for \"a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6\"" May 15 00:09:19.463378 containerd[1443]: 2025-05-15 00:09:19.429 [WARNING][5714] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6876c9b49--s24wt-eth0", GenerateName:"calico-apiserver-6876c9b49-", Namespace:"calico-apiserver", SelfLink:"", UID:"5c1a3ff2-9a98-4275-87ea-6992a522449a", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 8, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6876c9b49", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"826d974a7c9a56c00030dd23900e0d09370247277e2708e3438cc05c5e6400b2", Pod:"calico-apiserver-6876c9b49-s24wt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif4e03c90393", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:09:19.463378 containerd[1443]: 2025-05-15 00:09:19.429 [INFO][5714] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6" May 15 00:09:19.463378 containerd[1443]: 2025-05-15 00:09:19.429 [INFO][5714] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6" iface="eth0" netns="" May 15 00:09:19.463378 containerd[1443]: 2025-05-15 00:09:19.429 [INFO][5714] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6" May 15 00:09:19.463378 containerd[1443]: 2025-05-15 00:09:19.429 [INFO][5714] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6" May 15 00:09:19.463378 containerd[1443]: 2025-05-15 00:09:19.447 [INFO][5723] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6" HandleID="k8s-pod-network.a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6" Workload="localhost-k8s-calico--apiserver--6876c9b49--s24wt-eth0" May 15 00:09:19.463378 containerd[1443]: 2025-05-15 00:09:19.447 [INFO][5723] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:09:19.463378 containerd[1443]: 2025-05-15 00:09:19.447 [INFO][5723] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:09:19.463378 containerd[1443]: 2025-05-15 00:09:19.457 [WARNING][5723] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6" HandleID="k8s-pod-network.a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6" Workload="localhost-k8s-calico--apiserver--6876c9b49--s24wt-eth0" May 15 00:09:19.463378 containerd[1443]: 2025-05-15 00:09:19.457 [INFO][5723] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6" HandleID="k8s-pod-network.a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6" Workload="localhost-k8s-calico--apiserver--6876c9b49--s24wt-eth0" May 15 00:09:19.463378 containerd[1443]: 2025-05-15 00:09:19.459 [INFO][5723] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:09:19.463378 containerd[1443]: 2025-05-15 00:09:19.461 [INFO][5714] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6" May 15 00:09:19.463819 containerd[1443]: time="2025-05-15T00:09:19.463413880Z" level=info msg="TearDown network for sandbox \"a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6\" successfully" May 15 00:09:19.463819 containerd[1443]: time="2025-05-15T00:09:19.463445121Z" level=info msg="StopPodSandbox for \"a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6\" returns successfully" May 15 00:09:19.463921 containerd[1443]: time="2025-05-15T00:09:19.463892739Z" level=info msg="RemovePodSandbox for \"a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6\"" May 15 00:09:19.463948 containerd[1443]: time="2025-05-15T00:09:19.463929581Z" level=info msg="Forcibly stopping sandbox \"a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6\"" May 15 00:09:19.528205 containerd[1443]: 2025-05-15 00:09:19.497 [WARNING][5746] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6876c9b49--s24wt-eth0", GenerateName:"calico-apiserver-6876c9b49-", Namespace:"calico-apiserver", SelfLink:"", UID:"5c1a3ff2-9a98-4275-87ea-6992a522449a", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 8, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6876c9b49", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"826d974a7c9a56c00030dd23900e0d09370247277e2708e3438cc05c5e6400b2", Pod:"calico-apiserver-6876c9b49-s24wt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif4e03c90393", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:09:19.528205 containerd[1443]: 2025-05-15 00:09:19.497 [INFO][5746] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6" May 15 00:09:19.528205 containerd[1443]: 2025-05-15 00:09:19.497 [INFO][5746] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6" iface="eth0" netns="" May 15 00:09:19.528205 containerd[1443]: 2025-05-15 00:09:19.497 [INFO][5746] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6" May 15 00:09:19.528205 containerd[1443]: 2025-05-15 00:09:19.497 [INFO][5746] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6" May 15 00:09:19.528205 containerd[1443]: 2025-05-15 00:09:19.515 [INFO][5754] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6" HandleID="k8s-pod-network.a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6" Workload="localhost-k8s-calico--apiserver--6876c9b49--s24wt-eth0" May 15 00:09:19.528205 containerd[1443]: 2025-05-15 00:09:19.515 [INFO][5754] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:09:19.528205 containerd[1443]: 2025-05-15 00:09:19.515 [INFO][5754] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:09:19.528205 containerd[1443]: 2025-05-15 00:09:19.523 [WARNING][5754] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6" HandleID="k8s-pod-network.a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6" Workload="localhost-k8s-calico--apiserver--6876c9b49--s24wt-eth0" May 15 00:09:19.528205 containerd[1443]: 2025-05-15 00:09:19.523 [INFO][5754] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6" HandleID="k8s-pod-network.a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6" Workload="localhost-k8s-calico--apiserver--6876c9b49--s24wt-eth0" May 15 00:09:19.528205 containerd[1443]: 2025-05-15 00:09:19.525 [INFO][5754] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:09:19.528205 containerd[1443]: 2025-05-15 00:09:19.526 [INFO][5746] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6" May 15 00:09:19.528582 containerd[1443]: time="2025-05-15T00:09:19.528240850Z" level=info msg="TearDown network for sandbox \"a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6\" successfully" May 15 00:09:19.530879 containerd[1443]: time="2025-05-15T00:09:19.530848277Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 15 00:09:19.530930 containerd[1443]: time="2025-05-15T00:09:19.530905079Z" level=info msg="RemovePodSandbox \"a48af50a8997dc55236d41d21ceb3a9e2977d5f374988e0f92db90c1d0a90ca6\" returns successfully" May 15 00:09:19.531427 containerd[1443]: time="2025-05-15T00:09:19.531400499Z" level=info msg="StopPodSandbox for \"c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6\"" May 15 00:09:19.593896 containerd[1443]: 2025-05-15 00:09:19.563 [WARNING][5778] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--nrvq4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"262bbf25-d43c-443e-a611-5ff6be2347dc", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 8, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fd57114b02956f814addae92d3ff93eeeab3ad92d0a6e10bc0f53c96f863a370", Pod:"csi-node-driver-nrvq4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali89a5c225f25", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:09:19.593896 containerd[1443]: 2025-05-15 00:09:19.563 [INFO][5778] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6" May 15 00:09:19.593896 containerd[1443]: 2025-05-15 00:09:19.563 [INFO][5778] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6" iface="eth0" netns="" May 15 00:09:19.593896 containerd[1443]: 2025-05-15 00:09:19.563 [INFO][5778] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6" May 15 00:09:19.593896 containerd[1443]: 2025-05-15 00:09:19.563 [INFO][5778] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6" May 15 00:09:19.593896 containerd[1443]: 2025-05-15 00:09:19.581 [INFO][5786] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6" HandleID="k8s-pod-network.c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6" Workload="localhost-k8s-csi--node--driver--nrvq4-eth0" May 15 00:09:19.593896 containerd[1443]: 2025-05-15 00:09:19.581 [INFO][5786] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:09:19.593896 containerd[1443]: 2025-05-15 00:09:19.581 [INFO][5786] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:09:19.593896 containerd[1443]: 2025-05-15 00:09:19.589 [WARNING][5786] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6" HandleID="k8s-pod-network.c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6" Workload="localhost-k8s-csi--node--driver--nrvq4-eth0" May 15 00:09:19.593896 containerd[1443]: 2025-05-15 00:09:19.589 [INFO][5786] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6" HandleID="k8s-pod-network.c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6" Workload="localhost-k8s-csi--node--driver--nrvq4-eth0" May 15 00:09:19.593896 containerd[1443]: 2025-05-15 00:09:19.591 [INFO][5786] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:09:19.593896 containerd[1443]: 2025-05-15 00:09:19.592 [INFO][5778] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6" May 15 00:09:19.593896 containerd[1443]: time="2025-05-15T00:09:19.593867333Z" level=info msg="TearDown network for sandbox \"c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6\" successfully" May 15 00:09:19.593896 containerd[1443]: time="2025-05-15T00:09:19.593891734Z" level=info msg="StopPodSandbox for \"c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6\" returns successfully" May 15 00:09:19.594374 containerd[1443]: time="2025-05-15T00:09:19.594336752Z" level=info msg="RemovePodSandbox for \"c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6\"" May 15 00:09:19.594408 containerd[1443]: time="2025-05-15T00:09:19.594375274Z" level=info msg="Forcibly stopping sandbox \"c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6\"" May 15 00:09:19.664096 containerd[1443]: 2025-05-15 00:09:19.633 [WARNING][5808] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--nrvq4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"262bbf25-d43c-443e-a611-5ff6be2347dc", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 8, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fd57114b02956f814addae92d3ff93eeeab3ad92d0a6e10bc0f53c96f863a370", Pod:"csi-node-driver-nrvq4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali89a5c225f25", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:09:19.664096 containerd[1443]: 2025-05-15 00:09:19.633 [INFO][5808] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6" May 15 00:09:19.664096 containerd[1443]: 2025-05-15 00:09:19.633 [INFO][5808] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6" iface="eth0" netns="" May 15 00:09:19.664096 containerd[1443]: 2025-05-15 00:09:19.633 [INFO][5808] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6" May 15 00:09:19.664096 containerd[1443]: 2025-05-15 00:09:19.633 [INFO][5808] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6" May 15 00:09:19.664096 containerd[1443]: 2025-05-15 00:09:19.651 [INFO][5816] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6" HandleID="k8s-pod-network.c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6" Workload="localhost-k8s-csi--node--driver--nrvq4-eth0" May 15 00:09:19.664096 containerd[1443]: 2025-05-15 00:09:19.651 [INFO][5816] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:09:19.664096 containerd[1443]: 2025-05-15 00:09:19.651 [INFO][5816] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:09:19.664096 containerd[1443]: 2025-05-15 00:09:19.659 [WARNING][5816] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6" HandleID="k8s-pod-network.c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6" Workload="localhost-k8s-csi--node--driver--nrvq4-eth0" May 15 00:09:19.664096 containerd[1443]: 2025-05-15 00:09:19.660 [INFO][5816] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6" HandleID="k8s-pod-network.c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6" Workload="localhost-k8s-csi--node--driver--nrvq4-eth0" May 15 00:09:19.664096 containerd[1443]: 2025-05-15 00:09:19.661 [INFO][5816] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:09:19.664096 containerd[1443]: 2025-05-15 00:09:19.662 [INFO][5808] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6" May 15 00:09:19.664505 containerd[1443]: time="2025-05-15T00:09:19.664142806Z" level=info msg="TearDown network for sandbox \"c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6\" successfully" May 15 00:09:19.666887 containerd[1443]: time="2025-05-15T00:09:19.666852077Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 15 00:09:19.666952 containerd[1443]: time="2025-05-15T00:09:19.666914039Z" level=info msg="RemovePodSandbox \"c3c69ffba811b4e584af2a2bf063edd0af517ab666b60ec5042ccbaa4a0ecff6\" returns successfully" May 15 00:09:20.830963 systemd[1]: Started sshd@15-10.0.0.17:22-10.0.0.1:59030.service - OpenSSH per-connection server daemon (10.0.0.1:59030). May 15 00:09:20.887876 sshd[5825]: Accepted publickey for core from 10.0.0.1 port 59030 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:09:20.889132 sshd[5825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:09:20.893223 systemd-logind[1429]: New session 16 of user core. May 15 00:09:20.904768 systemd[1]: Started session-16.scope - Session 16 of User core. May 15 00:09:21.061542 sshd[5825]: pam_unix(sshd:session): session closed for user core May 15 00:09:21.071422 systemd[1]: sshd@15-10.0.0.17:22-10.0.0.1:59030.service: Deactivated successfully. May 15 00:09:21.073302 systemd[1]: session-16.scope: Deactivated successfully. May 15 00:09:21.076197 systemd-logind[1429]: Session 16 logged out. Waiting for processes to exit. May 15 00:09:21.081156 systemd[1]: Started sshd@16-10.0.0.17:22-10.0.0.1:59034.service - OpenSSH per-connection server daemon (10.0.0.1:59034). May 15 00:09:21.083258 systemd-logind[1429]: Removed session 16. May 15 00:09:21.113706 sshd[5839]: Accepted publickey for core from 10.0.0.1 port 59034 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:09:21.115059 sshd[5839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:09:21.118951 systemd-logind[1429]: New session 17 of user core. May 15 00:09:21.127957 systemd[1]: Started session-17.scope - Session 17 of User core. May 15 00:09:21.386737 sshd[5839]: pam_unix(sshd:session): session closed for user core May 15 00:09:21.395478 systemd[1]: sshd@16-10.0.0.17:22-10.0.0.1:59034.service: Deactivated successfully. May 15 00:09:21.398023 systemd[1]: session-17.scope: Deactivated successfully. 
May 15 00:09:21.399425 systemd-logind[1429]: Session 17 logged out. Waiting for processes to exit. May 15 00:09:21.408143 systemd[1]: Started sshd@17-10.0.0.17:22-10.0.0.1:59046.service - OpenSSH per-connection server daemon (10.0.0.1:59046). May 15 00:09:21.409456 systemd-logind[1429]: Removed session 17. May 15 00:09:21.458561 sshd[5851]: Accepted publickey for core from 10.0.0.1 port 59046 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:09:21.460981 sshd[5851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:09:21.465007 systemd-logind[1429]: New session 18 of user core. May 15 00:09:21.475990 systemd[1]: Started session-18.scope - Session 18 of User core. May 15 00:09:23.102568 sshd[5851]: pam_unix(sshd:session): session closed for user core May 15 00:09:23.113975 systemd[1]: sshd@17-10.0.0.17:22-10.0.0.1:59046.service: Deactivated successfully. May 15 00:09:23.119412 systemd[1]: session-18.scope: Deactivated successfully. May 15 00:09:23.121547 systemd-logind[1429]: Session 18 logged out. Waiting for processes to exit. May 15 00:09:23.129348 systemd[1]: Started sshd@18-10.0.0.17:22-10.0.0.1:38742.service - OpenSSH per-connection server daemon (10.0.0.1:38742). May 15 00:09:23.131165 systemd-logind[1429]: Removed session 18. May 15 00:09:23.173022 sshd[5870]: Accepted publickey for core from 10.0.0.1 port 38742 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:09:23.174295 sshd[5870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:09:23.178269 systemd-logind[1429]: New session 19 of user core. May 15 00:09:23.186038 systemd[1]: Started session-19.scope - Session 19 of User core. May 15 00:09:23.447960 sshd[5870]: pam_unix(sshd:session): session closed for user core May 15 00:09:23.456953 systemd[1]: sshd@18-10.0.0.17:22-10.0.0.1:38742.service: Deactivated successfully. May 15 00:09:23.458675 systemd[1]: session-19.scope: Deactivated successfully. May 15 00:09:23.460882 systemd-logind[1429]: Session 19 logged out. Waiting for processes to exit. May 15 00:09:23.468341 systemd[1]: Started sshd@19-10.0.0.17:22-10.0.0.1:38756.service - OpenSSH per-connection server daemon (10.0.0.1:38756). May 15 00:09:23.470477 systemd-logind[1429]: Removed session 19. May 15 00:09:23.499459 sshd[5882]: Accepted publickey for core from 10.0.0.1 port 38756 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:09:23.500961 sshd[5882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:09:23.506847 systemd-logind[1429]: New session 20 of user core. May 15 00:09:23.512952 systemd[1]: Started session-20.scope - Session 20 of User core. May 15 00:09:23.656951 sshd[5882]: pam_unix(sshd:session): session closed for user core May 15 00:09:23.658701 kubelet[2558]: E0515 00:09:23.658662 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:09:23.662058 systemd[1]: sshd@19-10.0.0.17:22-10.0.0.1:38756.service: Deactivated successfully. May 15 00:09:23.665384 systemd[1]: session-20.scope: Deactivated successfully. May 15 00:09:23.665976 systemd-logind[1429]: Session 20 logged out. Waiting for processes to exit. May 15 00:09:23.667187 systemd-logind[1429]: Removed session 20. May 15 00:09:28.668737 systemd[1]: Started sshd@20-10.0.0.17:22-10.0.0.1:38758.service - OpenSSH per-connection server daemon (10.0.0.1:38758). 
May 15 00:09:28.703248 sshd[5930]: Accepted publickey for core from 10.0.0.1 port 38758 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:09:28.704554 sshd[5930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:09:28.708239 systemd-logind[1429]: New session 21 of user core. May 15 00:09:28.718922 systemd[1]: Started session-21.scope - Session 21 of User core. May 15 00:09:28.865255 sshd[5930]: pam_unix(sshd:session): session closed for user core May 15 00:09:28.868395 systemd-logind[1429]: Session 21 logged out. Waiting for processes to exit. May 15 00:09:28.868583 systemd[1]: sshd@20-10.0.0.17:22-10.0.0.1:38758.service: Deactivated successfully. May 15 00:09:28.870355 systemd[1]: session-21.scope: Deactivated successfully. May 15 00:09:28.871433 systemd-logind[1429]: Removed session 21. May 15 00:09:33.880101 systemd[1]: Started sshd@21-10.0.0.17:22-10.0.0.1:45242.service - OpenSSH per-connection server daemon (10.0.0.1:45242). May 15 00:09:33.935806 sshd[5948]: Accepted publickey for core from 10.0.0.1 port 45242 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:09:33.936728 sshd[5948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:09:33.942973 systemd-logind[1429]: New session 22 of user core. May 15 00:09:33.953001 systemd[1]: Started session-22.scope - Session 22 of User core. May 15 00:09:34.115045 sshd[5948]: pam_unix(sshd:session): session closed for user core May 15 00:09:34.118302 systemd[1]: sshd@21-10.0.0.17:22-10.0.0.1:45242.service: Deactivated successfully. May 15 00:09:34.120755 systemd[1]: session-22.scope: Deactivated successfully. May 15 00:09:34.121335 systemd-logind[1429]: Session 22 logged out. Waiting for processes to exit. May 15 00:09:34.122284 systemd-logind[1429]: Removed session 22. May 15 00:09:36.737591 kubelet[2558]: E0515 00:09:36.737544 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:09:39.126054 systemd[1]: Started sshd@22-10.0.0.17:22-10.0.0.1:45244.service - OpenSSH per-connection server daemon (10.0.0.1:45244). May 15 00:09:39.162289 sshd[5983]: Accepted publickey for core from 10.0.0.1 port 45244 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:09:39.163581 sshd[5983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:09:39.168732 systemd-logind[1429]: New session 23 of user core. May 15 00:09:39.177995 systemd[1]: Started session-23.scope - Session 23 of User core. May 15 00:09:39.305565 sshd[5983]: pam_unix(sshd:session): session closed for user core May 15 00:09:39.308873 systemd[1]: sshd@22-10.0.0.17:22-10.0.0.1:45244.service: Deactivated successfully. May 15 00:09:39.310634 systemd[1]: session-23.scope: Deactivated successfully. May 15 00:09:39.312383 systemd-logind[1429]: Session 23 logged out. Waiting for processes to exit. May 15 00:09:39.313187 systemd-logind[1429]: Removed session 23.