May 15 00:06:08.907355 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 15 00:06:08.907378 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Wed May 14 22:17:23 -00 2025
May 15 00:06:08.907388 kernel: KASLR enabled
May 15 00:06:08.907394 kernel: efi: EFI v2.7 by EDK II
May 15 00:06:08.907400 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
May 15 00:06:08.907405 kernel: random: crng init done
May 15 00:06:08.907412 kernel: secureboot: Secure boot disabled
May 15 00:06:08.907418 kernel: ACPI: Early table checksum verification disabled
May 15 00:06:08.907424 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
May 15 00:06:08.907431 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
May 15 00:06:08.907437 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 15 00:06:08.907443 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 15 00:06:08.907448 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 15 00:06:08.907454 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 15 00:06:08.907461 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 15 00:06:08.907469 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 00:06:08.907475 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 15 00:06:08.907481 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 15 00:06:08.907487 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 15 00:06:08.907493 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 15 00:06:08.907499 kernel: NUMA: Failed to initialise from firmware
May 15 00:06:08.907505 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 15 00:06:08.907511 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
May 15 00:06:08.907517 kernel: Zone ranges:
May 15 00:06:08.907523 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 15 00:06:08.907530 kernel: DMA32 empty
May 15 00:06:08.907536 kernel: Normal empty
May 15 00:06:08.907542 kernel: Movable zone start for each node
May 15 00:06:08.907548 kernel: Early memory node ranges
May 15 00:06:08.907554 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
May 15 00:06:08.907561 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
May 15 00:06:08.907567 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
May 15 00:06:08.907573 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
May 15 00:06:08.907579 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
May 15 00:06:08.907596 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 15 00:06:08.907603 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 15 00:06:08.907609 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 15 00:06:08.907617 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 15 00:06:08.907623 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 15 00:06:08.907629 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 15 00:06:08.907638 kernel: psci: probing for conduit method from ACPI.
May 15 00:06:08.907645 kernel: psci: PSCIv1.1 detected in firmware.
May 15 00:06:08.907651 kernel: psci: Using standard PSCI v0.2 function IDs
May 15 00:06:08.907659 kernel: psci: Trusted OS migration not required
May 15 00:06:08.907666 kernel: psci: SMC Calling Convention v1.1
May 15 00:06:08.907672 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 15 00:06:08.907678 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 15 00:06:08.907685 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 15 00:06:08.907691 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 15 00:06:08.907698 kernel: Detected PIPT I-cache on CPU0
May 15 00:06:08.907704 kernel: CPU features: detected: GIC system register CPU interface
May 15 00:06:08.907711 kernel: CPU features: detected: Hardware dirty bit management
May 15 00:06:08.907717 kernel: CPU features: detected: Spectre-v4
May 15 00:06:08.907725 kernel: CPU features: detected: Spectre-BHB
May 15 00:06:08.907731 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 15 00:06:08.907738 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 15 00:06:08.907744 kernel: CPU features: detected: ARM erratum 1418040
May 15 00:06:08.907750 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 15 00:06:08.907757 kernel: alternatives: applying boot alternatives
May 15 00:06:08.907764 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e480c7900a171de0fa6cd5a3274267ba91118ae5fbe1e4dae15bc86928fa4899
May 15 00:06:08.907771 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 15 00:06:08.907777 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 15 00:06:08.907784 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 15 00:06:08.907790 kernel: Fallback order for Node 0: 0
May 15 00:06:08.907798 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 15 00:06:08.907804 kernel: Policy zone: DMA
May 15 00:06:08.907810 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 15 00:06:08.907817 kernel: software IO TLB: area num 4.
May 15 00:06:08.907832 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
May 15 00:06:08.907839 kernel: Memory: 2387348K/2572288K available (10368K kernel code, 2186K rwdata, 8100K rodata, 38464K init, 897K bss, 184940K reserved, 0K cma-reserved)
May 15 00:06:08.907846 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 15 00:06:08.907852 kernel: rcu: Preemptible hierarchical RCU implementation.
May 15 00:06:08.907859 kernel: rcu: RCU event tracing is enabled.
May 15 00:06:08.907865 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 15 00:06:08.907872 kernel: Trampoline variant of Tasks RCU enabled.
May 15 00:06:08.907878 kernel: Tracing variant of Tasks RCU enabled.
May 15 00:06:08.907887 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 15 00:06:08.907893 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 15 00:06:08.907900 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 15 00:06:08.907906 kernel: GICv3: 256 SPIs implemented
May 15 00:06:08.907912 kernel: GICv3: 0 Extended SPIs implemented
May 15 00:06:08.907919 kernel: Root IRQ handler: gic_handle_irq
May 15 00:06:08.907925 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 15 00:06:08.907931 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 15 00:06:08.907937 kernel: ITS [mem 0x08080000-0x0809ffff]
May 15 00:06:08.907944 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
May 15 00:06:08.907950 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
May 15 00:06:08.907958 kernel: GICv3: using LPI property table @0x00000000400f0000
May 15 00:06:08.907965 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
May 15 00:06:08.907971 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 15 00:06:08.907987 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 15 00:06:08.907994 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 15 00:06:08.908001 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 15 00:06:08.908007 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 15 00:06:08.908014 kernel: arm-pv: using stolen time PV
May 15 00:06:08.908020 kernel: Console: colour dummy device 80x25
May 15 00:06:08.908030 kernel: ACPI: Core revision 20230628
May 15 00:06:08.908037 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 15 00:06:08.908045 kernel: pid_max: default: 32768 minimum: 301
May 15 00:06:08.908052 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 15 00:06:08.908058 kernel: landlock: Up and running.
May 15 00:06:08.908065 kernel: SELinux: Initializing.
May 15 00:06:08.908071 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 15 00:06:08.908078 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 15 00:06:08.908084 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 15 00:06:08.908091 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 15 00:06:08.908098 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 15 00:06:08.908109 kernel: rcu: Hierarchical SRCU implementation.
May 15 00:06:08.908116 kernel: rcu: Max phase no-delay instances is 400.
May 15 00:06:08.908123 kernel: Platform MSI: ITS@0x8080000 domain created
May 15 00:06:08.908129 kernel: PCI/MSI: ITS@0x8080000 domain created
May 15 00:06:08.908136 kernel: Remapping and enabling EFI services.
May 15 00:06:08.908142 kernel: smp: Bringing up secondary CPUs ...
May 15 00:06:08.908149 kernel: Detected PIPT I-cache on CPU1
May 15 00:06:08.908155 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 15 00:06:08.908162 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
May 15 00:06:08.908170 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 15 00:06:08.908177 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 15 00:06:08.908188 kernel: Detected PIPT I-cache on CPU2
May 15 00:06:08.908196 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 15 00:06:08.908203 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
May 15 00:06:08.908210 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 15 00:06:08.908217 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 15 00:06:08.908224 kernel: Detected PIPT I-cache on CPU3
May 15 00:06:08.908230 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 15 00:06:08.908238 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
May 15 00:06:08.908246 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 15 00:06:08.908253 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 15 00:06:08.908259 kernel: smp: Brought up 1 node, 4 CPUs
May 15 00:06:08.908266 kernel: SMP: Total of 4 processors activated.
May 15 00:06:08.908273 kernel: CPU features: detected: 32-bit EL0 Support
May 15 00:06:08.908280 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 15 00:06:08.908287 kernel: CPU features: detected: Common not Private translations
May 15 00:06:08.908295 kernel: CPU features: detected: CRC32 instructions
May 15 00:06:08.908302 kernel: CPU features: detected: Enhanced Virtualization Traps
May 15 00:06:08.908309 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 15 00:06:08.908321 kernel: CPU features: detected: LSE atomic instructions
May 15 00:06:08.908328 kernel: CPU features: detected: Privileged Access Never
May 15 00:06:08.908335 kernel: CPU features: detected: RAS Extension Support
May 15 00:06:08.908341 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 15 00:06:08.908348 kernel: CPU: All CPU(s) started at EL1
May 15 00:06:08.908355 kernel: alternatives: applying system-wide alternatives
May 15 00:06:08.908364 kernel: devtmpfs: initialized
May 15 00:06:08.908371 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 15 00:06:08.908378 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 15 00:06:08.908385 kernel: pinctrl core: initialized pinctrl subsystem
May 15 00:06:08.908392 kernel: SMBIOS 3.0.0 present.
May 15 00:06:08.908401 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 15 00:06:08.908408 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 15 00:06:08.908415 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 15 00:06:08.908422 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 15 00:06:08.908431 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 15 00:06:08.908438 kernel: audit: initializing netlink subsys (disabled)
May 15 00:06:08.908445 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
May 15 00:06:08.908452 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 15 00:06:08.908459 kernel: cpuidle: using governor menu
May 15 00:06:08.908465 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 15 00:06:08.908472 kernel: ASID allocator initialised with 32768 entries
May 15 00:06:08.908479 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 15 00:06:08.908486 kernel: Serial: AMBA PL011 UART driver
May 15 00:06:08.908494 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 15 00:06:08.908501 kernel: Modules: 0 pages in range for non-PLT usage
May 15 00:06:08.908508 kernel: Modules: 509232 pages in range for PLT usage
May 15 00:06:08.908515 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 15 00:06:08.908522 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 15 00:06:08.908529 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 15 00:06:08.908536 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 15 00:06:08.908543 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 15 00:06:08.908550 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 15 00:06:08.908559 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 15 00:06:08.908566 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 15 00:06:08.908572 kernel: ACPI: Added _OSI(Module Device)
May 15 00:06:08.908579 kernel: ACPI: Added _OSI(Processor Device)
May 15 00:06:08.908586 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 15 00:06:08.908593 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 15 00:06:08.908600 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 15 00:06:08.908607 kernel: ACPI: Interpreter enabled
May 15 00:06:08.908614 kernel: ACPI: Using GIC for interrupt routing
May 15 00:06:08.908622 kernel: ACPI: MCFG table detected, 1 entries
May 15 00:06:08.908630 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 15 00:06:08.908647 kernel: printk: console [ttyAMA0] enabled
May 15 00:06:08.908654 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 15 00:06:08.908792 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 15 00:06:08.908873 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 15 00:06:08.908939 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 15 00:06:08.909018 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 15 00:06:08.909087 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 15 00:06:08.909097 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 15 00:06:08.909109 kernel: PCI host bridge to bus 0000:00
May 15 00:06:08.909183 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 15 00:06:08.909243 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 15 00:06:08.909301 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 15 00:06:08.909358 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 15 00:06:08.909440 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 15 00:06:08.909514 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 15 00:06:08.909581 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 15 00:06:08.909645 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 15 00:06:08.909710 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 15 00:06:08.909774 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 15 00:06:08.909848 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 15 00:06:08.909917 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 15 00:06:08.909992 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 15 00:06:08.910052 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 15 00:06:08.910131 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 15 00:06:08.910141 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 15 00:06:08.910148 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 15 00:06:08.910156 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 15 00:06:08.910165 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 15 00:06:08.910172 kernel: iommu: Default domain type: Translated
May 15 00:06:08.910179 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 15 00:06:08.910187 kernel: efivars: Registered efivars operations
May 15 00:06:08.910194 kernel: vgaarb: loaded
May 15 00:06:08.910201 kernel: clocksource: Switched to clocksource arch_sys_counter
May 15 00:06:08.910208 kernel: VFS: Disk quotas dquot_6.6.0
May 15 00:06:08.910215 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 15 00:06:08.910221 kernel: pnp: PnP ACPI init
May 15 00:06:08.910300 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 15 00:06:08.910310 kernel: pnp: PnP ACPI: found 1 devices
May 15 00:06:08.910317 kernel: NET: Registered PF_INET protocol family
May 15 00:06:08.910324 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 15 00:06:08.910332 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 15 00:06:08.910339 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 15 00:06:08.910346 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 15 00:06:08.910353 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 15 00:06:08.910362 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 15 00:06:08.910369 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 00:06:08.910376 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 00:06:08.910383 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 15 00:06:08.910390 kernel: PCI: CLS 0 bytes, default 64
May 15 00:06:08.910397 kernel: kvm [1]: HYP mode not available
May 15 00:06:08.910403 kernel: Initialise system trusted keyrings
May 15 00:06:08.910410 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 15 00:06:08.910417 kernel: Key type asymmetric registered
May 15 00:06:08.910425 kernel: Asymmetric key parser 'x509' registered
May 15 00:06:08.910432 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 15 00:06:08.910439 kernel: io scheduler mq-deadline registered
May 15 00:06:08.910446 kernel: io scheduler kyber registered
May 15 00:06:08.910453 kernel: io scheduler bfq registered
May 15 00:06:08.910460 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 15 00:06:08.910467 kernel: ACPI: button: Power Button [PWRB]
May 15 00:06:08.910475 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 15 00:06:08.910539 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 15 00:06:08.910550 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 15 00:06:08.910557 kernel: thunder_xcv, ver 1.0
May 15 00:06:08.910564 kernel: thunder_bgx, ver 1.0
May 15 00:06:08.910571 kernel: nicpf, ver 1.0
May 15 00:06:08.910578 kernel: nicvf, ver 1.0
May 15 00:06:08.910664 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 15 00:06:08.910752 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-15T00:06:08 UTC (1747267568)
May 15 00:06:08.910762 kernel: hid: raw HID events driver (C) Jiri Kosina
May 15 00:06:08.910772 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 15 00:06:08.910779 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 15 00:06:08.910786 kernel: watchdog: Hard watchdog permanently disabled
May 15 00:06:08.910793 kernel: NET: Registered PF_INET6 protocol family
May 15 00:06:08.910800 kernel: Segment Routing with IPv6
May 15 00:06:08.910806 kernel: In-situ OAM (IOAM) with IPv6
May 15 00:06:08.910818 kernel: NET: Registered PF_PACKET protocol family
May 15 00:06:08.910833 kernel: Key type dns_resolver registered
May 15 00:06:08.910840 kernel: registered taskstats version 1
May 15 00:06:08.910848 kernel: Loading compiled-in X.509 certificates
May 15 00:06:08.910855 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 02701f8a00afe25f5dd35b2d52090aece02392ec'
May 15 00:06:08.910862 kernel: Key type .fscrypt registered
May 15 00:06:08.910869 kernel: Key type fscrypt-provisioning registered
May 15 00:06:08.910876 kernel: ima: No TPM chip found, activating TPM-bypass!
May 15 00:06:08.910883 kernel: ima: Allocated hash algorithm: sha1
May 15 00:06:08.910890 kernel: ima: No architecture policies found
May 15 00:06:08.910897 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 15 00:06:08.910904 kernel: clk: Disabling unused clocks
May 15 00:06:08.910912 kernel: Freeing unused kernel memory: 38464K
May 15 00:06:08.910919 kernel: Run /init as init process
May 15 00:06:08.910926 kernel: with arguments:
May 15 00:06:08.910933 kernel: /init
May 15 00:06:08.910940 kernel: with environment:
May 15 00:06:08.910947 kernel: HOME=/
May 15 00:06:08.910954 kernel: TERM=linux
May 15 00:06:08.910960 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 15 00:06:08.910968 systemd[1]: Successfully made /usr/ read-only.
May 15 00:06:08.910989 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 15 00:06:08.910997 systemd[1]: Detected virtualization kvm.
May 15 00:06:08.911004 systemd[1]: Detected architecture arm64.
May 15 00:06:08.911011 systemd[1]: Running in initrd.
May 15 00:06:08.911019 systemd[1]: No hostname configured, using default hostname.
May 15 00:06:08.911026 systemd[1]: Hostname set to .
May 15 00:06:08.911034 systemd[1]: Initializing machine ID from VM UUID.
May 15 00:06:08.911043 systemd[1]: Queued start job for default target initrd.target.
May 15 00:06:08.911051 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 00:06:08.911058 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 00:06:08.911066 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 15 00:06:08.911074 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 15 00:06:08.911082 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 15 00:06:08.911090 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 15 00:06:08.911103 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 15 00:06:08.911111 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 15 00:06:08.911122 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 00:06:08.911130 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 15 00:06:08.911137 systemd[1]: Reached target paths.target - Path Units.
May 15 00:06:08.911145 systemd[1]: Reached target slices.target - Slice Units.
May 15 00:06:08.911152 systemd[1]: Reached target swap.target - Swaps.
May 15 00:06:08.911160 systemd[1]: Reached target timers.target - Timer Units.
May 15 00:06:08.911169 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 15 00:06:08.911176 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 15 00:06:08.911186 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 15 00:06:08.911194 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 15 00:06:08.911202 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 15 00:06:08.911209 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 15 00:06:08.911217 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 00:06:08.911224 systemd[1]: Reached target sockets.target - Socket Units.
May 15 00:06:08.911232 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 15 00:06:08.911241 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 15 00:06:08.911248 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 15 00:06:08.911256 systemd[1]: Starting systemd-fsck-usr.service...
May 15 00:06:08.911263 systemd[1]: Starting systemd-journald.service - Journal Service...
May 15 00:06:08.911271 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 15 00:06:08.911279 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 00:06:08.911286 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 15 00:06:08.911294 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 15 00:06:08.911304 systemd[1]: Finished systemd-fsck-usr.service.
May 15 00:06:08.911311 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 15 00:06:08.911319 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 15 00:06:08.911344 systemd-journald[234]: Collecting audit messages is disabled.
May 15 00:06:08.911367 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 15 00:06:08.911387 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 15 00:06:08.911395 kernel: Bridge firewalling registered
May 15 00:06:08.911403 systemd-journald[234]: Journal started
May 15 00:06:08.911423 systemd-journald[234]: Runtime Journal (/run/log/journal/e0fb386caeeb45c49190bdd6aec32c1f) is 5.9M, max 47.3M, 41.4M free.
May 15 00:06:08.894589 systemd-modules-load[238]: Inserted module 'overlay'
May 15 00:06:08.913119 systemd[1]: Started systemd-journald.service - Journal Service.
May 15 00:06:08.911861 systemd-modules-load[238]: Inserted module 'br_netfilter'
May 15 00:06:08.914056 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 15 00:06:08.915299 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 15 00:06:08.918387 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 15 00:06:08.921700 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 15 00:06:08.930494 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 15 00:06:08.931776 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 00:06:08.934898 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 15 00:06:08.941340 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 15 00:06:08.943308 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 15 00:06:08.944456 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 15 00:06:08.947955 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 15 00:06:08.948791 dracut-cmdline[271]: dracut-dracut-053
May 15 00:06:08.950948 dracut-cmdline[271]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e480c7900a171de0fa6cd5a3274267ba91118ae5fbe1e4dae15bc86928fa4899
May 15 00:06:08.995220 systemd-resolved[286]: Positive Trust Anchors:
May 15 00:06:08.995236 systemd-resolved[286]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 15 00:06:08.995267 systemd-resolved[286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 15 00:06:09.003854 systemd-resolved[286]: Defaulting to hostname 'linux'.
May 15 00:06:09.006956 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 15 00:06:09.007961 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 15 00:06:09.034984 kernel: SCSI subsystem initialized
May 15 00:06:09.037991 kernel: Loading iSCSI transport class v2.0-870.
May 15 00:06:09.046003 kernel: iscsi: registered transport (tcp)
May 15 00:06:09.060229 kernel: iscsi: registered transport (qla4xxx)
May 15 00:06:09.060255 kernel: QLogic iSCSI HBA Driver
May 15 00:06:09.103391 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 15 00:06:09.105692 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 15 00:06:09.138173 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 15 00:06:09.138216 kernel: device-mapper: uevent: version 1.0.3
May 15 00:06:09.139597 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 15 00:06:09.188007 kernel: raid6: neonx8 gen() 15019 MB/s
May 15 00:06:09.205010 kernel: raid6: neonx4 gen() 14985 MB/s
May 15 00:06:09.222003 kernel: raid6: neonx2 gen() 12569 MB/s
May 15 00:06:09.238995 kernel: raid6: neonx1 gen() 10482 MB/s
May 15 00:06:09.255993 kernel: raid6: int64x8 gen() 6792 MB/s
May 15 00:06:09.272994 kernel: raid6: int64x4 gen() 7352 MB/s
May 15 00:06:09.289989 kernel: raid6: int64x2 gen() 6109 MB/s
May 15 00:06:09.306995 kernel: raid6: int64x1 gen() 5059 MB/s
May 15 00:06:09.307014 kernel: raid6: using algorithm neonx8 gen() 15019 MB/s
May 15 00:06:09.324008 kernel: raid6: .... xor() 11974 MB/s, rmw enabled
May 15 00:06:09.324034 kernel: raid6: using neon recovery algorithm
May 15 00:06:09.328988 kernel: xor: measuring software checksum speed
May 15 00:06:09.329009 kernel: 8regs : 21658 MB/sec
May 15 00:06:09.329019 kernel: 32regs : 20182 MB/sec
May 15 00:06:09.330334 kernel: arm64_neon : 27946 MB/sec
May 15 00:06:09.330346 kernel: xor: using function: arm64_neon (27946 MB/sec)
May 15 00:06:09.383002 kernel: Btrfs loaded, zoned=no, fsverity=no
May 15 00:06:09.399102 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 15 00:06:09.401728 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 15 00:06:09.436480 systemd-udevd[462]: Using default interface naming scheme 'v255'.
May 15 00:06:09.440251 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 15 00:06:09.443390 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 15 00:06:09.472382 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation
May 15 00:06:09.501884 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 15 00:06:09.506107 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 15 00:06:09.556855 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 15 00:06:09.559449 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 15 00:06:09.579078 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 15 00:06:09.580333 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 15 00:06:09.582010 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 00:06:09.584045 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 15 00:06:09.588105 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 15 00:06:09.608031 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues May 15 00:06:09.608969 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 15 00:06:09.610277 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 15 00:06:09.609094 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 00:06:09.612830 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 00:06:09.613882 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 00:06:09.620295 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 15 00:06:09.620317 kernel: GPT:9289727 != 19775487 May 15 00:06:09.620333 kernel: GPT:Alternate GPT header not at the end of the disk. May 15 00:06:09.620343 kernel: GPT:9289727 != 19775487 May 15 00:06:09.620352 kernel: GPT: Use GNU Parted to correct GPT errors. May 15 00:06:09.620361 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 00:06:09.614051 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
May 15 00:06:09.620309 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 15 00:06:09.621931 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 00:06:09.631831 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 15 00:06:09.638010 kernel: BTRFS: device fsid 6bfb3c95-7a9f-4285-9600-0ba5e7814f96 devid 1 transid 47 /dev/vda3 scanned by (udev-worker) (523) May 15 00:06:09.641999 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (514) May 15 00:06:09.642029 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 00:06:09.655296 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 15 00:06:09.666882 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 15 00:06:09.677386 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 15 00:06:09.678475 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 15 00:06:09.687311 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 15 00:06:09.689195 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 15 00:06:09.690810 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 00:06:09.714251 disk-uuid[551]: Primary Header is updated. May 15 00:06:09.714251 disk-uuid[551]: Secondary Entries is updated. May 15 00:06:09.714251 disk-uuid[551]: Secondary Header is updated. May 15 00:06:09.717995 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 00:06:09.724762 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
May 15 00:06:10.731006 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 00:06:10.731440 disk-uuid[556]: The operation has completed successfully. May 15 00:06:10.754478 systemd[1]: disk-uuid.service: Deactivated successfully. May 15 00:06:10.754589 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 15 00:06:10.782923 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 15 00:06:10.796779 sh[572]: Success May 15 00:06:10.807997 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 15 00:06:10.838453 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 15 00:06:10.840942 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 15 00:06:10.860138 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 15 00:06:10.865497 kernel: BTRFS info (device dm-0): first mount of filesystem 6bfb3c95-7a9f-4285-9600-0ba5e7814f96 May 15 00:06:10.865532 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 15 00:06:10.865543 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 15 00:06:10.866325 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 15 00:06:10.867397 kernel: BTRFS info (device dm-0): using free space tree May 15 00:06:10.871477 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 15 00:06:10.872296 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 15 00:06:10.872958 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 15 00:06:10.875686 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
May 15 00:06:10.895320 kernel: BTRFS info (device vda6): first mount of filesystem 2550790c-7644-4e3d-a6a1-eb68bfdbcf7d May 15 00:06:10.895363 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 15 00:06:10.895374 kernel: BTRFS info (device vda6): using free space tree May 15 00:06:10.897993 kernel: BTRFS info (device vda6): auto enabling async discard May 15 00:06:10.902006 kernel: BTRFS info (device vda6): last unmount of filesystem 2550790c-7644-4e3d-a6a1-eb68bfdbcf7d May 15 00:06:10.904302 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 15 00:06:10.906254 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 15 00:06:10.974013 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 15 00:06:10.977896 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 15 00:06:11.007468 ignition[660]: Ignition 2.20.0 May 15 00:06:11.007477 ignition[660]: Stage: fetch-offline May 15 00:06:11.007507 ignition[660]: no configs at "/usr/lib/ignition/base.d" May 15 00:06:11.007515 ignition[660]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 00:06:11.007721 ignition[660]: parsed url from cmdline: "" May 15 00:06:11.007725 ignition[660]: no config URL provided May 15 00:06:11.007729 ignition[660]: reading system config file "/usr/lib/ignition/user.ign" May 15 00:06:11.007736 ignition[660]: no config at "/usr/lib/ignition/user.ign" May 15 00:06:11.007759 ignition[660]: op(1): [started] loading QEMU firmware config module May 15 00:06:11.007763 ignition[660]: op(1): executing: "modprobe" "qemu_fw_cfg" May 15 00:06:11.014679 systemd-networkd[760]: lo: Link UP May 15 00:06:11.014682 systemd-networkd[760]: lo: Gained carrier May 15 00:06:11.015942 systemd-networkd[760]: Enumeration completed May 15 00:06:11.016108 systemd[1]: Started systemd-networkd.service - Network Configuration. 
May 15 00:06:11.016804 systemd-networkd[760]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 00:06:11.016817 systemd-networkd[760]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 00:06:11.017693 systemd-networkd[760]: eth0: Link UP May 15 00:06:11.017697 systemd-networkd[760]: eth0: Gained carrier May 15 00:06:11.017703 systemd-networkd[760]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 00:06:11.018769 systemd[1]: Reached target network.target - Network. May 15 00:06:11.028457 ignition[660]: op(1): [finished] loading QEMU firmware config module May 15 00:06:11.037035 systemd-networkd[760]: eth0: DHCPv4 address 10.0.0.138/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 15 00:06:11.069841 ignition[660]: parsing config with SHA512: df7fff6593418cc9a57ddb959c17f4612c4eace32a93dca8c4272bd3278d6ea8f31d32fd54972600990cac3d289ed7e67e860d113bdae6fc38433311212c17f6 May 15 00:06:11.075827 unknown[660]: fetched base config from "system" May 15 00:06:11.075956 unknown[660]: fetched user config from "qemu" May 15 00:06:11.077222 ignition[660]: fetch-offline: fetch-offline passed May 15 00:06:11.077595 ignition[660]: Ignition finished successfully May 15 00:06:11.079101 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 15 00:06:11.080519 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 15 00:06:11.081288 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
May 15 00:06:11.105358 ignition[769]: Ignition 2.20.0 May 15 00:06:11.105369 ignition[769]: Stage: kargs May 15 00:06:11.105528 ignition[769]: no configs at "/usr/lib/ignition/base.d" May 15 00:06:11.105538 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 00:06:11.106437 ignition[769]: kargs: kargs passed May 15 00:06:11.109436 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 15 00:06:11.106482 ignition[769]: Ignition finished successfully May 15 00:06:11.111384 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 15 00:06:11.135396 ignition[777]: Ignition 2.20.0 May 15 00:06:11.135418 ignition[777]: Stage: disks May 15 00:06:11.135572 ignition[777]: no configs at "/usr/lib/ignition/base.d" May 15 00:06:11.135581 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 00:06:11.137740 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 15 00:06:11.136464 ignition[777]: disks: disks passed May 15 00:06:11.138948 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 15 00:06:11.136507 ignition[777]: Ignition finished successfully May 15 00:06:11.140718 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 15 00:06:11.142689 systemd[1]: Reached target local-fs.target - Local File Systems. May 15 00:06:11.143924 systemd[1]: Reached target sysinit.target - System Initialization. May 15 00:06:11.145934 systemd[1]: Reached target basic.target - Basic System. May 15 00:06:11.148097 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 15 00:06:11.169119 systemd-fsck[788]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 15 00:06:11.173634 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 15 00:06:11.175852 systemd[1]: Mounting sysroot.mount - /sysroot... 
May 15 00:06:11.237001 kernel: EXT4-fs (vda9): mounted filesystem ef34f074-e751-474e-98f6-0625809ada62 r/w with ordered data mode. Quota mode: none. May 15 00:06:11.237548 systemd[1]: Mounted sysroot.mount - /sysroot. May 15 00:06:11.238780 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 15 00:06:11.241862 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 15 00:06:11.244314 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 15 00:06:11.245231 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 15 00:06:11.245270 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 15 00:06:11.245294 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 15 00:06:11.251508 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 15 00:06:11.253917 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 15 00:06:11.260034 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (796) May 15 00:06:11.262818 kernel: BTRFS info (device vda6): first mount of filesystem 2550790c-7644-4e3d-a6a1-eb68bfdbcf7d May 15 00:06:11.262854 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 15 00:06:11.262865 kernel: BTRFS info (device vda6): using free space tree May 15 00:06:11.264995 kernel: BTRFS info (device vda6): auto enabling async discard May 15 00:06:11.265917 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 15 00:06:11.301573 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory May 15 00:06:11.306275 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory May 15 00:06:11.310539 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory May 15 00:06:11.314447 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory May 15 00:06:11.385633 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 15 00:06:11.387971 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 15 00:06:11.390402 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 15 00:06:11.406004 kernel: BTRFS info (device vda6): last unmount of filesystem 2550790c-7644-4e3d-a6a1-eb68bfdbcf7d May 15 00:06:11.429212 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 15 00:06:11.439528 ignition[911]: INFO : Ignition 2.20.0 May 15 00:06:11.439528 ignition[911]: INFO : Stage: mount May 15 00:06:11.440918 ignition[911]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 00:06:11.440918 ignition[911]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 00:06:11.440918 ignition[911]: INFO : mount: mount passed May 15 00:06:11.440918 ignition[911]: INFO : Ignition finished successfully May 15 00:06:11.443404 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 15 00:06:11.445866 systemd[1]: Starting ignition-files.service - Ignition (files)... May 15 00:06:11.988947 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 15 00:06:11.990389 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
May 15 00:06:12.008993 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (927) May 15 00:06:12.011100 kernel: BTRFS info (device vda6): first mount of filesystem 2550790c-7644-4e3d-a6a1-eb68bfdbcf7d May 15 00:06:12.011116 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 15 00:06:12.011133 kernel: BTRFS info (device vda6): using free space tree May 15 00:06:12.015002 kernel: BTRFS info (device vda6): auto enabling async discard May 15 00:06:12.015785 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 15 00:06:12.048024 ignition[944]: INFO : Ignition 2.20.0 May 15 00:06:12.048024 ignition[944]: INFO : Stage: files May 15 00:06:12.049512 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 00:06:12.049512 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 00:06:12.049512 ignition[944]: DEBUG : files: compiled without relabeling support, skipping May 15 00:06:12.052924 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 15 00:06:12.052924 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 15 00:06:12.052924 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 15 00:06:12.052924 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 15 00:06:12.052924 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 15 00:06:12.052807 unknown[944]: wrote ssh authorized keys file for user: core May 15 00:06:12.059869 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 15 00:06:12.059869 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 May 15 00:06:12.614162 
systemd-networkd[760]: eth0: Gained IPv6LL May 15 00:06:13.096790 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 15 00:06:17.119294 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 15 00:06:17.119294 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 15 00:06:17.122357 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 15 00:06:17.122357 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 15 00:06:17.122357 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 15 00:06:17.122357 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 00:06:17.122357 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 00:06:17.122357 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 00:06:17.122357 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 00:06:17.122357 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 15 00:06:17.122357 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 15 00:06:17.122357 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 15 00:06:17.122357 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 15 00:06:17.122357 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 15 00:06:17.122357 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 May 15 00:06:17.457025 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 15 00:06:17.910970 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 15 00:06:17.910970 ignition[944]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 15 00:06:17.913648 ignition[944]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 00:06:17.913648 ignition[944]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 00:06:17.913648 ignition[944]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 15 00:06:17.913648 ignition[944]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 15 00:06:17.913648 ignition[944]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 15 00:06:17.913648 ignition[944]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at 
"/sysroot/etc/systemd/system/coreos-metadata.service" May 15 00:06:17.913648 ignition[944]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" May 15 00:06:17.913648 ignition[944]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" May 15 00:06:17.931024 ignition[944]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" May 15 00:06:17.934402 ignition[944]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 15 00:06:17.936704 ignition[944]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" May 15 00:06:17.936704 ignition[944]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" May 15 00:06:17.936704 ignition[944]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" May 15 00:06:17.936704 ignition[944]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" May 15 00:06:17.936704 ignition[944]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" May 15 00:06:17.936704 ignition[944]: INFO : files: files passed May 15 00:06:17.936704 ignition[944]: INFO : Ignition finished successfully May 15 00:06:17.937131 systemd[1]: Finished ignition-files.service - Ignition (files). May 15 00:06:17.941153 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 15 00:06:17.943854 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 15 00:06:17.955191 systemd[1]: ignition-quench.service: Deactivated successfully. May 15 00:06:17.955295 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
May 15 00:06:17.958557 initrd-setup-root-after-ignition[974]: grep: /sysroot/oem/oem-release: No such file or directory May 15 00:06:17.962085 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 00:06:17.962085 initrd-setup-root-after-ignition[976]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 15 00:06:17.964992 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 00:06:17.965181 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 15 00:06:17.967446 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 15 00:06:17.970097 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 15 00:06:18.005141 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 15 00:06:18.005260 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 15 00:06:18.008010 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 15 00:06:18.010039 systemd[1]: Reached target initrd.target - Initrd Default Target. May 15 00:06:18.011611 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 15 00:06:18.012459 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 15 00:06:18.039501 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 15 00:06:18.042443 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 15 00:06:18.070273 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 15 00:06:18.072339 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 00:06:18.075415 systemd[1]: Stopped target timers.target - Timer Units. 
May 15 00:06:18.076319 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 15 00:06:18.076460 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 15 00:06:18.078732 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 15 00:06:18.080971 systemd[1]: Stopped target basic.target - Basic System. May 15 00:06:18.085616 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 15 00:06:18.088133 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 15 00:06:18.089779 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 15 00:06:18.093506 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 15 00:06:18.094840 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 15 00:06:18.096531 systemd[1]: Stopped target sysinit.target - System Initialization. May 15 00:06:18.098016 systemd[1]: Stopped target local-fs.target - Local File Systems. May 15 00:06:18.099642 systemd[1]: Stopped target swap.target - Swaps. May 15 00:06:18.100869 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 15 00:06:18.101018 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 15 00:06:18.103007 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 15 00:06:18.104691 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 00:06:18.106199 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 15 00:06:18.106309 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 00:06:18.108068 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 15 00:06:18.108190 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 15 00:06:18.110639 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
May 15 00:06:18.110758 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 15 00:06:18.112301 systemd[1]: Stopped target paths.target - Path Units. May 15 00:06:18.113521 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 15 00:06:18.117028 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 00:06:18.117964 systemd[1]: Stopped target slices.target - Slice Units. May 15 00:06:18.120860 systemd[1]: Stopped target sockets.target - Socket Units. May 15 00:06:18.122150 systemd[1]: iscsid.socket: Deactivated successfully. May 15 00:06:18.122236 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 15 00:06:18.123440 systemd[1]: iscsiuio.socket: Deactivated successfully. May 15 00:06:18.123522 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 15 00:06:18.124841 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 15 00:06:18.124956 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 15 00:06:18.126579 systemd[1]: ignition-files.service: Deactivated successfully. May 15 00:06:18.126687 systemd[1]: Stopped ignition-files.service - Ignition (files). May 15 00:06:18.128809 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 15 00:06:18.131412 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 15 00:06:18.132531 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 15 00:06:18.132644 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 15 00:06:18.134065 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 15 00:06:18.134164 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 15 00:06:18.143236 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
May 15 00:06:18.143330 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 15 00:06:18.151634 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 15 00:06:18.154094 ignition[1000]: INFO : Ignition 2.20.0 May 15 00:06:18.154094 ignition[1000]: INFO : Stage: umount May 15 00:06:18.155502 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 00:06:18.155502 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 00:06:18.155502 ignition[1000]: INFO : umount: umount passed May 15 00:06:18.155502 ignition[1000]: INFO : Ignition finished successfully May 15 00:06:18.155638 systemd[1]: sysroot-boot.service: Deactivated successfully. May 15 00:06:18.155771 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 15 00:06:18.157164 systemd[1]: ignition-mount.service: Deactivated successfully. May 15 00:06:18.159011 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 15 00:06:18.161118 systemd[1]: Stopped target network.target - Network. May 15 00:06:18.162511 systemd[1]: ignition-disks.service: Deactivated successfully. May 15 00:06:18.162580 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 15 00:06:18.164123 systemd[1]: ignition-kargs.service: Deactivated successfully. May 15 00:06:18.164172 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 15 00:06:18.165703 systemd[1]: ignition-setup.service: Deactivated successfully. May 15 00:06:18.165750 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 15 00:06:18.167051 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 15 00:06:18.167094 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 15 00:06:18.169184 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 15 00:06:18.169242 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
May 15 00:06:18.170885 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 15 00:06:18.172182 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 15 00:06:18.179454 systemd[1]: systemd-resolved.service: Deactivated successfully. May 15 00:06:18.181042 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 15 00:06:18.184242 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 15 00:06:18.184487 systemd[1]: systemd-networkd.service: Deactivated successfully. May 15 00:06:18.184590 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 15 00:06:18.187241 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 15 00:06:18.187853 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 15 00:06:18.187910 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 15 00:06:18.190035 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 15 00:06:18.191733 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 15 00:06:18.191812 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 15 00:06:18.193387 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 00:06:18.193433 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 15 00:06:18.195587 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 15 00:06:18.195635 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 15 00:06:18.197198 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 15 00:06:18.197244 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 00:06:18.199670 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
May 15 00:06:18.203091 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 15 00:06:18.203157 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 15 00:06:18.220226 systemd[1]: systemd-udevd.service: Deactivated successfully. May 15 00:06:18.220389 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 00:06:18.222576 systemd[1]: network-cleanup.service: Deactivated successfully. May 15 00:06:18.222664 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 15 00:06:18.224142 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 15 00:06:18.224221 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 15 00:06:18.225277 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 15 00:06:18.225309 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 15 00:06:18.226648 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 15 00:06:18.226703 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 15 00:06:18.228854 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 15 00:06:18.228896 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 15 00:06:18.231067 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 15 00:06:18.231115 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 00:06:18.234166 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 15 00:06:18.235696 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 15 00:06:18.235756 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 00:06:18.237874 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. 
May 15 00:06:18.237922 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 15 00:06:18.239887 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 15 00:06:18.239939 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 15 00:06:18.241794 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 00:06:18.241838 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 15 00:06:18.245348 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 15 00:06:18.245403 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 15 00:06:18.254523 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 15 00:06:18.254633 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 15 00:06:18.256538 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 15 00:06:18.258630 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 15 00:06:18.280418 systemd[1]: Switching root. May 15 00:06:18.309827 systemd-journald[234]: Journal stopped May 15 00:06:19.103953 systemd-journald[234]: Received SIGTERM from PID 1 (systemd). 
May 15 00:06:19.104023 kernel: SELinux: policy capability network_peer_controls=1 May 15 00:06:19.104037 kernel: SELinux: policy capability open_perms=1 May 15 00:06:19.104050 kernel: SELinux: policy capability extended_socket_class=1 May 15 00:06:19.104060 kernel: SELinux: policy capability always_check_network=0 May 15 00:06:19.104073 kernel: SELinux: policy capability cgroup_seclabel=1 May 15 00:06:19.104086 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 15 00:06:19.104095 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 15 00:06:19.104104 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 15 00:06:19.104113 kernel: audit: type=1403 audit(1747267578.499:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 15 00:06:19.105185 systemd[1]: Successfully loaded SELinux policy in 36.127ms. May 15 00:06:19.105241 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.195ms. May 15 00:06:19.105259 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 15 00:06:19.105270 systemd[1]: Detected virtualization kvm. May 15 00:06:19.105280 systemd[1]: Detected architecture arm64. May 15 00:06:19.105305 systemd[1]: Detected first boot. May 15 00:06:19.105316 systemd[1]: Initializing machine ID from VM UUID. May 15 00:06:19.105326 kernel: NET: Registered PF_VSOCK protocol family May 15 00:06:19.105337 zram_generator::config[1049]: No configuration found. May 15 00:06:19.105348 systemd[1]: Populated /etc with preset unit settings. May 15 00:06:19.105361 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 15 00:06:19.105371 systemd[1]: initrd-switch-root.service: Deactivated successfully. 
May 15 00:06:19.105382 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 15 00:06:19.105392 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 15 00:06:19.105403 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 15 00:06:19.105413 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 15 00:06:19.105423 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 15 00:06:19.105434 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 15 00:06:19.105444 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 15 00:06:19.105456 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 15 00:06:19.105467 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 15 00:06:19.105478 systemd[1]: Created slice user.slice - User and Session Slice. May 15 00:06:19.105488 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 00:06:19.105499 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 00:06:19.105509 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 15 00:06:19.105519 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 15 00:06:19.105531 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 15 00:06:19.105543 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 15 00:06:19.105554 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 15 00:06:19.105564 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
May 15 00:06:19.105575 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 15 00:06:19.105585 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 15 00:06:19.105596 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 15 00:06:19.105606 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 15 00:06:19.105616 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 00:06:19.105628 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 15 00:06:19.105639 systemd[1]: Reached target slices.target - Slice Units. May 15 00:06:19.105649 systemd[1]: Reached target swap.target - Swaps. May 15 00:06:19.105659 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 15 00:06:19.105670 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 15 00:06:19.105680 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 15 00:06:19.105690 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 15 00:06:19.105701 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 15 00:06:19.105712 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 15 00:06:19.105722 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 15 00:06:19.105733 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 15 00:06:19.105744 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 15 00:06:19.105754 systemd[1]: Mounting media.mount - External Media Directory... May 15 00:06:19.105773 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 15 00:06:19.105784 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... 
May 15 00:06:19.105795 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 15 00:06:19.105805 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 15 00:06:19.105816 systemd[1]: Reached target machines.target - Containers. May 15 00:06:19.105829 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 15 00:06:19.105839 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 00:06:19.105850 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 15 00:06:19.105860 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 15 00:06:19.105870 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 00:06:19.105880 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 15 00:06:19.105891 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 00:06:19.105901 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 15 00:06:19.105913 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 00:06:19.105924 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 15 00:06:19.105934 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 15 00:06:19.105948 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 15 00:06:19.105959 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 15 00:06:19.105969 systemd[1]: Stopped systemd-fsck-usr.service. 
May 15 00:06:19.105990 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 00:06:19.106004 kernel: fuse: init (API version 7.39) May 15 00:06:19.106016 systemd[1]: Starting systemd-journald.service - Journal Service... May 15 00:06:19.106028 kernel: loop: module loaded May 15 00:06:19.106038 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 15 00:06:19.106048 kernel: ACPI: bus type drm_connector registered May 15 00:06:19.106057 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 15 00:06:19.106067 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 15 00:06:19.106078 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 15 00:06:19.106088 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 15 00:06:19.106098 systemd[1]: verity-setup.service: Deactivated successfully. May 15 00:06:19.106109 systemd[1]: Stopped verity-setup.service. May 15 00:06:19.106120 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 15 00:06:19.106130 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 15 00:06:19.106141 systemd[1]: Mounted media.mount - External Media Directory. May 15 00:06:19.106151 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 15 00:06:19.106191 systemd-journald[1112]: Collecting audit messages is disabled. May 15 00:06:19.106215 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 15 00:06:19.106226 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
May 15 00:06:19.106238 systemd-journald[1112]: Journal started May 15 00:06:19.106259 systemd-journald[1112]: Runtime Journal (/run/log/journal/e0fb386caeeb45c49190bdd6aec32c1f) is 5.9M, max 47.3M, 41.4M free. May 15 00:06:18.908200 systemd[1]: Queued start job for default target multi-user.target. May 15 00:06:18.917912 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 15 00:06:18.918269 systemd[1]: systemd-journald.service: Deactivated successfully. May 15 00:06:19.111323 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 15 00:06:19.111375 systemd[1]: Started systemd-journald.service - Journal Service. May 15 00:06:19.112215 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 15 00:06:19.112387 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 15 00:06:19.113560 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 00:06:19.113728 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 00:06:19.114955 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 15 00:06:19.116161 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 00:06:19.116323 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 15 00:06:19.117338 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 00:06:19.117497 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 00:06:19.118698 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 15 00:06:19.118873 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 15 00:06:19.120143 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 00:06:19.120304 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 00:06:19.121388 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
May 15 00:06:19.123453 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 15 00:06:19.124804 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 15 00:06:19.126183 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 15 00:06:19.137911 systemd[1]: Reached target network-pre.target - Preparation for Network. May 15 00:06:19.140222 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 15 00:06:19.141955 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 15 00:06:19.142797 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 15 00:06:19.142825 systemd[1]: Reached target local-fs.target - Local File Systems. May 15 00:06:19.144548 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 15 00:06:19.151820 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 15 00:06:19.153892 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 15 00:06:19.154961 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 00:06:19.156294 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 15 00:06:19.157966 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 15 00:06:19.159185 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 00:06:19.163125 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
May 15 00:06:19.166496 systemd-journald[1112]: Time spent on flushing to /var/log/journal/e0fb386caeeb45c49190bdd6aec32c1f is 21.177ms for 867 entries. May 15 00:06:19.166496 systemd-journald[1112]: System Journal (/var/log/journal/e0fb386caeeb45c49190bdd6aec32c1f) is 8M, max 195.6M, 187.6M free. May 15 00:06:19.194535 systemd-journald[1112]: Received client request to flush runtime journal. May 15 00:06:19.194571 kernel: loop0: detected capacity change from 0 to 126448 May 15 00:06:19.164011 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 15 00:06:19.165380 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 00:06:19.167945 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 15 00:06:19.169701 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 15 00:06:19.173576 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 15 00:06:19.175701 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 15 00:06:19.176919 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 15 00:06:19.178150 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 15 00:06:19.186149 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 15 00:06:19.199780 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 15 00:06:19.201621 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 15 00:06:19.203447 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 00:06:19.207558 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 15 00:06:19.209503 systemd-tmpfiles[1168]: ACLs are not supported, ignoring. 
May 15 00:06:19.209794 systemd-tmpfiles[1168]: ACLs are not supported, ignoring. May 15 00:06:19.210605 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 15 00:06:19.214428 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 15 00:06:19.218004 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 15 00:06:19.220265 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 15 00:06:19.221613 udevadm[1174]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 15 00:06:19.248002 kernel: loop1: detected capacity change from 0 to 189592 May 15 00:06:19.252027 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 15 00:06:19.258641 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 15 00:06:19.263802 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 15 00:06:19.288011 kernel: loop2: detected capacity change from 0 to 103832 May 15 00:06:19.290955 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. May 15 00:06:19.290972 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. May 15 00:06:19.294743 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 00:06:19.326002 kernel: loop3: detected capacity change from 0 to 126448 May 15 00:06:19.332999 kernel: loop4: detected capacity change from 0 to 189592 May 15 00:06:19.341001 kernel: loop5: detected capacity change from 0 to 103832 May 15 00:06:19.345965 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 15 00:06:19.346373 (sd-merge)[1195]: Merged extensions into '/usr'. May 15 00:06:19.350742 systemd[1]: Reload requested from client PID 1167 ('systemd-sysext') (unit systemd-sysext.service)... 
May 15 00:06:19.350770 systemd[1]: Reloading... May 15 00:06:19.391061 zram_generator::config[1221]: No configuration found. May 15 00:06:19.445925 ldconfig[1162]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 15 00:06:19.502553 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 00:06:19.552395 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 15 00:06:19.552556 systemd[1]: Reloading finished in 201 ms. May 15 00:06:19.568576 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 15 00:06:19.569832 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 15 00:06:19.591212 systemd[1]: Starting ensure-sysext.service... May 15 00:06:19.592795 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 15 00:06:19.603262 systemd[1]: Reload requested from client PID 1258 ('systemctl') (unit ensure-sysext.service)... May 15 00:06:19.603276 systemd[1]: Reloading... May 15 00:06:19.608371 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 15 00:06:19.608580 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 15 00:06:19.609297 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 15 00:06:19.609501 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. May 15 00:06:19.609552 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. May 15 00:06:19.611894 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. 
May 15 00:06:19.611907 systemd-tmpfiles[1259]: Skipping /boot May 15 00:06:19.621277 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. May 15 00:06:19.621292 systemd-tmpfiles[1259]: Skipping /boot May 15 00:06:19.650995 zram_generator::config[1288]: No configuration found. May 15 00:06:19.738154 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 00:06:19.788249 systemd[1]: Reloading finished in 184 ms. May 15 00:06:19.800565 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 15 00:06:19.818170 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 00:06:19.827206 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 15 00:06:19.830366 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 15 00:06:19.838202 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 15 00:06:19.841624 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 15 00:06:19.849000 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 00:06:19.851860 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 15 00:06:19.856085 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 00:06:19.863826 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 00:06:19.870267 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 00:06:19.873909 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
May 15 00:06:19.875597 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 00:06:19.875735 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 00:06:19.880525 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 15 00:06:19.882635 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 15 00:06:19.885345 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 00:06:19.885552 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 00:06:19.889578 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 00:06:19.889749 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 00:06:19.895479 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 00:06:19.895861 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 00:06:19.901609 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 15 00:06:19.905629 systemd-udevd[1329]: Using default interface naming scheme 'v255'. May 15 00:06:19.906418 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 00:06:19.908041 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 00:06:19.913218 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 00:06:19.923270 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 00:06:19.924308 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
May 15 00:06:19.924428 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 00:06:19.926451 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 15 00:06:19.927353 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 00:06:19.930758 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 15 00:06:19.932447 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 00:06:19.932666 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 00:06:19.934093 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 00:06:19.934242 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 00:06:19.935456 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 00:06:19.936908 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 00:06:19.937060 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 00:06:19.946698 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 15 00:06:19.955152 systemd[1]: Finished ensure-sysext.service. May 15 00:06:19.959669 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 00:06:19.962520 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 00:06:19.965320 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 15 00:06:19.967548 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
May 15 00:06:19.976245 augenrules[1394]: No rules May 15 00:06:19.978385 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 00:06:19.979882 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 00:06:19.979938 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 00:06:19.983389 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 15 00:06:19.987062 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 15 00:06:19.988077 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 00:06:19.988550 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 15 00:06:19.990518 systemd[1]: audit-rules.service: Deactivated successfully. May 15 00:06:19.991584 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 15 00:06:19.992888 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 00:06:19.993066 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 00:06:19.994239 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 00:06:19.994404 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 15 00:06:19.997143 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 00:06:19.997315 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 00:06:19.999054 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 00:06:19.999284 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
May 15 00:06:20.014553 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 15 00:06:20.015745 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 00:06:20.015810 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 15 00:06:20.054015 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 47 scanned by (udev-worker) (1365) May 15 00:06:20.069778 systemd-resolved[1327]: Positive Trust Anchors: May 15 00:06:20.073461 systemd-resolved[1327]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 00:06:20.073496 systemd-resolved[1327]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 15 00:06:20.082133 systemd-resolved[1327]: Defaulting to hostname 'linux'. May 15 00:06:20.085560 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 15 00:06:20.086706 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 15 00:06:20.106226 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 15 00:06:20.107304 systemd[1]: Reached target time-set.target - System Time Set. May 15 00:06:20.112942 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
May 15 00:06:20.115943 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 15 00:06:20.123939 systemd-networkd[1400]: lo: Link UP
May 15 00:06:20.123944 systemd-networkd[1400]: lo: Gained carrier
May 15 00:06:20.124898 systemd-networkd[1400]: Enumeration completed
May 15 00:06:20.126140 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 15 00:06:20.127137 systemd[1]: Reached target network.target - Network.
May 15 00:06:20.129142 systemd-networkd[1400]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 00:06:20.129153 systemd-networkd[1400]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 15 00:06:20.129845 systemd-networkd[1400]: eth0: Link UP
May 15 00:06:20.129855 systemd-networkd[1400]: eth0: Gained carrier
May 15 00:06:20.129870 systemd-networkd[1400]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 00:06:20.132575 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 15 00:06:20.134575 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 15 00:06:20.143101 systemd-networkd[1400]: eth0: DHCPv4 address 10.0.0.138/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 15 00:06:20.143715 systemd-timesyncd[1401]: Network configuration changed, trying to establish connection.
May 15 00:06:20.149108 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 15 00:06:20.153551 systemd-timesyncd[1401]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 15 00:06:20.153608 systemd-timesyncd[1401]: Initial clock synchronization to Thu 2025-05-15 00:06:20.019256 UTC.
May 15 00:06:20.163016 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 00:06:20.173324 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 15 00:06:20.174730 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 15 00:06:20.177698 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 15 00:06:20.203296 lvm[1429]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 15 00:06:20.208765 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 15 00:06:20.236530 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 15 00:06:20.237800 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 15 00:06:20.238702 systemd[1]: Reached target sysinit.target - System Initialization.
May 15 00:06:20.239576 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 15 00:06:20.240534 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 15 00:06:20.241729 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 15 00:06:20.242667 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 15 00:06:20.243817 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 15 00:06:20.244868 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 15 00:06:20.244903 systemd[1]: Reached target paths.target - Path Units.
May 15 00:06:20.245628 systemd[1]: Reached target timers.target - Timer Units.
May 15 00:06:20.247336 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 15 00:06:20.249410 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 15 00:06:20.252345 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 15 00:06:20.253502 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 15 00:06:20.254570 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 15 00:06:20.259064 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 15 00:06:20.260389 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 15 00:06:20.262649 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 15 00:06:20.264268 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 15 00:06:20.265184 systemd[1]: Reached target sockets.target - Socket Units.
May 15 00:06:20.266035 systemd[1]: Reached target basic.target - Basic System.
May 15 00:06:20.266788 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 15 00:06:20.266819 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 15 00:06:20.267846 systemd[1]: Starting containerd.service - containerd container runtime...
May 15 00:06:20.269834 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 15 00:06:20.270672 lvm[1437]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 15 00:06:20.273110 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 15 00:06:20.278155 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 15 00:06:20.278967 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 15 00:06:20.281712 jq[1440]: false
May 15 00:06:20.280091 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 15 00:06:20.282324 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 15 00:06:20.296022 dbus-daemon[1439]: [system] SELinux support is enabled
May 15 00:06:20.302117 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 15 00:06:20.305131 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 15 00:06:20.314191 systemd[1]: Starting systemd-logind.service - User Login Management...
May 15 00:06:20.316074 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 15 00:06:20.316627 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 15 00:06:20.316924 extend-filesystems[1441]: Found loop3
May 15 00:06:20.318314 extend-filesystems[1441]: Found loop4
May 15 00:06:20.318314 extend-filesystems[1441]: Found loop5
May 15 00:06:20.318314 extend-filesystems[1441]: Found vda
May 15 00:06:20.318314 extend-filesystems[1441]: Found vda1
May 15 00:06:20.318314 extend-filesystems[1441]: Found vda2
May 15 00:06:20.318314 extend-filesystems[1441]: Found vda3
May 15 00:06:20.318314 extend-filesystems[1441]: Found usr
May 15 00:06:20.318314 extend-filesystems[1441]: Found vda4
May 15 00:06:20.318314 extend-filesystems[1441]: Found vda6
May 15 00:06:20.318314 extend-filesystems[1441]: Found vda7
May 15 00:06:20.318314 extend-filesystems[1441]: Found vda9
May 15 00:06:20.318314 extend-filesystems[1441]: Checking size of /dev/vda9
May 15 00:06:20.317911 systemd[1]: Starting update-engine.service - Update Engine...
May 15 00:06:20.324785 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 15 00:06:20.334348 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 15 00:06:20.337462 jq[1457]: true
May 15 00:06:20.340402 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 15 00:06:20.342717 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 15 00:06:20.342927 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 15 00:06:20.343245 systemd[1]: motdgen.service: Deactivated successfully.
May 15 00:06:20.343417 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 15 00:06:20.346556 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 15 00:06:20.346767 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 15 00:06:20.353559 extend-filesystems[1441]: Resized partition /dev/vda9
May 15 00:06:20.364474 extend-filesystems[1468]: resize2fs 1.47.2 (1-Jan-2025)
May 15 00:06:20.369132 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 47 scanned by (udev-worker) (1365)
May 15 00:06:20.369191 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 15 00:06:20.368497 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 15 00:06:20.368527 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 15 00:06:20.375250 jq[1464]: true
May 15 00:06:20.370703 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 15 00:06:20.370726 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 15 00:06:20.387482 tar[1461]: linux-arm64/helm
May 15 00:06:20.395995 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 15 00:06:20.398495 (ntainerd)[1475]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 15 00:06:20.420251 update_engine[1454]: I20250515 00:06:20.411432 1454 main.cc:92] Flatcar Update Engine starting
May 15 00:06:20.420251 update_engine[1454]: I20250515 00:06:20.413391 1454 update_check_scheduler.cc:74] Next update check in 2m54s
May 15 00:06:20.413635 systemd[1]: Started update-engine.service - Update Engine.
May 15 00:06:20.417825 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 15 00:06:20.421845 extend-filesystems[1468]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 15 00:06:20.421845 extend-filesystems[1468]: old_desc_blocks = 1, new_desc_blocks = 1
May 15 00:06:20.421845 extend-filesystems[1468]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 15 00:06:20.428560 extend-filesystems[1441]: Resized filesystem in /dev/vda9
May 15 00:06:20.428366 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 15 00:06:20.429423 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 15 00:06:20.429966 systemd-logind[1452]: Watching system buttons on /dev/input/event0 (Power Button)
May 15 00:06:20.431594 systemd-logind[1452]: New seat seat0.
May 15 00:06:20.432772 systemd[1]: Started systemd-logind.service - User Login Management.
May 15 00:06:20.475095 bash[1495]: Updated "/home/core/.ssh/authorized_keys"
May 15 00:06:20.478644 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 15 00:06:20.481822 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 15 00:06:20.544100 locksmithd[1486]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 15 00:06:20.629225 sshd_keygen[1458]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 15 00:06:20.647325 containerd[1475]: time="2025-05-15T00:06:20Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 15 00:06:20.648261 containerd[1475]: time="2025-05-15T00:06:20.648211320Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1
May 15 00:06:20.653522 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 15 00:06:20.657153 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 15 00:06:20.660284 containerd[1475]: time="2025-05-15T00:06:20.658990720Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="17.68µs"
May 15 00:06:20.660284 containerd[1475]: time="2025-05-15T00:06:20.659038760Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 15 00:06:20.660284 containerd[1475]: time="2025-05-15T00:06:20.659059680Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 15 00:06:20.660284 containerd[1475]: time="2025-05-15T00:06:20.659221320Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 15 00:06:20.660284 containerd[1475]: time="2025-05-15T00:06:20.659241440Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 15 00:06:20.660284 containerd[1475]: time="2025-05-15T00:06:20.659266040Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 15 00:06:20.660284 containerd[1475]: time="2025-05-15T00:06:20.659316840Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 15 00:06:20.660284 containerd[1475]: time="2025-05-15T00:06:20.659328440Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 15 00:06:20.660284 containerd[1475]: time="2025-05-15T00:06:20.659624600Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 15 00:06:20.660284 containerd[1475]: time="2025-05-15T00:06:20.659638960Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 15 00:06:20.660284 containerd[1475]: time="2025-05-15T00:06:20.659650080Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 15 00:06:20.660284 containerd[1475]: time="2025-05-15T00:06:20.659658040Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 15 00:06:20.660520 containerd[1475]: time="2025-05-15T00:06:20.659726200Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 15 00:06:20.660520 containerd[1475]: time="2025-05-15T00:06:20.659920640Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 15 00:06:20.660520 containerd[1475]: time="2025-05-15T00:06:20.659962000Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 15 00:06:20.660600 containerd[1475]: time="2025-05-15T00:06:20.660574760Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 15 00:06:20.660678 containerd[1475]: time="2025-05-15T00:06:20.660663640Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 15 00:06:20.661130 containerd[1475]: time="2025-05-15T00:06:20.661093480Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 15 00:06:20.661216 containerd[1475]: time="2025-05-15T00:06:20.661199360Z" level=info msg="metadata content store policy set" policy=shared
May 15 00:06:20.664966 containerd[1475]: time="2025-05-15T00:06:20.664929760Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 15 00:06:20.665020 containerd[1475]: time="2025-05-15T00:06:20.664995520Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 15 00:06:20.665020 containerd[1475]: time="2025-05-15T00:06:20.665011320Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 15 00:06:20.665060 containerd[1475]: time="2025-05-15T00:06:20.665023680Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 15 00:06:20.665060 containerd[1475]: time="2025-05-15T00:06:20.665039680Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 15 00:06:20.665060 containerd[1475]: time="2025-05-15T00:06:20.665050880Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 15 00:06:20.665126 containerd[1475]: time="2025-05-15T00:06:20.665063480Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 15 00:06:20.665126 containerd[1475]: time="2025-05-15T00:06:20.665076720Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 15 00:06:20.665126 containerd[1475]: time="2025-05-15T00:06:20.665087480Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 15 00:06:20.665126 containerd[1475]: time="2025-05-15T00:06:20.665098520Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 15 00:06:20.665126 containerd[1475]: time="2025-05-15T00:06:20.665115000Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 15 00:06:20.665126 containerd[1475]: time="2025-05-15T00:06:20.665126760Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 15 00:06:20.665270 containerd[1475]: time="2025-05-15T00:06:20.665239080Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 15 00:06:20.665299 containerd[1475]: time="2025-05-15T00:06:20.665270240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 15 00:06:20.665299 containerd[1475]: time="2025-05-15T00:06:20.665284680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 15 00:06:20.665299 containerd[1475]: time="2025-05-15T00:06:20.665295960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 15 00:06:20.665346 containerd[1475]: time="2025-05-15T00:06:20.665306360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 15 00:06:20.665346 containerd[1475]: time="2025-05-15T00:06:20.665316160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 15 00:06:20.665346 containerd[1475]: time="2025-05-15T00:06:20.665327200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 15 00:06:20.665346 containerd[1475]: time="2025-05-15T00:06:20.665337400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 15 00:06:20.665418 containerd[1475]: time="2025-05-15T00:06:20.665347920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 15 00:06:20.665418 containerd[1475]: time="2025-05-15T00:06:20.665358480Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 15 00:06:20.665418 containerd[1475]: time="2025-05-15T00:06:20.665369640Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 15 00:06:20.665665 containerd[1475]: time="2025-05-15T00:06:20.665641240Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 15 00:06:20.665665 containerd[1475]: time="2025-05-15T00:06:20.665662120Z" level=info msg="Start snapshots syncer"
May 15 00:06:20.665711 containerd[1475]: time="2025-05-15T00:06:20.665683640Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 15 00:06:20.666882 containerd[1475]: time="2025-05-15T00:06:20.666676240Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 15 00:06:20.666882 containerd[1475]: time="2025-05-15T00:06:20.666763240Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 15 00:06:20.667311 containerd[1475]: time="2025-05-15T00:06:20.667220160Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
May 15 00:06:20.668003 containerd[1475]: time="2025-05-15T00:06:20.667449640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
May 15 00:06:20.668003 containerd[1475]: time="2025-05-15T00:06:20.667496360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
May 15 00:06:20.668003 containerd[1475]: time="2025-05-15T00:06:20.667514920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
May 15 00:06:20.668003 containerd[1475]: time="2025-05-15T00:06:20.667530160Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
May 15 00:06:20.668003 containerd[1475]: time="2025-05-15T00:06:20.667544440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
May 15 00:06:20.668003 containerd[1475]: time="2025-05-15T00:06:20.667559480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
May 15 00:06:20.668003 containerd[1475]: time="2025-05-15T00:06:20.667574320Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
May 15 00:06:20.668003 containerd[1475]: time="2025-05-15T00:06:20.667614200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
May 15 00:06:20.668003 containerd[1475]: time="2025-05-15T00:06:20.667631760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
May 15 00:06:20.668003 containerd[1475]: time="2025-05-15T00:06:20.667645240Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
May 15 00:06:20.668003 containerd[1475]: time="2025-05-15T00:06:20.667695480Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 15 00:06:20.668003 containerd[1475]: time="2025-05-15T00:06:20.667711160Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 15 00:06:20.668003 containerd[1475]: time="2025-05-15T00:06:20.667724240Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 15 00:06:20.668250 containerd[1475]: time="2025-05-15T00:06:20.667738320Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 15 00:06:20.668250 containerd[1475]: time="2025-05-15T00:06:20.667759880Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
May 15 00:06:20.668250 containerd[1475]: time="2025-05-15T00:06:20.667776360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
May 15 00:06:20.668250 containerd[1475]: time="2025-05-15T00:06:20.667791960Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
May 15 00:06:20.668250 containerd[1475]: time="2025-05-15T00:06:20.667874360Z" level=info msg="runtime interface created"
May 15 00:06:20.668250 containerd[1475]: time="2025-05-15T00:06:20.667879880Z" level=info msg="created NRI interface"
May 15 00:06:20.668250 containerd[1475]: time="2025-05-15T00:06:20.667892440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
May 15 00:06:20.668250 containerd[1475]: time="2025-05-15T00:06:20.667906480Z" level=info msg="Connect containerd service"
May 15 00:06:20.668250 containerd[1475]: time="2025-05-15T00:06:20.667944480Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 15 00:06:20.670625 containerd[1475]: time="2025-05-15T00:06:20.668683880Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 15 00:06:20.673394 systemd[1]: issuegen.service: Deactivated successfully.
May 15 00:06:20.675050 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 15 00:06:20.677679 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 15 00:06:20.696483 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 15 00:06:20.699925 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 15 00:06:20.701905 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
May 15 00:06:20.703434 systemd[1]: Reached target getty.target - Login Prompts.
May 15 00:06:20.782656 containerd[1475]: time="2025-05-15T00:06:20.782573560Z" level=info msg="Start subscribing containerd event"
May 15 00:06:20.782763 containerd[1475]: time="2025-05-15T00:06:20.782668440Z" level=info msg="Start recovering state"
May 15 00:06:20.782785 containerd[1475]: time="2025-05-15T00:06:20.782766680Z" level=info msg="Start event monitor"
May 15 00:06:20.782785 containerd[1475]: time="2025-05-15T00:06:20.782782200Z" level=info msg="Start cni network conf syncer for default"
May 15 00:06:20.782819 containerd[1475]: time="2025-05-15T00:06:20.782789640Z" level=info msg="Start streaming server"
May 15 00:06:20.782819 containerd[1475]: time="2025-05-15T00:06:20.782798480Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
May 15 00:06:20.782819 containerd[1475]: time="2025-05-15T00:06:20.782805200Z" level=info msg="runtime interface starting up..."
May 15 00:06:20.782819 containerd[1475]: time="2025-05-15T00:06:20.782810600Z" level=info msg="starting plugins..."
May 15 00:06:20.782908 containerd[1475]: time="2025-05-15T00:06:20.782823680Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
May 15 00:06:20.783319 containerd[1475]: time="2025-05-15T00:06:20.783283240Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 15 00:06:20.783348 containerd[1475]: time="2025-05-15T00:06:20.783341560Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 15 00:06:20.783410 containerd[1475]: time="2025-05-15T00:06:20.783391680Z" level=info msg="containerd successfully booted in 0.136474s"
May 15 00:06:20.783502 systemd[1]: Started containerd.service - containerd container runtime.
May 15 00:06:20.791877 tar[1461]: linux-arm64/LICENSE
May 15 00:06:20.792002 tar[1461]: linux-arm64/README.md
May 15 00:06:20.811661 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 15 00:06:21.318143 systemd-networkd[1400]: eth0: Gained IPv6LL
May 15 00:06:21.322018 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 15 00:06:21.323370 systemd[1]: Reached target network-online.target - Network is Online.
May 15 00:06:21.325833 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
May 15 00:06:21.328026 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 00:06:21.329932 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 15 00:06:21.341594 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 15 00:06:21.351404 systemd[1]: Started sshd@0-10.0.0.138:22-10.0.0.1:33810.service - OpenSSH per-connection server daemon (10.0.0.1:33810).
May 15 00:06:21.353261 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 15 00:06:21.363454 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 15 00:06:21.364626 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
May 15 00:06:21.366059 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 15 00:06:21.427853 sshd[1556]: Accepted publickey for core from 10.0.0.1 port 33810 ssh2: RSA SHA256:JGpbUaP68o+c7lzfweYm3nii0Q3Tlu54nJ930R9/dFs
May 15 00:06:21.429562 sshd-session[1556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:06:21.439497 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 15 00:06:21.441546 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 15 00:06:21.444432 systemd-logind[1452]: New session 1 of user core.
May 15 00:06:21.465086 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 15 00:06:21.469781 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 15 00:06:21.499900 (systemd)[1567]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 15 00:06:21.502128 systemd-logind[1452]: New session c1 of user core.
May 15 00:06:21.606439 systemd[1567]: Queued start job for default target default.target.
May 15 00:06:21.628994 systemd[1567]: Created slice app.slice - User Application Slice.
May 15 00:06:21.629021 systemd[1567]: Reached target paths.target - Paths.
May 15 00:06:21.629059 systemd[1567]: Reached target timers.target - Timers.
May 15 00:06:21.630248 systemd[1567]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 15 00:06:21.639144 systemd[1567]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 15 00:06:21.639194 systemd[1567]: Reached target sockets.target - Sockets.
May 15 00:06:21.639228 systemd[1567]: Reached target basic.target - Basic System.
May 15 00:06:21.639253 systemd[1567]: Reached target default.target - Main User Target.
May 15 00:06:21.639276 systemd[1567]: Startup finished in 131ms.
May 15 00:06:21.639506 systemd[1]: Started user@500.service - User Manager for UID 500.
May 15 00:06:21.641745 systemd[1]: Started session-1.scope - Session 1 of User core.
May 15 00:06:21.711126 systemd[1]: Started sshd@1-10.0.0.138:22-10.0.0.1:33822.service - OpenSSH per-connection server daemon (10.0.0.1:33822).
May 15 00:06:21.757103 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 33822 ssh2: RSA SHA256:JGpbUaP68o+c7lzfweYm3nii0Q3Tlu54nJ930R9/dFs
May 15 00:06:21.758381 sshd-session[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:06:21.762500 systemd-logind[1452]: New session 2 of user core.
May 15 00:06:21.779199 systemd[1]: Started session-2.scope - Session 2 of User core.
May 15 00:06:21.830587 sshd[1580]: Connection closed by 10.0.0.1 port 33822
May 15 00:06:21.830897 sshd-session[1578]: pam_unix(sshd:session): session closed for user core
May 15 00:06:21.841107 systemd[1]: sshd@1-10.0.0.138:22-10.0.0.1:33822.service: Deactivated successfully.
May 15 00:06:21.843998 systemd[1]: session-2.scope: Deactivated successfully.
May 15 00:06:21.846184 systemd-logind[1452]: Session 2 logged out. Waiting for processes to exit.
May 15 00:06:21.847034 systemd[1]: Started sshd@2-10.0.0.138:22-10.0.0.1:33824.service - OpenSSH per-connection server daemon (10.0.0.1:33824).
May 15 00:06:21.849172 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 00:06:21.851110 systemd[1]: Reached target multi-user.target - Multi-User System.
May 15 00:06:21.853882 (kubelet)[1590]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 15 00:06:21.856151 systemd[1]: Startup finished in 558ms (kernel) + 9.802s (initrd) + 3.392s (userspace) = 13.753s.
May 15 00:06:21.856860 systemd-logind[1452]: Removed session 2.
May 15 00:06:21.890393 sshd[1589]: Accepted publickey for core from 10.0.0.1 port 33824 ssh2: RSA SHA256:JGpbUaP68o+c7lzfweYm3nii0Q3Tlu54nJ930R9/dFs
May 15 00:06:21.891651 sshd-session[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:06:21.896322 systemd-logind[1452]: New session 3 of user core.
May 15 00:06:21.906168 systemd[1]: Started session-3.scope - Session 3 of User core.
May 15 00:06:21.957993 sshd[1598]: Connection closed by 10.0.0.1 port 33824
May 15 00:06:21.958480 sshd-session[1589]: pam_unix(sshd:session): session closed for user core
May 15 00:06:21.964171 systemd[1]: sshd@2-10.0.0.138:22-10.0.0.1:33824.service: Deactivated successfully.
May 15 00:06:21.965779 systemd[1]: session-3.scope: Deactivated successfully.
May 15 00:06:21.968470 systemd-logind[1452]: Session 3 logged out. Waiting for processes to exit.
May 15 00:06:21.969258 systemd-logind[1452]: Removed session 3.
May 15 00:06:22.290188 kubelet[1590]: E0515 00:06:22.290067 1590 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 15 00:06:22.293171 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 15 00:06:22.293307 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 00:06:22.295061 systemd[1]: kubelet.service: Consumed 790ms CPU time, 235.8M memory peak.
May 15 00:06:31.908845 systemd[1]: Started sshd@3-10.0.0.138:22-10.0.0.1:33862.service - OpenSSH per-connection server daemon (10.0.0.1:33862).
May 15 00:06:31.959658 sshd[1611]: Accepted publickey for core from 10.0.0.1 port 33862 ssh2: RSA SHA256:JGpbUaP68o+c7lzfweYm3nii0Q3Tlu54nJ930R9/dFs
May 15 00:06:31.960902 sshd-session[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:06:31.964654 systemd-logind[1452]: New session 4 of user core.
May 15 00:06:31.979108 systemd[1]: Started session-4.scope - Session 4 of User core.
May 15 00:06:32.030039 sshd[1613]: Connection closed by 10.0.0.1 port 33862
May 15 00:06:32.029908 sshd-session[1611]: pam_unix(sshd:session): session closed for user core
May 15 00:06:32.039878 systemd[1]: sshd@3-10.0.0.138:22-10.0.0.1:33862.service: Deactivated successfully.
May 15 00:06:32.041183 systemd[1]: session-4.scope: Deactivated successfully.
May 15 00:06:32.044068 systemd-logind[1452]: Session 4 logged out. Waiting for processes to exit.
May 15 00:06:32.044746 systemd[1]: Started sshd@4-10.0.0.138:22-10.0.0.1:33874.service - OpenSSH per-connection server daemon (10.0.0.1:33874).
May 15 00:06:32.045872 systemd-logind[1452]: Removed session 4.
May 15 00:06:32.097218 sshd[1618]: Accepted publickey for core from 10.0.0.1 port 33874 ssh2: RSA SHA256:JGpbUaP68o+c7lzfweYm3nii0Q3Tlu54nJ930R9/dFs
May 15 00:06:32.098372 sshd-session[1618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:06:32.102483 systemd-logind[1452]: New session 5 of user core.
May 15 00:06:32.109141 systemd[1]: Started session-5.scope - Session 5 of User core.
May 15 00:06:32.156558 sshd[1621]: Connection closed by 10.0.0.1 port 33874
May 15 00:06:32.157026 sshd-session[1618]: pam_unix(sshd:session): session closed for user core
May 15 00:06:32.173884 systemd[1]: sshd@4-10.0.0.138:22-10.0.0.1:33874.service: Deactivated successfully.
May 15 00:06:32.175304 systemd[1]: session-5.scope: Deactivated successfully.
May 15 00:06:32.176099 systemd-logind[1452]: Session 5 logged out. Waiting for processes to exit.
May 15 00:06:32.177447 systemd-logind[1452]: Removed session 5.
May 15 00:06:32.178569 systemd[1]: Started sshd@5-10.0.0.138:22-10.0.0.1:33876.service - OpenSSH per-connection server daemon (10.0.0.1:33876).
May 15 00:06:32.236189 sshd[1626]: Accepted publickey for core from 10.0.0.1 port 33876 ssh2: RSA SHA256:JGpbUaP68o+c7lzfweYm3nii0Q3Tlu54nJ930R9/dFs
May 15 00:06:32.237443 sshd-session[1626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:06:32.242695 systemd-logind[1452]: New session 6 of user core.
May 15 00:06:32.250138 systemd[1]: Started session-6.scope - Session 6 of User core.
May 15 00:06:32.300877 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 15 00:06:32.302187 sshd[1629]: Connection closed by 10.0.0.1 port 33876
May 15 00:06:32.302517 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 00:06:32.303108 sshd-session[1626]: pam_unix(sshd:session): session closed for user core
May 15 00:06:32.314091 systemd[1]: sshd@5-10.0.0.138:22-10.0.0.1:33876.service: Deactivated successfully.
May 15 00:06:32.316099 systemd[1]: session-6.scope: Deactivated successfully.
May 15 00:06:32.317394 systemd-logind[1452]: Session 6 logged out. Waiting for processes to exit.
May 15 00:06:32.319283 systemd[1]: Started sshd@6-10.0.0.138:22-10.0.0.1:33892.service - OpenSSH per-connection server daemon (10.0.0.1:33892).
May 15 00:06:32.321230 systemd-logind[1452]: Removed session 6.
May 15 00:06:32.362328 sshd[1637]: Accepted publickey for core from 10.0.0.1 port 33892 ssh2: RSA SHA256:JGpbUaP68o+c7lzfweYm3nii0Q3Tlu54nJ930R9/dFs
May 15 00:06:32.363619 sshd-session[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:06:32.368029 systemd-logind[1452]: New session 7 of user core.
May 15 00:06:32.378129 systemd[1]: Started session-7.scope - Session 7 of User core.
May 15 00:06:32.438217 sudo[1641]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 15 00:06:32.438511 sudo[1641]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 15 00:06:32.454920 sudo[1641]: pam_unix(sudo:session): session closed for user root
May 15 00:06:32.457000 sshd[1640]: Connection closed by 10.0.0.1 port 33892
May 15 00:06:32.456830 sshd-session[1637]: pam_unix(sshd:session): session closed for user core
May 15 00:06:32.463548 systemd[1]: sshd@6-10.0.0.138:22-10.0.0.1:33892.service: Deactivated successfully.
May 15 00:06:32.466943 systemd[1]: session-7.scope: Deactivated successfully.
May 15 00:06:32.468281 systemd-logind[1452]: Session 7 logged out. Waiting for processes to exit.
May 15 00:06:32.470039 systemd[1]: Started sshd@7-10.0.0.138:22-10.0.0.1:33904.service - OpenSSH per-connection server daemon (10.0.0.1:33904).
May 15 00:06:32.471743 systemd-logind[1452]: Removed session 7.
May 15 00:06:32.474187 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 00:06:32.477587 (kubelet)[1653]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 15 00:06:32.512038 sshd[1650]: Accepted publickey for core from 10.0.0.1 port 33904 ssh2: RSA SHA256:JGpbUaP68o+c7lzfweYm3nii0Q3Tlu54nJ930R9/dFs
May 15 00:06:32.513464 sshd-session[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:06:32.513855 kubelet[1653]: E0515 00:06:32.513665 1653 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 15 00:06:32.516918 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 15 00:06:32.517081 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 00:06:32.517366 systemd[1]: kubelet.service: Consumed 133ms CPU time, 96.1M memory peak.
May 15 00:06:32.519719 systemd-logind[1452]: New session 8 of user core.
May 15 00:06:32.529183 systemd[1]: Started session-8.scope - Session 8 of User core.
May 15 00:06:32.581183 sudo[1665]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 15 00:06:32.581460 sudo[1665]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 15 00:06:32.584695 sudo[1665]: pam_unix(sudo:session): session closed for user root
May 15 00:06:32.589491 sudo[1664]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 15 00:06:32.589844 sudo[1664]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 15 00:06:32.598399 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 15 00:06:32.633719 augenrules[1687]: No rules
May 15 00:06:32.634884 systemd[1]: audit-rules.service: Deactivated successfully.
May 15 00:06:32.636052 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 15 00:06:32.636991 sudo[1664]: pam_unix(sudo:session): session closed for user root
May 15 00:06:32.638136 sshd[1663]: Connection closed by 10.0.0.1 port 33904
May 15 00:06:32.639222 sshd-session[1650]: pam_unix(sshd:session): session closed for user core
May 15 00:06:32.650809 systemd[1]: Started sshd@8-10.0.0.138:22-10.0.0.1:44158.service - OpenSSH per-connection server daemon (10.0.0.1:44158).
May 15 00:06:32.651394 systemd[1]: sshd@7-10.0.0.138:22-10.0.0.1:33904.service: Deactivated successfully.
May 15 00:06:32.653012 systemd[1]: session-8.scope: Deactivated successfully.
May 15 00:06:32.656324 systemd-logind[1452]: Session 8 logged out. Waiting for processes to exit.
May 15 00:06:32.664318 systemd-logind[1452]: Removed session 8.
May 15 00:06:32.701223 sshd[1693]: Accepted publickey for core from 10.0.0.1 port 44158 ssh2: RSA SHA256:JGpbUaP68o+c7lzfweYm3nii0Q3Tlu54nJ930R9/dFs
May 15 00:06:32.702952 sshd-session[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:06:32.707069 systemd-logind[1452]: New session 9 of user core.
May 15 00:06:32.729163 systemd[1]: Started session-9.scope - Session 9 of User core.
May 15 00:06:32.780675 sudo[1699]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 15 00:06:32.780943 sudo[1699]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 15 00:06:33.185396 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 15 00:06:33.200296 (dockerd)[1720]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 15 00:06:33.490549 dockerd[1720]: time="2025-05-15T00:06:33.490413817Z" level=info msg="Starting up"
May 15 00:06:33.491836 dockerd[1720]: time="2025-05-15T00:06:33.491804578Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 15 00:06:33.593451 dockerd[1720]: time="2025-05-15T00:06:33.593397834Z" level=info msg="Loading containers: start."
May 15 00:06:33.746024 kernel: Initializing XFRM netlink socket
May 15 00:06:33.814501 systemd-networkd[1400]: docker0: Link UP
May 15 00:06:33.885103 dockerd[1720]: time="2025-05-15T00:06:33.885047335Z" level=info msg="Loading containers: done."
May 15 00:06:33.897775 dockerd[1720]: time="2025-05-15T00:06:33.897354957Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 15 00:06:33.897775 dockerd[1720]: time="2025-05-15T00:06:33.897444330Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1
May 15 00:06:33.897775 dockerd[1720]: time="2025-05-15T00:06:33.897612995Z" level=info msg="Daemon has completed initialization"
May 15 00:06:33.926047 dockerd[1720]: time="2025-05-15T00:06:33.925958518Z" level=info msg="API listen on /run/docker.sock"
May 15 00:06:33.926302 systemd[1]: Started docker.service - Docker Application Container Engine.
May 15 00:06:34.668186 containerd[1475]: time="2025-05-15T00:06:34.668148925Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\""
May 15 00:06:35.288262 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1391851999.mount: Deactivated successfully.
May 15 00:06:36.653010 containerd[1475]: time="2025-05-15T00:06:36.652949721Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:06:36.653816 containerd[1475]: time="2025-05-15T00:06:36.653455265Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=25554610"
May 15 00:06:36.654459 containerd[1475]: time="2025-05-15T00:06:36.654426401Z" level=info msg="ImageCreate event name:\"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:06:36.657177 containerd[1475]: time="2025-05-15T00:06:36.657141443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:06:36.658136 containerd[1475]: time="2025-05-15T00:06:36.658098902Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"25551408\" in 1.989907806s"
May 15 00:06:36.658177 containerd[1475]: time="2025-05-15T00:06:36.658141885Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\""
May 15 00:06:36.658783 containerd[1475]: time="2025-05-15T00:06:36.658711185Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\""
May 15 00:06:38.467852 containerd[1475]: time="2025-05-15T00:06:38.467791368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:06:38.469606 containerd[1475]: time="2025-05-15T00:06:38.469529328Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=22458980"
May 15 00:06:38.470600 containerd[1475]: time="2025-05-15T00:06:38.470567373Z" level=info msg="ImageCreate event name:\"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:06:38.474089 containerd[1475]: time="2025-05-15T00:06:38.474018162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:06:38.475129 containerd[1475]: time="2025-05-15T00:06:38.475031994Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"23900539\" in 1.816288469s"
May 15 00:06:38.475129 containerd[1475]: time="2025-05-15T00:06:38.475070845Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\""
May 15 00:06:38.475761 containerd[1475]: time="2025-05-15T00:06:38.475517072Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\""
May 15 00:06:39.742873 containerd[1475]: time="2025-05-15T00:06:39.742798897Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:06:39.743609 containerd[1475]: time="2025-05-15T00:06:39.743543098Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=17125815"
May 15 00:06:39.744109 containerd[1475]: time="2025-05-15T00:06:39.744076893Z" level=info msg="ImageCreate event name:\"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:06:39.747438 containerd[1475]: time="2025-05-15T00:06:39.747402538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:06:39.748304 containerd[1475]: time="2025-05-15T00:06:39.748263751Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"18567392\" in 1.272715325s"
May 15 00:06:39.748342 containerd[1475]: time="2025-05-15T00:06:39.748311665Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\""
May 15 00:06:39.748771 containerd[1475]: time="2025-05-15T00:06:39.748751308Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\""
May 15 00:06:40.718675 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount499047447.mount: Deactivated successfully.
May 15 00:06:41.067927 containerd[1475]: time="2025-05-15T00:06:41.067766992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:06:41.068484 containerd[1475]: time="2025-05-15T00:06:41.068422995Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=26871919"
May 15 00:06:41.069196 containerd[1475]: time="2025-05-15T00:06:41.069160489Z" level=info msg="ImageCreate event name:\"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:06:41.072532 containerd[1475]: time="2025-05-15T00:06:41.072499008Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:06:41.073499 containerd[1475]: time="2025-05-15T00:06:41.073467288Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"26870936\" in 1.324684061s"
May 15 00:06:41.073578 containerd[1475]: time="2025-05-15T00:06:41.073503604Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\""
May 15 00:06:41.073885 containerd[1475]: time="2025-05-15T00:06:41.073860339Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 15 00:06:41.578551 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3725042894.mount: Deactivated successfully.
May 15 00:06:42.303476 containerd[1475]: time="2025-05-15T00:06:42.303428084Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:06:42.304762 containerd[1475]: time="2025-05-15T00:06:42.304489942Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
May 15 00:06:42.305537 containerd[1475]: time="2025-05-15T00:06:42.305501070Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:06:42.310914 containerd[1475]: time="2025-05-15T00:06:42.308398428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:06:42.310914 containerd[1475]: time="2025-05-15T00:06:42.310388793Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.236427327s"
May 15 00:06:42.310914 containerd[1475]: time="2025-05-15T00:06:42.310418130Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
May 15 00:06:42.311078 containerd[1475]: time="2025-05-15T00:06:42.310924632Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 15 00:06:42.767488 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 15 00:06:42.768898 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 00:06:42.786599 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount662998063.mount: Deactivated successfully.
May 15 00:06:42.791476 containerd[1475]: time="2025-05-15T00:06:42.791433423Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 15 00:06:42.791942 containerd[1475]: time="2025-05-15T00:06:42.791885163Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
May 15 00:06:42.792888 containerd[1475]: time="2025-05-15T00:06:42.792817302Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 15 00:06:42.794766 containerd[1475]: time="2025-05-15T00:06:42.794733388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 15 00:06:42.795519 containerd[1475]: time="2025-05-15T00:06:42.795483003Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 484.405303ms"
May 15 00:06:42.795519 containerd[1475]: time="2025-05-15T00:06:42.795518726Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
May 15 00:06:42.796061 containerd[1475]: time="2025-05-15T00:06:42.795937578Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
May 15 00:06:42.883688 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 00:06:42.887369 (kubelet)[2054]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 15 00:06:42.952172 kubelet[2054]: E0515 00:06:42.952107 2054 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 15 00:06:42.954606 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 15 00:06:42.954748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 00:06:42.955078 systemd[1]: kubelet.service: Consumed 130ms CPU time, 94.9M memory peak.
May 15 00:06:43.361765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount101811102.mount: Deactivated successfully.
May 15 00:06:46.246681 containerd[1475]: time="2025-05-15T00:06:46.246632520Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:06:46.247300 containerd[1475]: time="2025-05-15T00:06:46.247247371Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467"
May 15 00:06:46.248805 containerd[1475]: time="2025-05-15T00:06:46.248755486Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:06:46.252096 containerd[1475]: time="2025-05-15T00:06:46.252037433Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:06:46.252711 containerd[1475]: time="2025-05-15T00:06:46.252627285Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.456657931s"
May 15 00:06:46.252711 containerd[1475]: time="2025-05-15T00:06:46.252664583Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
May 15 00:06:51.068259 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 00:06:51.068987 systemd[1]: kubelet.service: Consumed 130ms CPU time, 94.9M memory peak.
May 15 00:06:51.087046 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 00:06:51.111064 systemd[1]: Reload requested from client PID 2146 ('systemctl') (unit session-9.scope)...
May 15 00:06:51.111087 systemd[1]: Reloading...
May 15 00:06:51.180228 zram_generator::config[2185]: No configuration found.
May 15 00:06:51.320931 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 00:06:51.396820 systemd[1]: Reloading finished in 285 ms.
May 15 00:06:51.451442 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 00:06:51.454427 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 00:06:51.455424 systemd[1]: kubelet.service: Deactivated successfully.
May 15 00:06:51.455637 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 00:06:51.455690 systemd[1]: kubelet.service: Consumed 88ms CPU time, 82.4M memory peak.
May 15 00:06:51.457186 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 00:06:51.567236 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 00:06:51.577285 (kubelet)[2236]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 15 00:06:51.617461 kubelet[2236]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 00:06:51.617461 kubelet[2236]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 15 00:06:51.617461 kubelet[2236]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 00:06:51.617912 kubelet[2236]: I0515 00:06:51.617620 2236 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 00:06:52.255872 kubelet[2236]: I0515 00:06:52.255810 2236 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 15 00:06:52.255872 kubelet[2236]: I0515 00:06:52.255849 2236 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 00:06:52.257014 kubelet[2236]: I0515 00:06:52.256346 2236 server.go:929] "Client rotation is on, will bootstrap in background" May 15 00:06:52.314289 kubelet[2236]: E0515 00:06:52.314243 2236 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.138:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" May 15 00:06:52.315627 kubelet[2236]: I0515 00:06:52.315600 2236 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 00:06:52.325823 kubelet[2236]: I0515 00:06:52.325738 2236 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 15 00:06:52.329558 kubelet[2236]: I0515 00:06:52.329526 2236 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 00:06:52.330430 kubelet[2236]: I0515 00:06:52.330391 2236 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 15 00:06:52.330615 kubelet[2236]: I0515 00:06:52.330576 2236 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 00:06:52.330790 kubelet[2236]: I0515 00:06:52.330607 2236 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 15 00:06:52.330892 kubelet[2236]: I0515 00:06:52.330862 2236 topology_manager.go:138] "Creating topology manager with none policy" May 15 00:06:52.330892 kubelet[2236]: I0515 00:06:52.330872 2236 container_manager_linux.go:300] "Creating device plugin manager" May 15 00:06:52.331121 kubelet[2236]: I0515 00:06:52.331094 2236 state_mem.go:36] "Initialized new in-memory state store" May 15 00:06:52.332864 kubelet[2236]: I0515 00:06:52.332831 2236 kubelet.go:408] "Attempting to sync node with API server" May 15 00:06:52.332864 kubelet[2236]: I0515 00:06:52.332863 2236 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 00:06:52.332989 kubelet[2236]: I0515 00:06:52.332967 2236 kubelet.go:314] "Adding apiserver pod source" May 15 00:06:52.333027 kubelet[2236]: I0515 00:06:52.332996 2236 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 00:06:52.336013 kubelet[2236]: I0515 00:06:52.335987 2236 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 15 00:06:52.340401 kubelet[2236]: I0515 00:06:52.340293 2236 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 00:06:52.341745 kubelet[2236]: W0515 00:06:52.341618 2236 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused May 15 00:06:52.342060 kubelet[2236]: E0515 00:06:52.341693 2236 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" May 15
00:06:52.342060 kubelet[2236]: W0515 00:06:52.341963 2236 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.138:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused May 15 00:06:52.342060 kubelet[2236]: E0515 00:06:52.342034 2236 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.138:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" May 15 00:06:52.345001 kubelet[2236]: W0515 00:06:52.344951 2236 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 15 00:06:52.346058 kubelet[2236]: I0515 00:06:52.345848 2236 server.go:1269] "Started kubelet" May 15 00:06:52.346439 kubelet[2236]: I0515 00:06:52.346143 2236 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 15 00:06:52.347122 kubelet[2236]: I0515 00:06:52.346774 2236 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 00:06:52.348247 kubelet[2236]: I0515 00:06:52.348224 2236 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 00:06:52.352690 kubelet[2236]: I0515 00:06:52.350918 2236 server.go:460] "Adding debug handlers to kubelet server" May 15 00:06:52.352690 kubelet[2236]: I0515 00:06:52.350963 2236 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 00:06:52.352690 kubelet[2236]: I0515 00:06:52.351127 2236 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 00:06:52.352690 kubelet[2236]: I0515 
00:06:52.352089 2236 volume_manager.go:289] "Starting Kubelet Volume Manager" May 15 00:06:52.352856 kubelet[2236]: I0515 00:06:52.352826 2236 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 15 00:06:52.353248 kubelet[2236]: I0515 00:06:52.352941 2236 reconciler.go:26] "Reconciler: start to sync state" May 15 00:06:52.353248 kubelet[2236]: I0515 00:06:52.353114 2236 factory.go:221] Registration of the systemd container factory successfully May 15 00:06:52.353248 kubelet[2236]: I0515 00:06:52.353217 2236 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 00:06:52.353675 kubelet[2236]: W0515 00:06:52.353625 2236 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused May 15 00:06:52.353735 kubelet[2236]: E0515 00:06:52.353682 2236 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" May 15 00:06:52.354099 kubelet[2236]: E0515 00:06:52.351232 2236 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.138:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.138:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f8aa2d16ae39a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 00:06:52.345820058 +0000 UTC m=+0.765174095,LastTimestamp:2025-05-15 00:06:52.345820058 +0000 UTC m=+0.765174095,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 15 00:06:52.354505 kubelet[2236]: E0515 00:06:52.354472 2236 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.138:6443: connect: connection refused" interval="200ms" May 15 00:06:52.357597 kubelet[2236]: I0515 00:06:52.354618 2236 factory.go:221] Registration of the containerd container factory successfully May 15 00:06:52.357597 kubelet[2236]: E0515 00:06:52.357178 2236 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:06:52.360457 kubelet[2236]: E0515 00:06:52.360416 2236 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 00:06:52.370383 kubelet[2236]: I0515 00:06:52.369786 2236 cpu_manager.go:214] "Starting CPU manager" policy="none" May 15 00:06:52.370383 kubelet[2236]: I0515 00:06:52.369807 2236 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 15 00:06:52.370383 kubelet[2236]: I0515 00:06:52.369829 2236 state_mem.go:36] "Initialized new in-memory state store" May 15 00:06:52.374642 kubelet[2236]: I0515 00:06:52.373592 2236 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 00:06:52.374765 kubelet[2236]: I0515 00:06:52.374656 2236 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 15 00:06:52.374765 kubelet[2236]: I0515 00:06:52.374684 2236 status_manager.go:217] "Starting to sync pod status with apiserver" May 15 00:06:52.374765 kubelet[2236]: I0515 00:06:52.374702 2236 kubelet.go:2321] "Starting kubelet main sync loop" May 15 00:06:52.374765 kubelet[2236]: E0515 00:06:52.374747 2236 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 00:06:52.375574 kubelet[2236]: W0515 00:06:52.375318 2236 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused May 15 00:06:52.375574 kubelet[2236]: E0515 00:06:52.375380 2236 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" May 15 00:06:52.438035 kubelet[2236]: I0515 00:06:52.438004 2236 policy_none.go:49] "None policy: Start" May 15 00:06:52.439011 kubelet[2236]: I0515 00:06:52.438994 2236 memory_manager.go:170] "Starting memorymanager" policy="None" May 15 00:06:52.439067 kubelet[2236]: I0515 00:06:52.439021 2236 state_mem.go:35] "Initializing new in-memory state store" May 15 00:06:52.445632 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 15 00:06:52.457994 kubelet[2236]: E0515 00:06:52.457949 2236 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:06:52.458633 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
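The HardEvictionThresholds array in the nodeConfig dump earlier encodes the kubelet's default eviction signals in a compact JSON form. A small sketch decoding that array into readable "signal < limit" pairs, using the exact JSON fragment from the log line:

```python
import json

# HardEvictionThresholds exactly as dumped in the kubelet's nodeConfig log line.
thresholds_json = '''[
 {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},
 {"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},
 {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},
 {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},
 {"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}]'''

def describe(threshold: dict) -> str:
    """Render one eviction threshold as 'signal < limit' (absolute quantity or percentage)."""
    value = threshold["Value"]
    limit = value["Quantity"] or f'{value["Percentage"]:.0%}'
    return f'{threshold["Signal"]} < {limit}'

defaults = [describe(t) for t in json.loads(thresholds_json)]
print(defaults)
# → e.g. 'memory.available < 100Mi', 'nodefs.available < 10%', ...
```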
May 15 00:06:52.461745 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 15 00:06:52.474069 kubelet[2236]: I0515 00:06:52.473959 2236 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 00:06:52.474559 kubelet[2236]: I0515 00:06:52.474178 2236 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 00:06:52.474559 kubelet[2236]: I0515 00:06:52.474189 2236 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 00:06:52.474559 kubelet[2236]: I0515 00:06:52.474399 2236 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 00:06:52.475902 kubelet[2236]: E0515 00:06:52.475876 2236 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 15 00:06:52.482909 systemd[1]: Created slice kubepods-burstable-pod4475d19ca36dbc8d8933dafea2ca3886.slice - libcontainer container kubepods-burstable-pod4475d19ca36dbc8d8933dafea2ca3886.slice. May 15 00:06:52.499700 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice - libcontainer container kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice. May 15 00:06:52.519662 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice - libcontainer container kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice. 
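The controller.go:145 "Failed to ensure lease exists, will retry" errors show the retry interval doubling while the API server stays unreachable: 200ms in the first entry, then 400ms and 800ms in later ones. A minimal sketch of that doubling backoff, assuming a generic cap — the cap value and attempt count are illustrative, not taken from the log:

```python
def lease_retry_intervals(initial_ms: int = 200, factor: int = 2,
                          cap_ms: int = 7000, attempts: int = 4):
    """Yield successive retry intervals, doubling each failure up to a cap."""
    interval = initial_ms
    for _ in range(attempts):
        yield interval
        interval = min(interval * factor, cap_ms)

# The first three values match the intervals seen in the log entries.
print(list(lease_retry_intervals(attempts=3)))  # → [200, 400, 800]
```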
May 15 00:06:52.555861 kubelet[2236]: E0515 00:06:52.555799 2236 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.138:6443: connect: connection refused" interval="400ms" May 15 00:06:52.579884 kubelet[2236]: I0515 00:06:52.579664 2236 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 00:06:52.580258 kubelet[2236]: E0515 00:06:52.580224 2236 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.138:6443/api/v1/nodes\": dial tcp 10.0.0.138:6443: connect: connection refused" node="localhost" May 15 00:06:52.653804 kubelet[2236]: I0515 00:06:52.653739 2236 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4475d19ca36dbc8d8933dafea2ca3886-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4475d19ca36dbc8d8933dafea2ca3886\") " pod="kube-system/kube-apiserver-localhost" May 15 00:06:52.653804 kubelet[2236]: I0515 00:06:52.653788 2236 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4475d19ca36dbc8d8933dafea2ca3886-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4475d19ca36dbc8d8933dafea2ca3886\") " pod="kube-system/kube-apiserver-localhost" May 15 00:06:52.653804 kubelet[2236]: I0515 00:06:52.653814 2236 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:06:52.654234 kubelet[2236]: I0515 
00:06:52.653860 2236 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 15 00:06:52.654234 kubelet[2236]: I0515 00:06:52.653899 2236 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4475d19ca36dbc8d8933dafea2ca3886-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4475d19ca36dbc8d8933dafea2ca3886\") " pod="kube-system/kube-apiserver-localhost" May 15 00:06:52.654234 kubelet[2236]: I0515 00:06:52.653925 2236 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:06:52.654234 kubelet[2236]: I0515 00:06:52.653941 2236 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:06:52.654234 kubelet[2236]: I0515 00:06:52.653957 2236 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:06:52.654333 kubelet[2236]: I0515 00:06:52.654006 2236 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:06:52.781478 kubelet[2236]: I0515 00:06:52.781371 2236 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 00:06:52.781798 kubelet[2236]: E0515 00:06:52.781753 2236 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.138:6443/api/v1/nodes\": dial tcp 10.0.0.138:6443: connect: connection refused" node="localhost" May 15 00:06:52.800189 kubelet[2236]: E0515 00:06:52.800137 2236 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:06:52.800820 containerd[1475]: time="2025-05-15T00:06:52.800781493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4475d19ca36dbc8d8933dafea2ca3886,Namespace:kube-system,Attempt:0,}" May 15 00:06:52.818179 kubelet[2236]: E0515 00:06:52.818144 2236 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:06:52.818733 containerd[1475]: time="2025-05-15T00:06:52.818698422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}" May 15 00:06:52.819823 containerd[1475]: time="2025-05-15T00:06:52.819740918Z" level=info msg="connecting to shim 0143379fbdc1eb070a35ece94999ddd3ab57152478621ae14a360942b33a8bb5" address="unix:///run/containerd/s/0fab2e189735b8b45d421b320cde0d08eb0cd362ea177219487ba1f9801c9632" namespace=k8s.io protocol=ttrpc 
version=3 May 15 00:06:52.822579 kubelet[2236]: E0515 00:06:52.822491 2236 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:06:52.822932 containerd[1475]: time="2025-05-15T00:06:52.822900649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}" May 15 00:06:52.844197 systemd[1]: Started cri-containerd-0143379fbdc1eb070a35ece94999ddd3ab57152478621ae14a360942b33a8bb5.scope - libcontainer container 0143379fbdc1eb070a35ece94999ddd3ab57152478621ae14a360942b33a8bb5. May 15 00:06:52.856556 containerd[1475]: time="2025-05-15T00:06:52.856488459Z" level=info msg="connecting to shim dd780b86f7942287d8acbce2b4c3aef5f3b08c05b040a0032349f6d2360a6bb5" address="unix:///run/containerd/s/c5d3af5fec8c0571be698577013d8c3554429fd74d21266069b1b202327925c9" namespace=k8s.io protocol=ttrpc version=3 May 15 00:06:52.859918 containerd[1475]: time="2025-05-15T00:06:52.859804892Z" level=info msg="connecting to shim 6d9c688a70d2d2e4e9644e8b92aae3525d170ffad4efe72bb845b230fba991d8" address="unix:///run/containerd/s/f25e16001cffbeb3b16761328220e7f2ce6afcbe21e4a8e0adc2fe5ae6cccf9a" namespace=k8s.io protocol=ttrpc version=3 May 15 00:06:52.881147 systemd[1]: Started cri-containerd-dd780b86f7942287d8acbce2b4c3aef5f3b08c05b040a0032349f6d2360a6bb5.scope - libcontainer container dd780b86f7942287d8acbce2b4c3aef5f3b08c05b040a0032349f6d2360a6bb5. 
May 15 00:06:52.887011 containerd[1475]: time="2025-05-15T00:06:52.886627905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4475d19ca36dbc8d8933dafea2ca3886,Namespace:kube-system,Attempt:0,} returns sandbox id \"0143379fbdc1eb070a35ece94999ddd3ab57152478621ae14a360942b33a8bb5\"" May 15 00:06:52.887926 systemd[1]: Started cri-containerd-6d9c688a70d2d2e4e9644e8b92aae3525d170ffad4efe72bb845b230fba991d8.scope - libcontainer container 6d9c688a70d2d2e4e9644e8b92aae3525d170ffad4efe72bb845b230fba991d8. May 15 00:06:52.889884 kubelet[2236]: E0515 00:06:52.889835 2236 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:06:52.892395 containerd[1475]: time="2025-05-15T00:06:52.892344092Z" level=info msg="CreateContainer within sandbox \"0143379fbdc1eb070a35ece94999ddd3ab57152478621ae14a360942b33a8bb5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 15 00:06:52.901118 containerd[1475]: time="2025-05-15T00:06:52.900138878Z" level=info msg="Container 988652d059638690e40b686e62e22d8006e50dd5be0f8f5af7d41cbb144076db: CDI devices from CRI Config.CDIDevices: []" May 15 00:06:52.913138 containerd[1475]: time="2025-05-15T00:06:52.913086492Z" level=info msg="CreateContainer within sandbox \"0143379fbdc1eb070a35ece94999ddd3ab57152478621ae14a360942b33a8bb5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"988652d059638690e40b686e62e22d8006e50dd5be0f8f5af7d41cbb144076db\"" May 15 00:06:52.914193 containerd[1475]: time="2025-05-15T00:06:52.914115923Z" level=info msg="StartContainer for \"988652d059638690e40b686e62e22d8006e50dd5be0f8f5af7d41cbb144076db\"" May 15 00:06:52.916442 containerd[1475]: time="2025-05-15T00:06:52.916410836Z" level=info msg="connecting to shim 988652d059638690e40b686e62e22d8006e50dd5be0f8f5af7d41cbb144076db" 
address="unix:///run/containerd/s/0fab2e189735b8b45d421b320cde0d08eb0cd362ea177219487ba1f9801c9632" protocol=ttrpc version=3 May 15 00:06:52.921569 containerd[1475]: time="2025-05-15T00:06:52.921521551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd780b86f7942287d8acbce2b4c3aef5f3b08c05b040a0032349f6d2360a6bb5\"" May 15 00:06:52.922303 kubelet[2236]: E0515 00:06:52.922270 2236 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:06:52.923914 containerd[1475]: time="2025-05-15T00:06:52.923877955Z" level=info msg="CreateContainer within sandbox \"dd780b86f7942287d8acbce2b4c3aef5f3b08c05b040a0032349f6d2360a6bb5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 15 00:06:52.928106 containerd[1475]: time="2025-05-15T00:06:52.928070992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d9c688a70d2d2e4e9644e8b92aae3525d170ffad4efe72bb845b230fba991d8\"" May 15 00:06:52.929255 kubelet[2236]: E0515 00:06:52.929229 2236 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:06:52.932192 containerd[1475]: time="2025-05-15T00:06:52.932090986Z" level=info msg="CreateContainer within sandbox \"6d9c688a70d2d2e4e9644e8b92aae3525d170ffad4efe72bb845b230fba991d8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 15 00:06:52.937296 containerd[1475]: time="2025-05-15T00:06:52.937261073Z" level=info msg="Container 59b07c7b3fa2eed6fa9a39841fd5a0b7f26d7991d112909c7712bf00939d7e10: CDI devices from CRI Config.CDIDevices: []" 
May 15 00:06:52.939243 containerd[1475]: time="2025-05-15T00:06:52.939081566Z" level=info msg="Container b4d91fe18c5665be8476013e3a464848b92f654276975c2e7a0a4be2c01ae13a: CDI devices from CRI Config.CDIDevices: []" May 15 00:06:52.943317 containerd[1475]: time="2025-05-15T00:06:52.943257862Z" level=info msg="CreateContainer within sandbox \"dd780b86f7942287d8acbce2b4c3aef5f3b08c05b040a0032349f6d2360a6bb5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"59b07c7b3fa2eed6fa9a39841fd5a0b7f26d7991d112909c7712bf00939d7e10\"" May 15 00:06:52.943316 systemd[1]: Started cri-containerd-988652d059638690e40b686e62e22d8006e50dd5be0f8f5af7d41cbb144076db.scope - libcontainer container 988652d059638690e40b686e62e22d8006e50dd5be0f8f5af7d41cbb144076db. May 15 00:06:52.943998 containerd[1475]: time="2025-05-15T00:06:52.943936411Z" level=info msg="StartContainer for \"59b07c7b3fa2eed6fa9a39841fd5a0b7f26d7991d112909c7712bf00939d7e10\"" May 15 00:06:52.945311 containerd[1475]: time="2025-05-15T00:06:52.945282043Z" level=info msg="connecting to shim 59b07c7b3fa2eed6fa9a39841fd5a0b7f26d7991d112909c7712bf00939d7e10" address="unix:///run/containerd/s/c5d3af5fec8c0571be698577013d8c3554429fd74d21266069b1b202327925c9" protocol=ttrpc version=3 May 15 00:06:52.950423 containerd[1475]: time="2025-05-15T00:06:52.950280965Z" level=info msg="CreateContainer within sandbox \"6d9c688a70d2d2e4e9644e8b92aae3525d170ffad4efe72bb845b230fba991d8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b4d91fe18c5665be8476013e3a464848b92f654276975c2e7a0a4be2c01ae13a\"" May 15 00:06:52.951008 containerd[1475]: time="2025-05-15T00:06:52.950944291Z" level=info msg="StartContainer for \"b4d91fe18c5665be8476013e3a464848b92f654276975c2e7a0a4be2c01ae13a\"" May 15 00:06:52.952088 containerd[1475]: time="2025-05-15T00:06:52.952052193Z" level=info msg="connecting to shim b4d91fe18c5665be8476013e3a464848b92f654276975c2e7a0a4be2c01ae13a" 
address="unix:///run/containerd/s/f25e16001cffbeb3b16761328220e7f2ce6afcbe21e4a8e0adc2fe5ae6cccf9a" protocol=ttrpc version=3 May 15 00:06:52.956927 kubelet[2236]: E0515 00:06:52.956882 2236 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.138:6443: connect: connection refused" interval="800ms" May 15 00:06:52.965186 systemd[1]: Started cri-containerd-59b07c7b3fa2eed6fa9a39841fd5a0b7f26d7991d112909c7712bf00939d7e10.scope - libcontainer container 59b07c7b3fa2eed6fa9a39841fd5a0b7f26d7991d112909c7712bf00939d7e10. May 15 00:06:52.967671 systemd[1]: Started cri-containerd-b4d91fe18c5665be8476013e3a464848b92f654276975c2e7a0a4be2c01ae13a.scope - libcontainer container b4d91fe18c5665be8476013e3a464848b92f654276975c2e7a0a4be2c01ae13a. May 15 00:06:53.011783 containerd[1475]: time="2025-05-15T00:06:53.008766888Z" level=info msg="StartContainer for \"988652d059638690e40b686e62e22d8006e50dd5be0f8f5af7d41cbb144076db\" returns successfully" May 15 00:06:53.039760 containerd[1475]: time="2025-05-15T00:06:53.039656557Z" level=info msg="StartContainer for \"b4d91fe18c5665be8476013e3a464848b92f654276975c2e7a0a4be2c01ae13a\" returns successfully" May 15 00:06:53.056110 containerd[1475]: time="2025-05-15T00:06:53.055312207Z" level=info msg="StartContainer for \"59b07c7b3fa2eed6fa9a39841fd5a0b7f26d7991d112909c7712bf00939d7e10\" returns successfully" May 15 00:06:53.183808 kubelet[2236]: I0515 00:06:53.183768 2236 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 00:06:53.184215 kubelet[2236]: E0515 00:06:53.184185 2236 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.138:6443/api/v1/nodes\": dial tcp 10.0.0.138:6443: connect: connection refused" node="localhost" May 15 00:06:53.257448 kubelet[2236]: W0515 00:06:53.257361 2236 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused May 15 00:06:53.257448 kubelet[2236]: E0515 00:06:53.257442 2236 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" May 15 00:06:53.379925 kubelet[2236]: E0515 00:06:53.379784 2236 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:06:53.381598 kubelet[2236]: E0515 00:06:53.381568 2236 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:06:53.383456 kubelet[2236]: E0515 00:06:53.383403 2236 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:06:53.985291 kubelet[2236]: I0515 00:06:53.985254 2236 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 00:06:54.385248 kubelet[2236]: E0515 00:06:54.385182 2236 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:06:55.157965 kubelet[2236]: E0515 00:06:55.157885 2236 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 15 00:06:55.224912 kubelet[2236]: I0515 00:06:55.224865 2236 kubelet_node_status.go:75] 
"Successfully registered node" node="localhost" May 15 00:06:55.335293 kubelet[2236]: I0515 00:06:55.335241 2236 apiserver.go:52] "Watching apiserver" May 15 00:06:55.353484 kubelet[2236]: I0515 00:06:55.353439 2236 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 15 00:06:55.391608 kubelet[2236]: E0515 00:06:55.391564 2236 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 15 00:06:55.391764 kubelet[2236]: E0515 00:06:55.391748 2236 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:06:56.146042 kubelet[2236]: E0515 00:06:56.145900 2236 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:06:56.388248 kubelet[2236]: E0515 00:06:56.388140 2236 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:06:57.042855 systemd[1]: Reload requested from client PID 2509 ('systemctl') (unit session-9.scope)... May 15 00:06:57.042872 systemd[1]: Reloading... May 15 00:06:57.115012 zram_generator::config[2556]: No configuration found. May 15 00:06:57.213300 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 00:06:57.298383 systemd[1]: Reloading finished in 255 ms. May 15 00:06:57.322634 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
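The recurring dns.go:153 "Nameserver limits exceeded" errors mean the host's resolv.conf lists more nameservers than the classic resolver limit of three, so the kubelet drops the extras and applies only "1.1.1.1 1.0.0.1 8.8.8.8". A sketch of that truncation; the fourth nameserver below is a made-up example to trigger the limit, not something visible in this log:

```python
MAX_NAMESERVERS = 3  # classic resolv.conf limit (MAXNS) that the kubelet enforces

def applied_nameservers(resolv_conf: str) -> list:
    """Return the nameserver list after truncating to the resolver limit."""
    servers = [line.split()[1] for line in resolv_conf.splitlines()
               if line.startswith("nameserver")]
    return servers[:MAX_NAMESERVERS]

conf = ("nameserver 1.1.1.1\nnameserver 1.0.0.1\n"
        "nameserver 8.8.8.8\nnameserver 8.8.4.4\n")  # 4th entry is hypothetical
print(" ".join(applied_nameservers(conf)))  # → the 'applied nameserver line' in the log
```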
May 15 00:06:57.339897 systemd[1]: kubelet.service: Deactivated successfully.
May 15 00:06:57.341066 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 00:06:57.341137 systemd[1]: kubelet.service: Consumed 1.166s CPU time, 120.6M memory peak.
May 15 00:06:57.343026 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 00:06:57.461960 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 00:06:57.471290 (kubelet)[2595]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 15 00:06:57.505873 kubelet[2595]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 00:06:57.505873 kubelet[2595]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 15 00:06:57.505873 kubelet[2595]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 00:06:57.506510 kubelet[2595]: I0515 00:06:57.506253 2595 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 15 00:06:57.514819 kubelet[2595]: I0515 00:06:57.514765 2595 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
May 15 00:06:57.514819 kubelet[2595]: I0515 00:06:57.514801 2595 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 15 00:06:57.515098 kubelet[2595]: I0515 00:06:57.515084 2595 server.go:929] "Client rotation is on, will bootstrap in background"
May 15 00:06:57.516492 kubelet[2595]: I0515 00:06:57.516458 2595 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 15 00:06:57.518472 kubelet[2595]: I0515 00:06:57.518438 2595 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 15 00:06:57.522195 kubelet[2595]: I0515 00:06:57.522175 2595 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 15 00:06:57.524833 kubelet[2595]: I0515 00:06:57.524797 2595 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 15 00:06:57.524982 kubelet[2595]: I0515 00:06:57.524957 2595 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 15 00:06:57.525120 kubelet[2595]: I0515 00:06:57.525091 2595 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 15 00:06:57.525295 kubelet[2595]: I0515 00:06:57.525115 2595 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 15 00:06:57.525369 kubelet[2595]: I0515 00:06:57.525301 2595 topology_manager.go:138] "Creating topology manager with none policy"
May 15 00:06:57.525369 kubelet[2595]: I0515 00:06:57.525313 2595 container_manager_linux.go:300] "Creating device plugin manager"
May 15 00:06:57.525369 kubelet[2595]: I0515 00:06:57.525343 2595 state_mem.go:36] "Initialized new in-memory state store"
May 15 00:06:57.525457 kubelet[2595]: I0515 00:06:57.525445 2595 kubelet.go:408] "Attempting to sync node with API server"
May 15 00:06:57.525480 kubelet[2595]: I0515 00:06:57.525461 2595 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 15 00:06:57.525501 kubelet[2595]: I0515 00:06:57.525481 2595 kubelet.go:314] "Adding apiserver pod source"
May 15 00:06:57.525501 kubelet[2595]: I0515 00:06:57.525491 2595 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 15 00:06:57.529766 kubelet[2595]: I0515 00:06:57.529728 2595 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1"
May 15 00:06:57.530265 kubelet[2595]: I0515 00:06:57.530243 2595 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 15 00:06:57.530662 kubelet[2595]: I0515 00:06:57.530647 2595 server.go:1269] "Started kubelet"
May 15 00:06:57.531210 kubelet[2595]: I0515 00:06:57.531166 2595 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 15 00:06:57.531435 kubelet[2595]: I0515 00:06:57.531417 2595 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 15 00:06:57.531496 kubelet[2595]: I0515 00:06:57.531478 2595 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 15 00:06:57.532086 kubelet[2595]: I0515 00:06:57.532066 2595 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 15 00:06:57.536592 kubelet[2595]: I0515 00:06:57.535879 2595 server.go:460] "Adding debug handlers to kubelet server"
May 15 00:06:57.537151 kubelet[2595]: I0515 00:06:57.537127 2595 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 15 00:06:57.538969 kubelet[2595]: I0515 00:06:57.538940 2595 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 15 00:06:57.541982 kubelet[2595]: E0515 00:06:57.539179 2595 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:06:57.541982 kubelet[2595]: I0515 00:06:57.540588 2595 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
May 15 00:06:57.541982 kubelet[2595]: I0515 00:06:57.540714 2595 reconciler.go:26] "Reconciler: start to sync state"
May 15 00:06:57.545122 kubelet[2595]: I0515 00:06:57.544944 2595 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 15 00:06:57.547845 kubelet[2595]: I0515 00:06:57.547638 2595 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 15 00:06:57.547845 kubelet[2595]: I0515 00:06:57.547676 2595 status_manager.go:217] "Starting to sync pod status with apiserver"
May 15 00:06:57.547845 kubelet[2595]: I0515 00:06:57.547693 2595 kubelet.go:2321] "Starting kubelet main sync loop"
May 15 00:06:57.547845 kubelet[2595]: E0515 00:06:57.547735 2595 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 15 00:06:57.552281 kubelet[2595]: I0515 00:06:57.551433 2595 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 15 00:06:57.553763 kubelet[2595]: E0515 00:06:57.553723 2595 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 15 00:06:57.554564 kubelet[2595]: I0515 00:06:57.554525 2595 factory.go:221] Registration of the containerd container factory successfully
May 15 00:06:57.554677 kubelet[2595]: I0515 00:06:57.554666 2595 factory.go:221] Registration of the systemd container factory successfully
May 15 00:06:57.583237 kubelet[2595]: I0515 00:06:57.583199 2595 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 15 00:06:57.583237 kubelet[2595]: I0515 00:06:57.583222 2595 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 15 00:06:57.583237 kubelet[2595]: I0515 00:06:57.583244 2595 state_mem.go:36] "Initialized new in-memory state store"
May 15 00:06:57.583413 kubelet[2595]: I0515 00:06:57.583405 2595 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 15 00:06:57.583434 kubelet[2595]: I0515 00:06:57.583416 2595 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 15 00:06:57.583434 kubelet[2595]: I0515 00:06:57.583433 2595 policy_none.go:49] "None policy: Start"
May 15 00:06:57.584067 kubelet[2595]: I0515 00:06:57.584046 2595 memory_manager.go:170] "Starting memorymanager" policy="None"
May 15 00:06:57.584067 kubelet[2595]: I0515 00:06:57.584073 2595 state_mem.go:35] "Initializing new in-memory state store"
May 15 00:06:57.584243 kubelet[2595]: I0515 00:06:57.584227 2595 state_mem.go:75] "Updated machine memory state"
May 15 00:06:57.588956 kubelet[2595]: I0515 00:06:57.588337 2595 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 15 00:06:57.588956 kubelet[2595]: I0515 00:06:57.588530 2595 eviction_manager.go:189] "Eviction manager: starting control loop"
May 15 00:06:57.588956 kubelet[2595]: I0515 00:06:57.588541 2595 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 15 00:06:57.588956 kubelet[2595]: I0515 00:06:57.588881 2595 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 15 00:06:57.656176 kubelet[2595]: E0515 00:06:57.656128 2595 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
May 15 00:06:57.692923 kubelet[2595]: I0515 00:06:57.692888 2595 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 15 00:06:57.704232 kubelet[2595]: I0515 00:06:57.704134 2595 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
May 15 00:06:57.704232 kubelet[2595]: I0515 00:06:57.704237 2595 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
May 15 00:06:57.842819 kubelet[2595]: I0515 00:06:57.842674 2595 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 15 00:06:57.842819 kubelet[2595]: I0515 00:06:57.842738 2595 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 15 00:06:57.842819 kubelet[2595]: I0515 00:06:57.842766 2595 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost"
May 15 00:06:57.842819 kubelet[2595]: I0515 00:06:57.842795 2595 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4475d19ca36dbc8d8933dafea2ca3886-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4475d19ca36dbc8d8933dafea2ca3886\") " pod="kube-system/kube-apiserver-localhost"
May 15 00:06:57.842819 kubelet[2595]: I0515 00:06:57.842816 2595 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4475d19ca36dbc8d8933dafea2ca3886-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4475d19ca36dbc8d8933dafea2ca3886\") " pod="kube-system/kube-apiserver-localhost"
May 15 00:06:57.843040 kubelet[2595]: I0515 00:06:57.842833 2595 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 15 00:06:57.843040 kubelet[2595]: I0515 00:06:57.842849 2595 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4475d19ca36dbc8d8933dafea2ca3886-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4475d19ca36dbc8d8933dafea2ca3886\") " pod="kube-system/kube-apiserver-localhost"
May 15 00:06:57.843040 kubelet[2595]: I0515 00:06:57.842874 2595 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 15 00:06:57.843040 kubelet[2595]: I0515 00:06:57.842895 2595 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 15 00:06:57.957105 kubelet[2595]: E0515 00:06:57.957043 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:06:57.957105 kubelet[2595]: E0515 00:06:57.957047 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:06:57.957257 kubelet[2595]: E0515 00:06:57.957118 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:06:58.526434 kubelet[2595]: I0515 00:06:58.526351 2595 apiserver.go:52] "Watching apiserver"
May 15 00:06:58.541106 kubelet[2595]: I0515 00:06:58.541067 2595 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
May 15 00:06:58.566778 kubelet[2595]: E0515 00:06:58.566625 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:06:58.566778 kubelet[2595]: E0515 00:06:58.566720 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:06:58.573221 kubelet[2595]: E0515 00:06:58.573107 2595 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
May 15 00:06:58.573895 kubelet[2595]: E0515 00:06:58.573293 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:06:58.587331 kubelet[2595]: I0515 00:06:58.587242 2595 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.587224949 podStartE2EDuration="2.587224949s" podCreationTimestamp="2025-05-15 00:06:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:06:58.586864107 +0000 UTC m=+1.112539166" watchObservedRunningTime="2025-05-15 00:06:58.587224949 +0000 UTC m=+1.112900008"
May 15 00:06:58.607904 kubelet[2595]: I0515 00:06:58.607847 2595 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.607831584 podStartE2EDuration="1.607831584s" podCreationTimestamp="2025-05-15 00:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:06:58.596403234 +0000 UTC m=+1.122078293" watchObservedRunningTime="2025-05-15 00:06:58.607831584 +0000 UTC m=+1.133506643"
May 15 00:06:58.608116 kubelet[2595]: I0515 00:06:58.607967 2595 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.607962962 podStartE2EDuration="1.607962962s" podCreationTimestamp="2025-05-15 00:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:06:58.607351354 +0000 UTC m=+1.133026413" watchObservedRunningTime="2025-05-15 00:06:58.607962962 +0000 UTC m=+1.133638021"
May 15 00:06:59.568570 kubelet[2595]: E0515 00:06:59.568532 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:07:00.669783 kubelet[2595]: E0515 00:07:00.669713 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:07:02.359435 sudo[1699]: pam_unix(sudo:session): session closed for user root
May 15 00:07:02.372735 sshd[1698]: Connection closed by 10.0.0.1 port 44158
May 15 00:07:02.374006 sshd-session[1693]: pam_unix(sshd:session): session closed for user core
May 15 00:07:02.379785 systemd[1]: sshd@8-10.0.0.138:22-10.0.0.1:44158.service: Deactivated successfully.
May 15 00:07:02.383213 systemd[1]: session-9.scope: Deactivated successfully.
May 15 00:07:02.383495 systemd[1]: session-9.scope: Consumed 6.896s CPU time, 229.8M memory peak.
May 15 00:07:02.384770 systemd-logind[1452]: Session 9 logged out. Waiting for processes to exit.
May 15 00:07:02.385688 systemd-logind[1452]: Removed session 9.
May 15 00:07:03.565501 kubelet[2595]: I0515 00:07:03.565460 2595 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 15 00:07:03.566273 containerd[1475]: time="2025-05-15T00:07:03.566152669Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 15 00:07:03.566898 kubelet[2595]: I0515 00:07:03.566459 2595 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 15 00:07:04.141859 systemd[1]: Created slice kubepods-besteffort-pod8704b4cb_0653_4d54_a987_9273024250ba.slice - libcontainer container kubepods-besteffort-pod8704b4cb_0653_4d54_a987_9273024250ba.slice.
May 15 00:07:04.284257 kubelet[2595]: I0515 00:07:04.284174 2595 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8704b4cb-0653-4d54-a987-9273024250ba-xtables-lock\") pod \"kube-proxy-nnrg8\" (UID: \"8704b4cb-0653-4d54-a987-9273024250ba\") " pod="kube-system/kube-proxy-nnrg8"
May 15 00:07:04.284257 kubelet[2595]: I0515 00:07:04.284225 2595 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8704b4cb-0653-4d54-a987-9273024250ba-lib-modules\") pod \"kube-proxy-nnrg8\" (UID: \"8704b4cb-0653-4d54-a987-9273024250ba\") " pod="kube-system/kube-proxy-nnrg8"
May 15 00:07:04.284257 kubelet[2595]: I0515 00:07:04.284248 2595 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8704b4cb-0653-4d54-a987-9273024250ba-kube-proxy\") pod \"kube-proxy-nnrg8\" (UID: \"8704b4cb-0653-4d54-a987-9273024250ba\") " pod="kube-system/kube-proxy-nnrg8"
May 15 00:07:04.284539 kubelet[2595]: I0515 00:07:04.284274 2595 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj84m\" (UniqueName: \"kubernetes.io/projected/8704b4cb-0653-4d54-a987-9273024250ba-kube-api-access-dj84m\") pod \"kube-proxy-nnrg8\" (UID: \"8704b4cb-0653-4d54-a987-9273024250ba\") " pod="kube-system/kube-proxy-nnrg8"
May 15 00:07:04.452953 kubelet[2595]: E0515 00:07:04.452751 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:07:04.453757 containerd[1475]: time="2025-05-15T00:07:04.453363898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nnrg8,Uid:8704b4cb-0653-4d54-a987-9273024250ba,Namespace:kube-system,Attempt:0,}"
May 15 00:07:04.476103 containerd[1475]: time="2025-05-15T00:07:04.476058023Z" level=info msg="connecting to shim 7d55d10adf07e7ab7558fee57f8f7ab7f15077a7c6576ca67c243462f41dccdf" address="unix:///run/containerd/s/5bf346c57adb6c6bff35b033f0ddd016138e3d74458c9f18d9d88b7d5383183d" namespace=k8s.io protocol=ttrpc version=3
May 15 00:07:04.504218 systemd[1]: Started cri-containerd-7d55d10adf07e7ab7558fee57f8f7ab7f15077a7c6576ca67c243462f41dccdf.scope - libcontainer container 7d55d10adf07e7ab7558fee57f8f7ab7f15077a7c6576ca67c243462f41dccdf.
May 15 00:07:04.537103 containerd[1475]: time="2025-05-15T00:07:04.537048630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nnrg8,Uid:8704b4cb-0653-4d54-a987-9273024250ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d55d10adf07e7ab7558fee57f8f7ab7f15077a7c6576ca67c243462f41dccdf\""
May 15 00:07:04.537878 kubelet[2595]: E0515 00:07:04.537848 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:07:04.540728 containerd[1475]: time="2025-05-15T00:07:04.540686607Z" level=info msg="CreateContainer within sandbox \"7d55d10adf07e7ab7558fee57f8f7ab7f15077a7c6576ca67c243462f41dccdf\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 15 00:07:04.551404 containerd[1475]: time="2025-05-15T00:07:04.550661067Z" level=info msg="Container 9c54b5d96a3f2859613d4ac3960b334e869e62f28ed139320431f59611921357: CDI devices from CRI Config.CDIDevices: []"
May 15 00:07:04.562391 containerd[1475]: time="2025-05-15T00:07:04.562327723Z" level=info msg="CreateContainer within sandbox \"7d55d10adf07e7ab7558fee57f8f7ab7f15077a7c6576ca67c243462f41dccdf\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9c54b5d96a3f2859613d4ac3960b334e869e62f28ed139320431f59611921357\""
May 15 00:07:04.564450 containerd[1475]: time="2025-05-15T00:07:04.564416350Z" level=info msg="StartContainer for \"9c54b5d96a3f2859613d4ac3960b334e869e62f28ed139320431f59611921357\""
May 15 00:07:04.566199 containerd[1475]: time="2025-05-15T00:07:04.566011435Z" level=info msg="connecting to shim 9c54b5d96a3f2859613d4ac3960b334e869e62f28ed139320431f59611921357" address="unix:///run/containerd/s/5bf346c57adb6c6bff35b033f0ddd016138e3d74458c9f18d9d88b7d5383183d" protocol=ttrpc version=3
May 15 00:07:04.602259 systemd[1]: Started cri-containerd-9c54b5d96a3f2859613d4ac3960b334e869e62f28ed139320431f59611921357.scope - libcontainer container 9c54b5d96a3f2859613d4ac3960b334e869e62f28ed139320431f59611921357.
May 15 00:07:04.647308 containerd[1475]: time="2025-05-15T00:07:04.647189159Z" level=info msg="StartContainer for \"9c54b5d96a3f2859613d4ac3960b334e869e62f28ed139320431f59611921357\" returns successfully"
May 15 00:07:04.716806 systemd[1]: Created slice kubepods-besteffort-pod7e81a00e_ae2a_48e1_8c21_86d813df59af.slice - libcontainer container kubepods-besteffort-pod7e81a00e_ae2a_48e1_8c21_86d813df59af.slice.
May 15 00:07:04.890162 kubelet[2595]: I0515 00:07:04.890106 2595 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7e81a00e-ae2a-48e1-8c21-86d813df59af-var-lib-calico\") pod \"tigera-operator-6f6897fdc5-wlkxg\" (UID: \"7e81a00e-ae2a-48e1-8c21-86d813df59af\") " pod="tigera-operator/tigera-operator-6f6897fdc5-wlkxg"
May 15 00:07:04.890162 kubelet[2595]: I0515 00:07:04.890159 2595 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cbtf\" (UniqueName: \"kubernetes.io/projected/7e81a00e-ae2a-48e1-8c21-86d813df59af-kube-api-access-7cbtf\") pod \"tigera-operator-6f6897fdc5-wlkxg\" (UID: \"7e81a00e-ae2a-48e1-8c21-86d813df59af\") " pod="tigera-operator/tigera-operator-6f6897fdc5-wlkxg"
May 15 00:07:05.022055 containerd[1475]: time="2025-05-15T00:07:05.021963826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-wlkxg,Uid:7e81a00e-ae2a-48e1-8c21-86d813df59af,Namespace:tigera-operator,Attempt:0,}"
May 15 00:07:05.037536 containerd[1475]: time="2025-05-15T00:07:05.037495807Z" level=info msg="connecting to shim 6c6cba01bb298e4c9d89b52ca4adfc6144aa2319af88aae9880b0f00979832a1" address="unix:///run/containerd/s/c5e3cceab63e60588904bb6735c2e97c204e06246eda7b72439efb893dd00349" namespace=k8s.io protocol=ttrpc version=3
May 15 00:07:05.060151 systemd[1]: Started cri-containerd-6c6cba01bb298e4c9d89b52ca4adfc6144aa2319af88aae9880b0f00979832a1.scope - libcontainer container 6c6cba01bb298e4c9d89b52ca4adfc6144aa2319af88aae9880b0f00979832a1.
May 15 00:07:05.098680 containerd[1475]: time="2025-05-15T00:07:05.098636975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-wlkxg,Uid:7e81a00e-ae2a-48e1-8c21-86d813df59af,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6c6cba01bb298e4c9d89b52ca4adfc6144aa2319af88aae9880b0f00979832a1\""
May 15 00:07:05.104556 containerd[1475]: time="2025-05-15T00:07:05.104524247Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\""
May 15 00:07:05.592794 kubelet[2595]: E0515 00:07:05.592760 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:07:06.104280 update_engine[1454]: I20250515 00:07:06.104025 1454 update_attempter.cc:509] Updating boot flags...
May 15 00:07:06.130021 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 47 scanned by (udev-worker) (2942)
May 15 00:07:06.172608 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 47 scanned by (udev-worker) (2945)
May 15 00:07:06.208557 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 47 scanned by (udev-worker) (2945)
May 15 00:07:06.469411 kubelet[2595]: E0515 00:07:06.469287 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:07:06.483465 kubelet[2595]: I0515 00:07:06.483394 2595 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nnrg8" podStartSLOduration=2.483379835 podStartE2EDuration="2.483379835s" podCreationTimestamp="2025-05-15 00:07:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:07:05.602096773 +0000 UTC m=+8.127771832" watchObservedRunningTime="2025-05-15 00:07:06.483379835 +0000 UTC m=+9.009054894"
May 15 00:07:06.594616 kubelet[2595]: E0515 00:07:06.594538 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:07:06.594752 kubelet[2595]: E0515 00:07:06.594713 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:07:07.235110 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4145344417.mount: Deactivated successfully.
May 15 00:07:07.547479 kubelet[2595]: E0515 00:07:07.547129 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:07:07.596048 kubelet[2595]: E0515 00:07:07.595932 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:07:10.683053 kubelet[2595]: E0515 00:07:10.683020 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:07:11.349765 containerd[1475]: time="2025-05-15T00:07:11.349711031Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:07:11.350190 containerd[1475]: time="2025-05-15T00:07:11.350095623Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084"
May 15 00:07:11.350942 containerd[1475]: time="2025-05-15T00:07:11.350906193Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:07:11.352895 containerd[1475]: time="2025-05-15T00:07:11.352860342Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:07:11.353762 containerd[1475]: time="2025-05-15T00:07:11.353728893Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 6.249167462s"
May 15 00:07:11.353803 containerd[1475]: time="2025-05-15T00:07:11.353776237Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\""
May 15 00:07:11.358264 containerd[1475]: time="2025-05-15T00:07:11.358227475Z" level=info msg="CreateContainer within sandbox \"6c6cba01bb298e4c9d89b52ca4adfc6144aa2319af88aae9880b0f00979832a1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
May 15 00:07:11.364174 containerd[1475]: time="2025-05-15T00:07:11.364139427Z" level=info msg="Container 5740df17c1a728fe51f3d2b224dea7f7d37b190a38fdcc728d32771a1555d0e9: CDI devices from CRI Config.CDIDevices: []"
May 15 00:07:11.369236 containerd[1475]: time="2025-05-15T00:07:11.369193784Z" level=info msg="CreateContainer within sandbox \"6c6cba01bb298e4c9d89b52ca4adfc6144aa2319af88aae9880b0f00979832a1\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5740df17c1a728fe51f3d2b224dea7f7d37b190a38fdcc728d32771a1555d0e9\""
May 15 00:07:11.369620 containerd[1475]: time="2025-05-15T00:07:11.369596010Z" level=info msg="StartContainer for \"5740df17c1a728fe51f3d2b224dea7f7d37b190a38fdcc728d32771a1555d0e9\""
May 15 00:07:11.370533 containerd[1475]: time="2025-05-15T00:07:11.370353357Z" level=info msg="connecting to shim 5740df17c1a728fe51f3d2b224dea7f7d37b190a38fdcc728d32771a1555d0e9" address="unix:///run/containerd/s/c5e3cceab63e60588904bb6735c2e97c204e06246eda7b72439efb893dd00349" protocol=ttrpc version=3
May 15 00:07:11.408152 systemd[1]: Started cri-containerd-5740df17c1a728fe51f3d2b224dea7f7d37b190a38fdcc728d32771a1555d0e9.scope - libcontainer container 5740df17c1a728fe51f3d2b224dea7f7d37b190a38fdcc728d32771a1555d0e9.
May 15 00:07:11.459040 containerd[1475]: time="2025-05-15T00:07:11.458561825Z" level=info msg="StartContainer for \"5740df17c1a728fe51f3d2b224dea7f7d37b190a38fdcc728d32771a1555d0e9\" returns successfully" May 15 00:07:11.629491 kubelet[2595]: I0515 00:07:11.629321 2595 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6f6897fdc5-wlkxg" podStartSLOduration=1.3729584639999999 podStartE2EDuration="7.629301972s" podCreationTimestamp="2025-05-15 00:07:04 +0000 UTC" firstStartedPulling="2025-05-15 00:07:05.099697175 +0000 UTC m=+7.625372234" lastFinishedPulling="2025-05-15 00:07:11.356040683 +0000 UTC m=+13.881715742" observedRunningTime="2025-05-15 00:07:11.629101278 +0000 UTC m=+14.154776377" watchObservedRunningTime="2025-05-15 00:07:11.629301972 +0000 UTC m=+14.154977030" May 15 00:07:15.729026 systemd[1]: Created slice kubepods-besteffort-pod6a7decb8_4ba7_46b4_acb9_c43042ac9598.slice - libcontainer container kubepods-besteffort-pod6a7decb8_4ba7_46b4_acb9_c43042ac9598.slice. May 15 00:07:15.774879 systemd[1]: Created slice kubepods-besteffort-pod9045b6ef_45a9_41b7_82b5_d4ecaac71d14.slice - libcontainer container kubepods-besteffort-pod9045b6ef_45a9_41b7_82b5_d4ecaac71d14.slice. 
May 15 00:07:15.862848 kubelet[2595]: I0515 00:07:15.862801 2595 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9045b6ef-45a9-41b7-82b5-d4ecaac71d14-cni-bin-dir\") pod \"calico-node-wrhhc\" (UID: \"9045b6ef-45a9-41b7-82b5-d4ecaac71d14\") " pod="calico-system/calico-node-wrhhc" May 15 00:07:15.862848 kubelet[2595]: I0515 00:07:15.862852 2595 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9045b6ef-45a9-41b7-82b5-d4ecaac71d14-cni-net-dir\") pod \"calico-node-wrhhc\" (UID: \"9045b6ef-45a9-41b7-82b5-d4ecaac71d14\") " pod="calico-system/calico-node-wrhhc" May 15 00:07:15.863263 kubelet[2595]: I0515 00:07:15.862908 2595 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49vnl\" (UniqueName: \"kubernetes.io/projected/9045b6ef-45a9-41b7-82b5-d4ecaac71d14-kube-api-access-49vnl\") pod \"calico-node-wrhhc\" (UID: \"9045b6ef-45a9-41b7-82b5-d4ecaac71d14\") " pod="calico-system/calico-node-wrhhc" May 15 00:07:15.863263 kubelet[2595]: I0515 00:07:15.863048 2595 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9045b6ef-45a9-41b7-82b5-d4ecaac71d14-var-lib-calico\") pod \"calico-node-wrhhc\" (UID: \"9045b6ef-45a9-41b7-82b5-d4ecaac71d14\") " pod="calico-system/calico-node-wrhhc" May 15 00:07:15.863263 kubelet[2595]: I0515 00:07:15.863080 2595 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86s57\" (UniqueName: \"kubernetes.io/projected/6a7decb8-4ba7-46b4-acb9-c43042ac9598-kube-api-access-86s57\") pod \"calico-typha-5b9f44dbb6-dmlf8\" (UID: \"6a7decb8-4ba7-46b4-acb9-c43042ac9598\") " pod="calico-system/calico-typha-5b9f44dbb6-dmlf8" May 15 00:07:15.863263 
kubelet[2595]: I0515 00:07:15.863099 2595 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a7decb8-4ba7-46b4-acb9-c43042ac9598-tigera-ca-bundle\") pod \"calico-typha-5b9f44dbb6-dmlf8\" (UID: \"6a7decb8-4ba7-46b4-acb9-c43042ac9598\") " pod="calico-system/calico-typha-5b9f44dbb6-dmlf8" May 15 00:07:15.863263 kubelet[2595]: I0515 00:07:15.863149 2595 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9045b6ef-45a9-41b7-82b5-d4ecaac71d14-flexvol-driver-host\") pod \"calico-node-wrhhc\" (UID: \"9045b6ef-45a9-41b7-82b5-d4ecaac71d14\") " pod="calico-system/calico-node-wrhhc" May 15 00:07:15.863375 kubelet[2595]: I0515 00:07:15.863197 2595 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9045b6ef-45a9-41b7-82b5-d4ecaac71d14-node-certs\") pod \"calico-node-wrhhc\" (UID: \"9045b6ef-45a9-41b7-82b5-d4ecaac71d14\") " pod="calico-system/calico-node-wrhhc" May 15 00:07:15.863375 kubelet[2595]: I0515 00:07:15.863216 2595 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9045b6ef-45a9-41b7-82b5-d4ecaac71d14-var-run-calico\") pod \"calico-node-wrhhc\" (UID: \"9045b6ef-45a9-41b7-82b5-d4ecaac71d14\") " pod="calico-system/calico-node-wrhhc" May 15 00:07:15.863375 kubelet[2595]: I0515 00:07:15.863239 2595 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9045b6ef-45a9-41b7-82b5-d4ecaac71d14-cni-log-dir\") pod \"calico-node-wrhhc\" (UID: \"9045b6ef-45a9-41b7-82b5-d4ecaac71d14\") " pod="calico-system/calico-node-wrhhc" May 15 00:07:15.863375 kubelet[2595]: I0515 00:07:15.863257 2595 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6a7decb8-4ba7-46b4-acb9-c43042ac9598-typha-certs\") pod \"calico-typha-5b9f44dbb6-dmlf8\" (UID: \"6a7decb8-4ba7-46b4-acb9-c43042ac9598\") " pod="calico-system/calico-typha-5b9f44dbb6-dmlf8" May 15 00:07:15.863375 kubelet[2595]: I0515 00:07:15.863285 2595 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9045b6ef-45a9-41b7-82b5-d4ecaac71d14-lib-modules\") pod \"calico-node-wrhhc\" (UID: \"9045b6ef-45a9-41b7-82b5-d4ecaac71d14\") " pod="calico-system/calico-node-wrhhc" May 15 00:07:15.863476 kubelet[2595]: I0515 00:07:15.863302 2595 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9045b6ef-45a9-41b7-82b5-d4ecaac71d14-xtables-lock\") pod \"calico-node-wrhhc\" (UID: \"9045b6ef-45a9-41b7-82b5-d4ecaac71d14\") " pod="calico-system/calico-node-wrhhc" May 15 00:07:15.863476 kubelet[2595]: I0515 00:07:15.863317 2595 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9045b6ef-45a9-41b7-82b5-d4ecaac71d14-policysync\") pod \"calico-node-wrhhc\" (UID: \"9045b6ef-45a9-41b7-82b5-d4ecaac71d14\") " pod="calico-system/calico-node-wrhhc" May 15 00:07:15.863476 kubelet[2595]: I0515 00:07:15.863335 2595 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9045b6ef-45a9-41b7-82b5-d4ecaac71d14-tigera-ca-bundle\") pod \"calico-node-wrhhc\" (UID: \"9045b6ef-45a9-41b7-82b5-d4ecaac71d14\") " pod="calico-system/calico-node-wrhhc" May 15 00:07:15.880143 kubelet[2595]: E0515 00:07:15.879918 2595 pod_workers.go:1301] "Error syncing pod, skipping" err="network 
is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8zzgl" podUID="b968e49b-b6e1-4eac-b633-cad76111fc0d" May 15 00:07:15.992105 kubelet[2595]: E0515 00:07:15.991899 2595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:07:15.992105 kubelet[2595]: W0515 00:07:15.992006 2595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:07:15.992344 kubelet[2595]: E0515 00:07:15.992292 2595 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:07:15.992424 kubelet[2595]: E0515 00:07:15.992406 2595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:07:15.992451 kubelet[2595]: W0515 00:07:15.992425 2595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:07:15.992451 kubelet[2595]: E0515 00:07:15.992440 2595 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:07:16.041154 kubelet[2595]: E0515 00:07:16.041122 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:07:16.041831 containerd[1475]: time="2025-05-15T00:07:16.041772497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b9f44dbb6-dmlf8,Uid:6a7decb8-4ba7-46b4-acb9-c43042ac9598,Namespace:calico-system,Attempt:0,}" May 15 00:07:16.065187 kubelet[2595]: E0515 00:07:16.065133 2595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:07:16.065187 kubelet[2595]: W0515 00:07:16.065162 2595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:07:16.065187 kubelet[2595]: E0515 00:07:16.065183 2595 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:07:16.065391 kubelet[2595]: I0515 00:07:16.065213 2595 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b968e49b-b6e1-4eac-b633-cad76111fc0d-kubelet-dir\") pod \"csi-node-driver-8zzgl\" (UID: \"b968e49b-b6e1-4eac-b633-cad76111fc0d\") " pod="calico-system/csi-node-driver-8zzgl" May 15 00:07:16.065417 kubelet[2595]: E0515 00:07:16.065406 2595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:07:16.065441 kubelet[2595]: W0515 00:07:16.065416 2595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:07:16.065441 kubelet[2595]: E0515 00:07:16.065427 2595 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:07:16.065481 kubelet[2595]: I0515 00:07:16.065441 2595 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b968e49b-b6e1-4eac-b633-cad76111fc0d-registration-dir\") pod \"csi-node-driver-8zzgl\" (UID: \"b968e49b-b6e1-4eac-b633-cad76111fc0d\") " pod="calico-system/csi-node-driver-8zzgl" May 15 00:07:16.065642 kubelet[2595]: E0515 00:07:16.065614 2595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:07:16.065642 kubelet[2595]: W0515 00:07:16.065628 2595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:07:16.065642 kubelet[2595]: E0515 00:07:16.065638 2595 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:07:16.065768 kubelet[2595]: I0515 00:07:16.065653 2595 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b968e49b-b6e1-4eac-b633-cad76111fc0d-varrun\") pod \"csi-node-driver-8zzgl\" (UID: \"b968e49b-b6e1-4eac-b633-cad76111fc0d\") " pod="calico-system/csi-node-driver-8zzgl" May 15 00:07:16.065905 kubelet[2595]: E0515 00:07:16.065816 2595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:07:16.065905 kubelet[2595]: W0515 00:07:16.065829 2595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:07:16.065905 kubelet[2595]: E0515 00:07:16.065839 2595 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:07:16.065905 kubelet[2595]: I0515 00:07:16.065854 2595 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b968e49b-b6e1-4eac-b633-cad76111fc0d-socket-dir\") pod \"csi-node-driver-8zzgl\" (UID: \"b968e49b-b6e1-4eac-b633-cad76111fc0d\") " pod="calico-system/csi-node-driver-8zzgl" May 15 00:07:16.066122 kubelet[2595]: E0515 00:07:16.066035 2595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:07:16.066122 kubelet[2595]: W0515 00:07:16.066045 2595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:07:16.066122 kubelet[2595]: E0515 00:07:16.066056 2595 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:07:16.066122 kubelet[2595]: I0515 00:07:16.066070 2595 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jdng\" (UniqueName: \"kubernetes.io/projected/b968e49b-b6e1-4eac-b633-cad76111fc0d-kube-api-access-9jdng\") pod \"csi-node-driver-8zzgl\" (UID: \"b968e49b-b6e1-4eac-b633-cad76111fc0d\") " pod="calico-system/csi-node-driver-8zzgl" May 15 00:07:16.066295 kubelet[2595]: E0515 00:07:16.066267 2595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:07:16.066295 kubelet[2595]: W0515 00:07:16.066282 2595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:07:16.066295 kubelet[2595]: E0515 00:07:16.066295 2595 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:07:16.066462 kubelet[2595]: E0515 00:07:16.066438 2595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:07:16.066462 kubelet[2595]: W0515 00:07:16.066449 2595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:07:16.066542 kubelet[2595]: E0515 00:07:16.066510 2595 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:07:16.066603 kubelet[2595]: E0515 00:07:16.066591 2595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:07:16.066603 kubelet[2595]: W0515 00:07:16.066602 2595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:07:16.066685 kubelet[2595]: E0515 00:07:16.066671 2595 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:07:16.066742 kubelet[2595]: E0515 00:07:16.066732 2595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:07:16.066769 kubelet[2595]: W0515 00:07:16.066742 2595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:07:16.066823 kubelet[2595]: E0515 00:07:16.066805 2595 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:07:16.066912 kubelet[2595]: E0515 00:07:16.066875 2595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:07:16.066912 kubelet[2595]: W0515 00:07:16.066886 2595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:07:16.066912 kubelet[2595]: E0515 00:07:16.066895 2595 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:07:16.067064 kubelet[2595]: E0515 00:07:16.067040 2595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:07:16.067064 kubelet[2595]: W0515 00:07:16.067048 2595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:07:16.067064 kubelet[2595]: E0515 00:07:16.067056 2595 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:07:16.067217 kubelet[2595]: E0515 00:07:16.067201 2595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:07:16.067217 kubelet[2595]: W0515 00:07:16.067213 2595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:07:16.067273 kubelet[2595]: E0515 00:07:16.067221 2595 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:07:16.067452 kubelet[2595]: E0515 00:07:16.067436 2595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:07:16.067452 kubelet[2595]: W0515 00:07:16.067449 2595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:07:16.067516 kubelet[2595]: E0515 00:07:16.067458 2595 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:07:16.067656 kubelet[2595]: E0515 00:07:16.067629 2595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:07:16.067656 kubelet[2595]: W0515 00:07:16.067642 2595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:07:16.067656 kubelet[2595]: E0515 00:07:16.067651 2595 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:07:16.067837 kubelet[2595]: E0515 00:07:16.067825 2595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:07:16.067837 kubelet[2595]: W0515 00:07:16.067836 2595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:07:16.067879 kubelet[2595]: E0515 00:07:16.067844 2595 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:07:16.078445 kubelet[2595]: E0515 00:07:16.078330 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:07:16.080236 containerd[1475]: time="2025-05-15T00:07:16.079558346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wrhhc,Uid:9045b6ef-45a9-41b7-82b5-d4ecaac71d14,Namespace:calico-system,Attempt:0,}" May 15 00:07:16.080724 containerd[1475]: time="2025-05-15T00:07:16.080695672Z" level=info msg="connecting to shim 7797089883158f621897f18f71d55b80a117ff99b0046e0a47db4e618ce2febd" address="unix:///run/containerd/s/24c7f469213967a587c876358109a8632b8321481b517fd6fdb311c2e92c22a6" namespace=k8s.io protocol=ttrpc version=3 May 15 00:07:16.102106 containerd[1475]: time="2025-05-15T00:07:16.101930791Z" level=info msg="connecting to shim 782b0d5276213e3dfb84ee381b118a6ac5458f40e2b2f1e2e3977b8c4c1a6b24" address="unix:///run/containerd/s/0cce4d12f4b3ad6f40562b9caeb078f5fee816656a2c893e960189a986ff08ce" namespace=k8s.io protocol=ttrpc version=3 May 15 00:07:16.109159 systemd[1]: Started cri-containerd-7797089883158f621897f18f71d55b80a117ff99b0046e0a47db4e618ce2febd.scope - libcontainer container 7797089883158f621897f18f71d55b80a117ff99b0046e0a47db4e618ce2febd. May 15 00:07:16.152220 systemd[1]: Started cri-containerd-782b0d5276213e3dfb84ee381b118a6ac5458f40e2b2f1e2e3977b8c4c1a6b24.scope - libcontainer container 782b0d5276213e3dfb84ee381b118a6ac5458f40e2b2f1e2e3977b8c4c1a6b24. 
May 15 00:07:16.167191 kubelet[2595]: E0515 00:07:16.166710 2595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:07:16.167191 kubelet[2595]: W0515 00:07:16.166733 2595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:07:16.167191 kubelet[2595]: E0515 00:07:16.166962 2595 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:07:16.167346 kubelet[2595]: E0515 00:07:16.167283 2595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:07:16.167346 kubelet[2595]: W0515 00:07:16.167295 2595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:07:16.167346 kubelet[2595]: E0515 00:07:16.167308 2595 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:07:16.167518 kubelet[2595]: E0515 00:07:16.167500 2595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:07:16.167518 kubelet[2595]: W0515 00:07:16.167514 2595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:07:16.167600 kubelet[2595]: E0515 00:07:16.167525 2595 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:07:16.167729 kubelet[2595]: E0515 00:07:16.167703 2595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:07:16.167729 kubelet[2595]: W0515 00:07:16.167718 2595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:07:16.167819 kubelet[2595]: E0515 00:07:16.167735 2595 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:07:16.167957 kubelet[2595]: E0515 00:07:16.167927 2595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:07:16.167957 kubelet[2595]: W0515 00:07:16.167942 2595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:07:16.167957 kubelet[2595]: E0515 00:07:16.167953 2595 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:07:16.168257 kubelet[2595]: E0515 00:07:16.168237 2595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:07:16.168303 kubelet[2595]: W0515 00:07:16.168257 2595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:07:16.168334 kubelet[2595]: E0515 00:07:16.168319 2595 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:07:16.168504 kubelet[2595]: E0515 00:07:16.168485 2595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:07:16.168544 kubelet[2595]: W0515 00:07:16.168505 2595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:07:16.168544 kubelet[2595]: E0515 00:07:16.168522 2595 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:07:16.169149 kubelet[2595]: E0515 00:07:16.169132 2595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:07:16.169187 kubelet[2595]: W0515 00:07:16.169150 2595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:07:16.169187 kubelet[2595]: E0515 00:07:16.169166 2595 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:07:16.169406 kubelet[2595]: E0515 00:07:16.169381 2595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:07:16.169478 kubelet[2595]: W0515 00:07:16.169406 2595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:07:16.169478 kubelet[2595]: E0515 00:07:16.169427 2595 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:07:16.170074 kubelet[2595]: E0515 00:07:16.169938 2595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:07:16.170074 kubelet[2595]: W0515 00:07:16.169952 2595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:07:16.170074 kubelet[2595]: E0515 00:07:16.170001 2595 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:07:16.170885 kubelet[2595]: E0515 00:07:16.170232 2595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:07:16.170885 kubelet[2595]: W0515 00:07:16.170242 2595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:07:16.170885 kubelet[2595]: E0515 00:07:16.170293 2595 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:07:16.170885 kubelet[2595]: E0515 00:07:16.170444 2595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:07:16.170885 kubelet[2595]: W0515 00:07:16.170451 2595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:07:16.170885 kubelet[2595]: E0515 00:07:16.170491 2595 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:07:16.170885 kubelet[2595]: E0515 00:07:16.170619 2595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:07:16.170885 kubelet[2595]: W0515 00:07:16.170626 2595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:07:16.170885 kubelet[2595]: E0515 00:07:16.170693 2595 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:07:16.184519 containerd[1475]: time="2025-05-15T00:07:16.184462851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b9f44dbb6-dmlf8,Uid:6a7decb8-4ba7-46b4-acb9-c43042ac9598,Namespace:calico-system,Attempt:0,} returns sandbox id \"7797089883158f621897f18f71d55b80a117ff99b0046e0a47db4e618ce2febd\"" May 15 00:07:16.188429 kubelet[2595]: E0515 00:07:16.188385 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:07:16.192484 containerd[1475]: time="2025-05-15T00:07:16.192434329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wrhhc,Uid:9045b6ef-45a9-41b7-82b5-d4ecaac71d14,Namespace:calico-system,Attempt:0,} returns sandbox id \"782b0d5276213e3dfb84ee381b118a6ac5458f40e2b2f1e2e3977b8c4c1a6b24\"" May 15 00:07:16.193376 kubelet[2595]: E0515 00:07:16.193320 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:07:16.195141 containerd[1475]: time="2025-05-15T00:07:16.195106365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 15 00:07:17.549043 kubelet[2595]: E0515 00:07:17.548956 2595 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8zzgl" podUID="b968e49b-b6e1-4eac-b633-cad76111fc0d" May 15 00:07:19.222649 containerd[1475]: time="2025-05-15T00:07:19.222572092Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:07:19.224772 containerd[1475]: time="2025-05-15T00:07:19.224718145Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571" May 15 00:07:19.226412 containerd[1475]: time="2025-05-15T00:07:19.226379975Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:07:19.229200 containerd[1475]: time="2025-05-15T00:07:19.229144466Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:07:19.230054 containerd[1475]: time="2025-05-15T00:07:19.229949026Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 3.03480731s" May 15 00:07:19.230105 containerd[1475]: time="2025-05-15T00:07:19.230056645Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" May 15 00:07:19.232837 containerd[1475]: time="2025-05-15T00:07:19.232613097Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 15 00:07:19.256474 containerd[1475]: time="2025-05-15T00:07:19.256422767Z" level=info msg="CreateContainer within sandbox \"7797089883158f621897f18f71d55b80a117ff99b0046e0a47db4e618ce2febd\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 15 00:07:19.277105 containerd[1475]: time="2025-05-15T00:07:19.275442548Z" level=info msg="Container 06da0a8fe80c916c1a69d56caf4ef4d847df7ba5a8be2a895ee93133dcff286a: CDI devices from CRI Config.CDIDevices: []" May 15 00:07:19.284113 containerd[1475]: time="2025-05-15T00:07:19.284049798Z" level=info msg="CreateContainer within sandbox \"7797089883158f621897f18f71d55b80a117ff99b0046e0a47db4e618ce2febd\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"06da0a8fe80c916c1a69d56caf4ef4d847df7ba5a8be2a895ee93133dcff286a\"" May 15 00:07:19.286129 containerd[1475]: time="2025-05-15T00:07:19.286089513Z" level=info msg="StartContainer for 
\"06da0a8fe80c916c1a69d56caf4ef4d847df7ba5a8be2a895ee93133dcff286a\"" May 15 00:07:19.287632 containerd[1475]: time="2025-05-15T00:07:19.287590614Z" level=info msg="connecting to shim 06da0a8fe80c916c1a69d56caf4ef4d847df7ba5a8be2a895ee93133dcff286a" address="unix:///run/containerd/s/24c7f469213967a587c876358109a8632b8321481b517fd6fdb311c2e92c22a6" protocol=ttrpc version=3 May 15 00:07:19.322223 systemd[1]: Started cri-containerd-06da0a8fe80c916c1a69d56caf4ef4d847df7ba5a8be2a895ee93133dcff286a.scope - libcontainer container 06da0a8fe80c916c1a69d56caf4ef4d847df7ba5a8be2a895ee93133dcff286a. May 15 00:07:19.420830 containerd[1475]: time="2025-05-15T00:07:19.420763357Z" level=info msg="StartContainer for \"06da0a8fe80c916c1a69d56caf4ef4d847df7ba5a8be2a895ee93133dcff286a\" returns successfully" May 15 00:07:19.549847 kubelet[2595]: E0515 00:07:19.549609 2595 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8zzgl" podUID="b968e49b-b6e1-4eac-b633-cad76111fc0d" May 15 00:07:19.642397 kubelet[2595]: E0515 00:07:19.642361 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:07:19.675248 kubelet[2595]: I0515 00:07:19.674537 2595 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5b9f44dbb6-dmlf8" podStartSLOduration=1.632693568 podStartE2EDuration="4.674499107s" podCreationTimestamp="2025-05-15 00:07:15 +0000 UTC" firstStartedPulling="2025-05-15 00:07:16.189937491 +0000 UTC m=+18.715612550" lastFinishedPulling="2025-05-15 00:07:19.23174303 +0000 UTC m=+21.757418089" observedRunningTime="2025-05-15 00:07:19.672379568 +0000 UTC m=+22.198054627" watchObservedRunningTime="2025-05-15 
00:07:19.674499107 +0000 UTC m=+22.200174285" May 15 00:07:19.708387 kubelet[2595]: E0515 00:07:19.708350 2595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:07:19.708387 kubelet[2595]: W0515 00:07:19.708374 2595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:07:19.708387 kubelet[2595]: E0515 00:07:19.708393 2595 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:07:19.708966 kubelet[2595]: E0515 00:07:19.708829 2595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:07:19.708966 kubelet[2595]: W0515 00:07:19.708846 2595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:07:19.708966 kubelet[2595]: E0515 00:07:19.708863 2595 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:07:19.816412 kubelet[2595]: E0515 00:07:19.816304 2595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:07:19.816412 kubelet[2595]: W0515 00:07:19.816318 2595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:07:19.816412 kubelet[2595]: E0515 00:07:19.816330 2595 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:07:20.568213 containerd[1475]: time="2025-05-15T00:07:20.568166800Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:07:20.571025 containerd[1475]: time="2025-05-15T00:07:20.568799643Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" May 15 00:07:20.571025 containerd[1475]: time="2025-05-15T00:07:20.569720751Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:07:20.572200 containerd[1475]: time="2025-05-15T00:07:20.571887867Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:07:20.572498 containerd[1475]: time="2025-05-15T00:07:20.572469639Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 1.339825348s" May 15 00:07:20.572601 containerd[1475]: time="2025-05-15T00:07:20.572583538Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 15 00:07:20.575115 containerd[1475]: time="2025-05-15T00:07:20.575076753Z" level=info msg="CreateContainer within sandbox \"782b0d5276213e3dfb84ee381b118a6ac5458f40e2b2f1e2e3977b8c4c1a6b24\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 15 00:07:20.585301 containerd[1475]: time="2025-05-15T00:07:20.582867182Z" level=info msg="Container 8c80b8e07fd6cfa02bac89ee63649b5e87e36ee281a538c4ec11319d466edff4: CDI devices from CRI Config.CDIDevices: []" May 15 00:07:20.590727 containerd[1475]: time="2025-05-15T00:07:20.590682407Z" level=info msg="CreateContainer within sandbox \"782b0d5276213e3dfb84ee381b118a6ac5458f40e2b2f1e2e3977b8c4c1a6b24\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8c80b8e07fd6cfa02bac89ee63649b5e87e36ee281a538c4ec11319d466edff4\"" May 15 00:07:20.591402 containerd[1475]: time="2025-05-15T00:07:20.591252101Z" level=info msg="StartContainer for \"8c80b8e07fd6cfa02bac89ee63649b5e87e36ee281a538c4ec11319d466edff4\"" May 15 00:07:20.592698 containerd[1475]: time="2025-05-15T00:07:20.592668557Z" level=info msg="connecting to shim 8c80b8e07fd6cfa02bac89ee63649b5e87e36ee281a538c4ec11319d466edff4" address="unix:///run/containerd/s/0cce4d12f4b3ad6f40562b9caeb078f5fee816656a2c893e960189a986ff08ce" protocol=ttrpc version=3 May 15 00:07:20.618176 systemd[1]: Started cri-containerd-8c80b8e07fd6cfa02bac89ee63649b5e87e36ee281a538c4ec11319d466edff4.scope - libcontainer container 
8c80b8e07fd6cfa02bac89ee63649b5e87e36ee281a538c4ec11319d466edff4. May 15 00:07:20.646862 kubelet[2595]: I0515 00:07:20.646662 2595 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 00:07:20.647233 kubelet[2595]: E0515 00:07:20.647161 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:07:20.659696 containerd[1475]: time="2025-05-15T00:07:20.658861948Z" level=info msg="StartContainer for \"8c80b8e07fd6cfa02bac89ee63649b5e87e36ee281a538c4ec11319d466edff4\" returns successfully" May 15 00:07:20.695573 systemd[1]: cri-containerd-8c80b8e07fd6cfa02bac89ee63649b5e87e36ee281a538c4ec11319d466edff4.scope: Deactivated successfully. May 15 00:07:20.697207 systemd[1]: cri-containerd-8c80b8e07fd6cfa02bac89ee63649b5e87e36ee281a538c4ec11319d466edff4.scope: Consumed 55ms CPU time, 8.1M memory peak, 6.2M written to disk. May 15 00:07:20.731779 containerd[1475]: time="2025-05-15T00:07:20.731735015Z" level=info msg="received exit event container_id:\"8c80b8e07fd6cfa02bac89ee63649b5e87e36ee281a538c4ec11319d466edff4\" id:\"8c80b8e07fd6cfa02bac89ee63649b5e87e36ee281a538c4ec11319d466edff4\" pid:3234 exited_at:{seconds:1747267640 nanos:707872940}" May 15 00:07:20.732024 containerd[1475]: time="2025-05-15T00:07:20.731837156Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8c80b8e07fd6cfa02bac89ee63649b5e87e36ee281a538c4ec11319d466edff4\" id:\"8c80b8e07fd6cfa02bac89ee63649b5e87e36ee281a538c4ec11319d466edff4\" pid:3234 exited_at:{seconds:1747267640 nanos:707872940}" May 15 00:07:20.762693 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c80b8e07fd6cfa02bac89ee63649b5e87e36ee281a538c4ec11319d466edff4-rootfs.mount: Deactivated successfully. 
May 15 00:07:21.549115 kubelet[2595]: E0515 00:07:21.549047 2595 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8zzgl" podUID="b968e49b-b6e1-4eac-b633-cad76111fc0d" May 15 00:07:21.649214 kubelet[2595]: E0515 00:07:21.649183 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:07:21.650201 containerd[1475]: time="2025-05-15T00:07:21.650171554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 15 00:07:23.548527 kubelet[2595]: E0515 00:07:23.548489 2595 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8zzgl" podUID="b968e49b-b6e1-4eac-b633-cad76111fc0d" May 15 00:07:25.124926 containerd[1475]: time="2025-05-15T00:07:25.124870756Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:07:25.125394 containerd[1475]: time="2025-05-15T00:07:25.125333889Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" May 15 00:07:25.126520 containerd[1475]: time="2025-05-15T00:07:25.126488580Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:07:25.128399 containerd[1475]: time="2025-05-15T00:07:25.128343429Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:07:25.129470 containerd[1475]: time="2025-05-15T00:07:25.129188419Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 3.47898159s" May 15 00:07:25.129470 containerd[1475]: time="2025-05-15T00:07:25.129219537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" May 15 00:07:25.131390 containerd[1475]: time="2025-05-15T00:07:25.131362129Z" level=info msg="CreateContainer within sandbox \"782b0d5276213e3dfb84ee381b118a6ac5458f40e2b2f1e2e3977b8c4c1a6b24\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 15 00:07:25.151235 containerd[1475]: time="2025-05-15T00:07:25.150027095Z" level=info msg="Container 757d42d604c1bd9ae27abbb57050a6c580c24419f5e695e3377e4fe899cdc8ac: CDI devices from CRI Config.CDIDevices: []" May 15 00:07:25.158175 containerd[1475]: time="2025-05-15T00:07:25.158141011Z" level=info msg="CreateContainer within sandbox \"782b0d5276213e3dfb84ee381b118a6ac5458f40e2b2f1e2e3977b8c4c1a6b24\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"757d42d604c1bd9ae27abbb57050a6c580c24419f5e695e3377e4fe899cdc8ac\"" May 15 00:07:25.159258 containerd[1475]: time="2025-05-15T00:07:25.158963722Z" level=info msg="StartContainer for \"757d42d604c1bd9ae27abbb57050a6c580c24419f5e695e3377e4fe899cdc8ac\"" May 15 00:07:25.160346 containerd[1475]: time="2025-05-15T00:07:25.160317401Z" level=info msg="connecting to shim 
757d42d604c1bd9ae27abbb57050a6c580c24419f5e695e3377e4fe899cdc8ac" address="unix:///run/containerd/s/0cce4d12f4b3ad6f40562b9caeb078f5fee816656a2c893e960189a986ff08ce" protocol=ttrpc version=3 May 15 00:07:25.191150 systemd[1]: Started cri-containerd-757d42d604c1bd9ae27abbb57050a6c580c24419f5e695e3377e4fe899cdc8ac.scope - libcontainer container 757d42d604c1bd9ae27abbb57050a6c580c24419f5e695e3377e4fe899cdc8ac. May 15 00:07:25.254806 containerd[1475]: time="2025-05-15T00:07:25.254545698Z" level=info msg="StartContainer for \"757d42d604c1bd9ae27abbb57050a6c580c24419f5e695e3377e4fe899cdc8ac\" returns successfully" May 15 00:07:25.548132 kubelet[2595]: E0515 00:07:25.548071 2595 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8zzgl" podUID="b968e49b-b6e1-4eac-b633-cad76111fc0d" May 15 00:07:25.663074 kubelet[2595]: E0515 00:07:25.663025 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:07:25.820560 systemd[1]: cri-containerd-757d42d604c1bd9ae27abbb57050a6c580c24419f5e695e3377e4fe899cdc8ac.scope: Deactivated successfully. May 15 00:07:25.821354 systemd[1]: cri-containerd-757d42d604c1bd9ae27abbb57050a6c580c24419f5e695e3377e4fe899cdc8ac.scope: Consumed 483ms CPU time, 158.3M memory peak, 4K read from disk, 150.3M written to disk. 
May 15 00:07:25.837190 containerd[1475]: time="2025-05-15T00:07:25.837138492Z" level=info msg="received exit event container_id:\"757d42d604c1bd9ae27abbb57050a6c580c24419f5e695e3377e4fe899cdc8ac\" id:\"757d42d604c1bd9ae27abbb57050a6c580c24419f5e695e3377e4fe899cdc8ac\" pid:3293 exited_at:{seconds:1747267645 nanos:836801992}" May 15 00:07:25.838146 containerd[1475]: time="2025-05-15T00:07:25.837271044Z" level=info msg="TaskExit event in podsandbox handler container_id:\"757d42d604c1bd9ae27abbb57050a6c580c24419f5e695e3377e4fe899cdc8ac\" id:\"757d42d604c1bd9ae27abbb57050a6c580c24419f5e695e3377e4fe899cdc8ac\" pid:3293 exited_at:{seconds:1747267645 nanos:836801992}" May 15 00:07:25.863916 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-757d42d604c1bd9ae27abbb57050a6c580c24419f5e695e3377e4fe899cdc8ac-rootfs.mount: Deactivated successfully. May 15 00:07:25.899427 kubelet[2595]: I0515 00:07:25.899394 2595 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 15 00:07:26.005736 systemd[1]: Created slice kubepods-burstable-pod53828117_6bad_4dc9_b17e_eb808c9745f5.slice - libcontainer container kubepods-burstable-pod53828117_6bad_4dc9_b17e_eb808c9745f5.slice. May 15 00:07:26.029337 systemd[1]: Created slice kubepods-besteffort-pod8a32791c_fc18_490f_91e3_aff3170763c0.slice - libcontainer container kubepods-besteffort-pod8a32791c_fc18_490f_91e3_aff3170763c0.slice. May 15 00:07:26.034616 systemd[1]: Created slice kubepods-besteffort-pod1a49a1cf_d7ec_4482_a014_bd9971947209.slice - libcontainer container kubepods-besteffort-pod1a49a1cf_d7ec_4482_a014_bd9971947209.slice. May 15 00:07:26.039996 systemd[1]: Created slice kubepods-besteffort-pode6220ea6_d626_40a5_9d4e_699231369f1b.slice - libcontainer container kubepods-besteffort-pode6220ea6_d626_40a5_9d4e_699231369f1b.slice. 
May 15 00:07:26.045877 systemd[1]: Created slice kubepods-burstable-poddd74038d_ce76_4de0_b632_6c340ef7536b.slice - libcontainer container kubepods-burstable-poddd74038d_ce76_4de0_b632_6c340ef7536b.slice. May 15 00:07:26.142215 kubelet[2595]: I0515 00:07:26.141485 2595 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dd74038d-ce76-4de0-b632-6c340ef7536b-config-volume\") pod \"coredns-6f6b679f8f-lvpbn\" (UID: \"dd74038d-ce76-4de0-b632-6c340ef7536b\") " pod="kube-system/coredns-6f6b679f8f-lvpbn" May 15 00:07:26.142215 kubelet[2595]: I0515 00:07:26.141535 2595 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a49a1cf-d7ec-4482-a014-bd9971947209-tigera-ca-bundle\") pod \"calico-kube-controllers-7947848598-hdlzj\" (UID: \"1a49a1cf-d7ec-4482-a014-bd9971947209\") " pod="calico-system/calico-kube-controllers-7947848598-hdlzj" May 15 00:07:26.142215 kubelet[2595]: I0515 00:07:26.141556 2595 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ld6c\" (UniqueName: \"kubernetes.io/projected/53828117-6bad-4dc9-b17e-eb808c9745f5-kube-api-access-4ld6c\") pod \"coredns-6f6b679f8f-v8qdh\" (UID: \"53828117-6bad-4dc9-b17e-eb808c9745f5\") " pod="kube-system/coredns-6f6b679f8f-v8qdh" May 15 00:07:26.142215 kubelet[2595]: I0515 00:07:26.141574 2595 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmd84\" (UniqueName: \"kubernetes.io/projected/8a32791c-fc18-490f-91e3-aff3170763c0-kube-api-access-bmd84\") pod \"calico-apiserver-5547cb7878-2k2hx\" (UID: \"8a32791c-fc18-490f-91e3-aff3170763c0\") " pod="calico-apiserver/calico-apiserver-5547cb7878-2k2hx" May 15 00:07:26.142215 kubelet[2595]: I0515 00:07:26.141850 2595 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mzdk\" (UniqueName: \"kubernetes.io/projected/e6220ea6-d626-40a5-9d4e-699231369f1b-kube-api-access-7mzdk\") pod \"calico-apiserver-5547cb7878-sn2c9\" (UID: \"e6220ea6-d626-40a5-9d4e-699231369f1b\") " pod="calico-apiserver/calico-apiserver-5547cb7878-sn2c9" May 15 00:07:26.142598 kubelet[2595]: I0515 00:07:26.141892 2595 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8a32791c-fc18-490f-91e3-aff3170763c0-calico-apiserver-certs\") pod \"calico-apiserver-5547cb7878-2k2hx\" (UID: \"8a32791c-fc18-490f-91e3-aff3170763c0\") " pod="calico-apiserver/calico-apiserver-5547cb7878-2k2hx" May 15 00:07:26.142598 kubelet[2595]: I0515 00:07:26.141914 2595 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdxnz\" (UniqueName: \"kubernetes.io/projected/1a49a1cf-d7ec-4482-a014-bd9971947209-kube-api-access-fdxnz\") pod \"calico-kube-controllers-7947848598-hdlzj\" (UID: \"1a49a1cf-d7ec-4482-a014-bd9971947209\") " pod="calico-system/calico-kube-controllers-7947848598-hdlzj" May 15 00:07:26.142598 kubelet[2595]: I0515 00:07:26.141936 2595 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53828117-6bad-4dc9-b17e-eb808c9745f5-config-volume\") pod \"coredns-6f6b679f8f-v8qdh\" (UID: \"53828117-6bad-4dc9-b17e-eb808c9745f5\") " pod="kube-system/coredns-6f6b679f8f-v8qdh" May 15 00:07:26.142598 kubelet[2595]: I0515 00:07:26.141952 2595 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsmvh\" (UniqueName: \"kubernetes.io/projected/dd74038d-ce76-4de0-b632-6c340ef7536b-kube-api-access-rsmvh\") pod \"coredns-6f6b679f8f-lvpbn\" (UID: \"dd74038d-ce76-4de0-b632-6c340ef7536b\") " 
pod="kube-system/coredns-6f6b679f8f-lvpbn" May 15 00:07:26.142598 kubelet[2595]: I0515 00:07:26.142009 2595 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e6220ea6-d626-40a5-9d4e-699231369f1b-calico-apiserver-certs\") pod \"calico-apiserver-5547cb7878-sn2c9\" (UID: \"e6220ea6-d626-40a5-9d4e-699231369f1b\") " pod="calico-apiserver/calico-apiserver-5547cb7878-sn2c9" May 15 00:07:26.312442 kubelet[2595]: E0515 00:07:26.312400 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:07:26.313502 containerd[1475]: time="2025-05-15T00:07:26.313459106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-v8qdh,Uid:53828117-6bad-4dc9-b17e-eb808c9745f5,Namespace:kube-system,Attempt:0,}" May 15 00:07:26.333473 containerd[1475]: time="2025-05-15T00:07:26.332928017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5547cb7878-2k2hx,Uid:8a32791c-fc18-490f-91e3-aff3170763c0,Namespace:calico-apiserver,Attempt:0,}" May 15 00:07:26.354608 kubelet[2595]: E0515 00:07:26.354574 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:07:26.356714 containerd[1475]: time="2025-05-15T00:07:26.355225123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5547cb7878-sn2c9,Uid:e6220ea6-d626-40a5-9d4e-699231369f1b,Namespace:calico-apiserver,Attempt:0,}" May 15 00:07:26.373913 containerd[1475]: time="2025-05-15T00:07:26.368277926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7947848598-hdlzj,Uid:1a49a1cf-d7ec-4482-a014-bd9971947209,Namespace:calico-system,Attempt:0,}" May 15 00:07:26.375476 containerd[1475]: 
time="2025-05-15T00:07:26.375433631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lvpbn,Uid:dd74038d-ce76-4de0-b632-6c340ef7536b,Namespace:kube-system,Attempt:0,}" May 15 00:07:26.594404 systemd[1]: Started sshd@9-10.0.0.138:22-10.0.0.1:58592.service - OpenSSH per-connection server daemon (10.0.0.1:58592). May 15 00:07:26.663307 sshd[3393]: Accepted publickey for core from 10.0.0.1 port 58592 ssh2: RSA SHA256:JGpbUaP68o+c7lzfweYm3nii0Q3Tlu54nJ930R9/dFs May 15 00:07:26.665857 sshd-session[3393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:07:26.672154 systemd-logind[1452]: New session 10 of user core. May 15 00:07:26.676391 kubelet[2595]: E0515 00:07:26.675226 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:07:26.679348 containerd[1475]: time="2025-05-15T00:07:26.676427248Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 15 00:07:26.678704 systemd[1]: Started session-10.scope - Session 10 of User core. 
May 15 00:07:26.689988 containerd[1475]: time="2025-05-15T00:07:26.689924385Z" level=error msg="Failed to destroy network for sandbox \"1c47f7f864f8e345bdbaaacbcc08280587364dfe0e68d8b182eeb9e1868dd65f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:07:26.691774 containerd[1475]: time="2025-05-15T00:07:26.691076958Z" level=error msg="Failed to destroy network for sandbox \"75bc7db846c054d7574c19e329b207f01a029739ff6f70164443ffcae1597f2a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:07:26.694093 containerd[1475]: time="2025-05-15T00:07:26.692953889Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5547cb7878-sn2c9,Uid:e6220ea6-d626-40a5-9d4e-699231369f1b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c47f7f864f8e345bdbaaacbcc08280587364dfe0e68d8b182eeb9e1868dd65f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:07:26.695412 containerd[1475]: time="2025-05-15T00:07:26.695114404Z" level=error msg="Failed to destroy network for sandbox \"cfa13618b9879e8f924b6c1afe5a7d47aacf297c1761008bcb65da20bf3dd253\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:07:26.696596 containerd[1475]: time="2025-05-15T00:07:26.696564080Z" level=error msg="Failed to destroy network for sandbox \"7a166202c07294139a56a062d0fedf0ac16adae1afc796241adfb9af154dc53a\"" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:07:26.697414 containerd[1475]: time="2025-05-15T00:07:26.697376793Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5547cb7878-2k2hx,Uid:8a32791c-fc18-490f-91e3-aff3170763c0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"75bc7db846c054d7574c19e329b207f01a029739ff6f70164443ffcae1597f2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:07:26.698747 containerd[1475]: time="2025-05-15T00:07:26.698660278Z" level=error msg="Failed to destroy network for sandbox \"e4b673b38894a8ee9577b5b2c44465794a8ec5c19fb1bb024c55c5d2e78c023e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:07:26.700650 containerd[1475]: time="2025-05-15T00:07:26.700612685Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-v8qdh,Uid:53828117-6bad-4dc9-b17e-eb808c9745f5,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a166202c07294139a56a062d0fedf0ac16adae1afc796241adfb9af154dc53a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:07:26.701411 containerd[1475]: time="2025-05-15T00:07:26.701356762Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7947848598-hdlzj,Uid:1a49a1cf-d7ec-4482-a014-bd9971947209,Namespace:calico-system,Attempt:0,} failed, error" error="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"cfa13618b9879e8f924b6c1afe5a7d47aacf297c1761008bcb65da20bf3dd253\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:07:26.702123 kubelet[2595]: E0515 00:07:26.702016 2595 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75bc7db846c054d7574c19e329b207f01a029739ff6f70164443ffcae1597f2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:07:26.702123 kubelet[2595]: E0515 00:07:26.702020 2595 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfa13618b9879e8f924b6c1afe5a7d47aacf297c1761008bcb65da20bf3dd253\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:07:26.702413 kubelet[2595]: E0515 00:07:26.702207 2595 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75bc7db846c054d7574c19e329b207f01a029739ff6f70164443ffcae1597f2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5547cb7878-2k2hx" May 15 00:07:26.702413 kubelet[2595]: E0515 00:07:26.702237 2595 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75bc7db846c054d7574c19e329b207f01a029739ff6f70164443ffcae1597f2a\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5547cb7878-2k2hx"
May 15 00:07:26.702413 kubelet[2595]: E0515 00:07:26.702237 2595 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfa13618b9879e8f924b6c1afe5a7d47aacf297c1761008bcb65da20bf3dd253\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7947848598-hdlzj"
May 15 00:07:26.702413 kubelet[2595]: E0515 00:07:26.702261 2595 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfa13618b9879e8f924b6c1afe5a7d47aacf297c1761008bcb65da20bf3dd253\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7947848598-hdlzj"
May 15 00:07:26.702540 containerd[1475]: time="2025-05-15T00:07:26.702127757Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lvpbn,Uid:dd74038d-ce76-4de0-b632-6c340ef7536b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4b673b38894a8ee9577b5b2c44465794a8ec5c19fb1bb024c55c5d2e78c023e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 00:07:26.702591 kubelet[2595]: E0515 00:07:26.702278 2595 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5547cb7878-2k2hx_calico-apiserver(8a32791c-fc18-490f-91e3-aff3170763c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5547cb7878-2k2hx_calico-apiserver(8a32791c-fc18-490f-91e3-aff3170763c0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"75bc7db846c054d7574c19e329b207f01a029739ff6f70164443ffcae1597f2a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5547cb7878-2k2hx" podUID="8a32791c-fc18-490f-91e3-aff3170763c0"
May 15 00:07:26.702591 kubelet[2595]: E0515 00:07:26.702294 2595 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7947848598-hdlzj_calico-system(1a49a1cf-d7ec-4482-a014-bd9971947209)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7947848598-hdlzj_calico-system(1a49a1cf-d7ec-4482-a014-bd9971947209)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cfa13618b9879e8f924b6c1afe5a7d47aacf297c1761008bcb65da20bf3dd253\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7947848598-hdlzj" podUID="1a49a1cf-d7ec-4482-a014-bd9971947209"
May 15 00:07:26.703304 kubelet[2595]: E0515 00:07:26.702018 2595 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a166202c07294139a56a062d0fedf0ac16adae1afc796241adfb9af154dc53a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 00:07:26.703304 kubelet[2595]: E0515 00:07:26.702327 2595 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c47f7f864f8e345bdbaaacbcc08280587364dfe0e68d8b182eeb9e1868dd65f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 00:07:26.703304 kubelet[2595]: E0515 00:07:26.702944 2595 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a166202c07294139a56a062d0fedf0ac16adae1afc796241adfb9af154dc53a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-v8qdh"
May 15 00:07:26.703304 kubelet[2595]: E0515 00:07:26.702985 2595 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a166202c07294139a56a062d0fedf0ac16adae1afc796241adfb9af154dc53a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-v8qdh"
May 15 00:07:26.703526 kubelet[2595]: E0515 00:07:26.702993 2595 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c47f7f864f8e345bdbaaacbcc08280587364dfe0e68d8b182eeb9e1868dd65f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5547cb7878-sn2c9"
May 15 00:07:26.703526 kubelet[2595]: E0515 00:07:26.703015 2595 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c47f7f864f8e345bdbaaacbcc08280587364dfe0e68d8b182eeb9e1868dd65f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5547cb7878-sn2c9"
May 15 00:07:26.703526 kubelet[2595]: E0515 00:07:26.703028 2595 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-v8qdh_kube-system(53828117-6bad-4dc9-b17e-eb808c9745f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-v8qdh_kube-system(53828117-6bad-4dc9-b17e-eb808c9745f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a166202c07294139a56a062d0fedf0ac16adae1afc796241adfb9af154dc53a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-v8qdh" podUID="53828117-6bad-4dc9-b17e-eb808c9745f5"
May 15 00:07:26.703666 kubelet[2595]: E0515 00:07:26.703060 2595 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5547cb7878-sn2c9_calico-apiserver(e6220ea6-d626-40a5-9d4e-699231369f1b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5547cb7878-sn2c9_calico-apiserver(e6220ea6-d626-40a5-9d4e-699231369f1b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c47f7f864f8e345bdbaaacbcc08280587364dfe0e68d8b182eeb9e1868dd65f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5547cb7878-sn2c9" podUID="e6220ea6-d626-40a5-9d4e-699231369f1b"
May 15 00:07:26.703666 kubelet[2595]: E0515 00:07:26.703208 2595 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4b673b38894a8ee9577b5b2c44465794a8ec5c19fb1bb024c55c5d2e78c023e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 00:07:26.703666 kubelet[2595]: E0515 00:07:26.703232 2595 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4b673b38894a8ee9577b5b2c44465794a8ec5c19fb1bb024c55c5d2e78c023e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-lvpbn"
May 15 00:07:26.703771 kubelet[2595]: E0515 00:07:26.703247 2595 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4b673b38894a8ee9577b5b2c44465794a8ec5c19fb1bb024c55c5d2e78c023e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-lvpbn"
May 15 00:07:26.703771 kubelet[2595]: E0515 00:07:26.703284 2595 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-lvpbn_kube-system(dd74038d-ce76-4de0-b632-6c340ef7536b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-lvpbn_kube-system(dd74038d-ce76-4de0-b632-6c340ef7536b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e4b673b38894a8ee9577b5b2c44465794a8ec5c19fb1bb024c55c5d2e78c023e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-lvpbn" podUID="dd74038d-ce76-4de0-b632-6c340ef7536b"
May 15 00:07:26.814649 sshd[3514]: Connection closed by 10.0.0.1 port 58592
May 15 00:07:26.816741 sshd-session[3393]: pam_unix(sshd:session): session closed for user core
May 15 00:07:26.823111 systemd[1]: sshd@9-10.0.0.138:22-10.0.0.1:58592.service: Deactivated successfully.
May 15 00:07:26.827417 systemd[1]: session-10.scope: Deactivated successfully.
May 15 00:07:26.828705 systemd-logind[1452]: Session 10 logged out. Waiting for processes to exit.
May 15 00:07:26.829493 systemd-logind[1452]: Removed session 10.
May 15 00:07:27.556719 systemd[1]: Created slice kubepods-besteffort-podb968e49b_b6e1_4eac_b633_cad76111fc0d.slice - libcontainer container kubepods-besteffort-podb968e49b_b6e1_4eac_b633_cad76111fc0d.slice.
May 15 00:07:27.558591 containerd[1475]: time="2025-05-15T00:07:27.558557804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8zzgl,Uid:b968e49b-b6e1-4eac-b633-cad76111fc0d,Namespace:calico-system,Attempt:0,}"
May 15 00:07:27.603403 containerd[1475]: time="2025-05-15T00:07:27.603348957Z" level=error msg="Failed to destroy network for sandbox \"7589db7d1be75708ecc2285a3f2a35c80b6a4fdd4251fb9b1f44255111c03831\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 00:07:27.604514 containerd[1475]: time="2025-05-15T00:07:27.604460454Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8zzgl,Uid:b968e49b-b6e1-4eac-b633-cad76111fc0d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7589db7d1be75708ecc2285a3f2a35c80b6a4fdd4251fb9b1f44255111c03831\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 00:07:27.604720 kubelet[2595]: E0515 00:07:27.604673 2595 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7589db7d1be75708ecc2285a3f2a35c80b6a4fdd4251fb9b1f44255111c03831\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 00:07:27.604766 kubelet[2595]: E0515 00:07:27.604737 2595 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7589db7d1be75708ecc2285a3f2a35c80b6a4fdd4251fb9b1f44255111c03831\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8zzgl"
May 15 00:07:27.604800 kubelet[2595]: E0515 00:07:27.604756 2595 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7589db7d1be75708ecc2285a3f2a35c80b6a4fdd4251fb9b1f44255111c03831\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8zzgl"
May 15 00:07:27.604857 kubelet[2595]: E0515 00:07:27.604823 2595 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8zzgl_calico-system(b968e49b-b6e1-4eac-b633-cad76111fc0d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8zzgl_calico-system(b968e49b-b6e1-4eac-b633-cad76111fc0d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7589db7d1be75708ecc2285a3f2a35c80b6a4fdd4251fb9b1f44255111c03831\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8zzgl" podUID="b968e49b-b6e1-4eac-b633-cad76111fc0d"
May 15 00:07:27.606014 systemd[1]: run-netns-cni\x2ddf7be54c\x2da0ca\x2d36b0\x2d0a56\x2d424aee77950c.mount: Deactivated successfully.
May 15 00:07:30.546368 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1623862404.mount: Deactivated successfully.
May 15 00:07:30.643085 containerd[1475]: time="2025-05-15T00:07:30.643033009Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:07:30.645832 containerd[1475]: time="2025-05-15T00:07:30.645648073Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893"
May 15 00:07:30.647104 containerd[1475]: time="2025-05-15T00:07:30.646947846Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:07:30.648616 containerd[1475]: time="2025-05-15T00:07:30.648591400Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:07:30.649188 containerd[1475]: time="2025-05-15T00:07:30.649158491Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 3.972510495s"
May 15 00:07:30.649250 containerd[1475]: time="2025-05-15T00:07:30.649191569Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\""
May 15 00:07:30.659340 containerd[1475]: time="2025-05-15T00:07:30.659019299Z" level=info msg="CreateContainer within sandbox \"782b0d5276213e3dfb84ee381b118a6ac5458f40e2b2f1e2e3977b8c4c1a6b24\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
May 15 00:07:30.679648 containerd[1475]: time="2025-05-15T00:07:30.678208344Z" level=info msg="Container 46113305547cc74bbabf2bdf704ffb9a696d66d2f7386401c79f7a625e5dcf97: CDI devices from CRI Config.CDIDevices: []"
May 15 00:07:30.698603 containerd[1475]: time="2025-05-15T00:07:30.698540849Z" level=info msg="CreateContainer within sandbox \"782b0d5276213e3dfb84ee381b118a6ac5458f40e2b2f1e2e3977b8c4c1a6b24\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"46113305547cc74bbabf2bdf704ffb9a696d66d2f7386401c79f7a625e5dcf97\""
May 15 00:07:30.699132 containerd[1475]: time="2025-05-15T00:07:30.699088140Z" level=info msg="StartContainer for \"46113305547cc74bbabf2bdf704ffb9a696d66d2f7386401c79f7a625e5dcf97\""
May 15 00:07:30.700681 containerd[1475]: time="2025-05-15T00:07:30.700647059Z" level=info msg="connecting to shim 46113305547cc74bbabf2bdf704ffb9a696d66d2f7386401c79f7a625e5dcf97" address="unix:///run/containerd/s/0cce4d12f4b3ad6f40562b9caeb078f5fee816656a2c893e960189a986ff08ce" protocol=ttrpc version=3
May 15 00:07:30.724255 systemd[1]: Started cri-containerd-46113305547cc74bbabf2bdf704ffb9a696d66d2f7386401c79f7a625e5dcf97.scope - libcontainer container 46113305547cc74bbabf2bdf704ffb9a696d66d2f7386401c79f7a625e5dcf97.
May 15 00:07:30.881478 containerd[1475]: time="2025-05-15T00:07:30.880796232Z" level=info msg="StartContainer for \"46113305547cc74bbabf2bdf704ffb9a696d66d2f7386401c79f7a625e5dcf97\" returns successfully"
May 15 00:07:31.018618 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
May 15 00:07:31.018748 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
May 15 00:07:31.688999 kubelet[2595]: E0515 00:07:31.688949 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:07:31.707030 kubelet[2595]: I0515 00:07:31.706248 2595 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-wrhhc" podStartSLOduration=2.250162363 podStartE2EDuration="16.706230759s" podCreationTimestamp="2025-05-15 00:07:15 +0000 UTC" firstStartedPulling="2025-05-15 00:07:16.193704503 +0000 UTC m=+18.719379562" lastFinishedPulling="2025-05-15 00:07:30.649772899 +0000 UTC m=+33.175447958" observedRunningTime="2025-05-15 00:07:31.705311206 +0000 UTC m=+34.230986265" watchObservedRunningTime="2025-05-15 00:07:31.706230759 +0000 UTC m=+34.231905818"
May 15 00:07:31.803773 containerd[1475]: time="2025-05-15T00:07:31.803729158Z" level=info msg="TaskExit event in podsandbox handler container_id:\"46113305547cc74bbabf2bdf704ffb9a696d66d2f7386401c79f7a625e5dcf97\" id:\"54c4038b826e7d2ba9fb08247a793409541ee3e1bc6500fadde51d41a97e97b3\" pid:3653 exit_status:1 exited_at:{seconds:1747267651 nanos:803394695}"
May 15 00:07:31.832885 systemd[1]: Started sshd@10-10.0.0.138:22-10.0.0.1:58608.service - OpenSSH per-connection server daemon (10.0.0.1:58608).
May 15 00:07:31.900551 sshd[3666]: Accepted publickey for core from 10.0.0.1 port 58608 ssh2: RSA SHA256:JGpbUaP68o+c7lzfweYm3nii0Q3Tlu54nJ930R9/dFs
May 15 00:07:31.902504 sshd-session[3666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:07:31.906688 systemd-logind[1452]: New session 11 of user core.
May 15 00:07:31.915182 systemd[1]: Started session-11.scope - Session 11 of User core.
May 15 00:07:32.038596 sshd[3668]: Connection closed by 10.0.0.1 port 58608
May 15 00:07:32.039324 sshd-session[3666]: pam_unix(sshd:session): session closed for user core
May 15 00:07:32.044177 systemd-logind[1452]: Session 11 logged out. Waiting for processes to exit.
May 15 00:07:32.044923 systemd[1]: sshd@10-10.0.0.138:22-10.0.0.1:58608.service: Deactivated successfully.
May 15 00:07:32.047783 systemd[1]: session-11.scope: Deactivated successfully.
May 15 00:07:32.049934 systemd-logind[1452]: Removed session 11.
May 15 00:07:32.690742 kubelet[2595]: E0515 00:07:32.690315 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:07:32.753487 containerd[1475]: time="2025-05-15T00:07:32.753290181Z" level=info msg="TaskExit event in podsandbox handler container_id:\"46113305547cc74bbabf2bdf704ffb9a696d66d2f7386401c79f7a625e5dcf97\" id:\"586ecd9ca0fc25ed66b61358a6881ba072a68828c8b77b54c1020d48f50b6174\" pid:3797 exit_status:1 exited_at:{seconds:1747267652 nanos:752754887}"
May 15 00:07:33.692093 kubelet[2595]: E0515 00:07:33.692063 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:07:33.748916 containerd[1475]: time="2025-05-15T00:07:33.748525064Z" level=info msg="TaskExit event in podsandbox handler container_id:\"46113305547cc74bbabf2bdf704ffb9a696d66d2f7386401c79f7a625e5dcf97\" id:\"a4a60cf9b22b785fb1f6bfa13ea28383a66c4300f60e61c19383e6460ed433f5\" pid:3849 exit_status:1 exited_at:{seconds:1747267653 nanos:748172521}"
May 15 00:07:37.053546 systemd[1]: Started sshd@11-10.0.0.138:22-10.0.0.1:48586.service - OpenSSH per-connection server daemon (10.0.0.1:48586).
May 15 00:07:37.117955 sshd[3941]: Accepted publickey for core from 10.0.0.1 port 48586 ssh2: RSA SHA256:JGpbUaP68o+c7lzfweYm3nii0Q3Tlu54nJ930R9/dFs
May 15 00:07:37.119441 sshd-session[3941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:07:37.124696 systemd-logind[1452]: New session 12 of user core.
May 15 00:07:37.133273 systemd[1]: Started session-12.scope - Session 12 of User core.
May 15 00:07:37.262005 sshd[3944]: Connection closed by 10.0.0.1 port 48586
May 15 00:07:37.262954 sshd-session[3941]: pam_unix(sshd:session): session closed for user core
May 15 00:07:37.273631 systemd[1]: sshd@11-10.0.0.138:22-10.0.0.1:48586.service: Deactivated successfully.
May 15 00:07:37.275833 systemd[1]: session-12.scope: Deactivated successfully.
May 15 00:07:37.276724 systemd-logind[1452]: Session 12 logged out. Waiting for processes to exit.
May 15 00:07:37.279236 systemd[1]: Started sshd@12-10.0.0.138:22-10.0.0.1:48590.service - OpenSSH per-connection server daemon (10.0.0.1:48590).
May 15 00:07:37.280148 systemd-logind[1452]: Removed session 12.
May 15 00:07:37.335023 sshd[3958]: Accepted publickey for core from 10.0.0.1 port 48590 ssh2: RSA SHA256:JGpbUaP68o+c7lzfweYm3nii0Q3Tlu54nJ930R9/dFs
May 15 00:07:37.336206 sshd-session[3958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:07:37.341123 systemd-logind[1452]: New session 13 of user core.
May 15 00:07:37.348139 systemd[1]: Started session-13.scope - Session 13 of User core.
May 15 00:07:37.510954 sshd[3963]: Connection closed by 10.0.0.1 port 48590
May 15 00:07:37.511639 sshd-session[3958]: pam_unix(sshd:session): session closed for user core
May 15 00:07:37.523366 systemd[1]: sshd@12-10.0.0.138:22-10.0.0.1:48590.service: Deactivated successfully.
May 15 00:07:37.527236 systemd[1]: session-13.scope: Deactivated successfully.
May 15 00:07:37.534523 systemd-logind[1452]: Session 13 logged out. Waiting for processes to exit.
May 15 00:07:37.537246 systemd[1]: Started sshd@13-10.0.0.138:22-10.0.0.1:48600.service - OpenSSH per-connection server daemon (10.0.0.1:48600).
May 15 00:07:37.543137 systemd-logind[1452]: Removed session 13.
May 15 00:07:37.563199 kubelet[2595]: E0515 00:07:37.562659 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:07:37.563567 containerd[1475]: time="2025-05-15T00:07:37.563320162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-v8qdh,Uid:53828117-6bad-4dc9-b17e-eb808c9745f5,Namespace:kube-system,Attempt:0,}"
May 15 00:07:37.580502 containerd[1475]: time="2025-05-15T00:07:37.580419309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5547cb7878-2k2hx,Uid:8a32791c-fc18-490f-91e3-aff3170763c0,Namespace:calico-apiserver,Attempt:0,}"
May 15 00:07:37.601075 sshd[3974]: Accepted publickey for core from 10.0.0.1 port 48600 ssh2: RSA SHA256:JGpbUaP68o+c7lzfweYm3nii0Q3Tlu54nJ930R9/dFs
May 15 00:07:37.602838 sshd-session[3974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:07:37.611566 systemd-logind[1452]: New session 14 of user core.
May 15 00:07:37.620205 systemd[1]: Started session-14.scope - Session 14 of User core.
May 15 00:07:37.827892 sshd[4008]: Connection closed by 10.0.0.1 port 48600
May 15 00:07:37.832213 sshd-session[3974]: pam_unix(sshd:session): session closed for user core
May 15 00:07:37.837051 systemd[1]: sshd@13-10.0.0.138:22-10.0.0.1:48600.service: Deactivated successfully.
May 15 00:07:37.843285 systemd[1]: session-14.scope: Deactivated successfully.
May 15 00:07:37.846867 systemd-logind[1452]: Session 14 logged out. Waiting for processes to exit.
May 15 00:07:37.848277 systemd-logind[1452]: Removed session 14.
May 15 00:07:37.955959 systemd-networkd[1400]: cali2157054a75b: Link UP
May 15 00:07:37.956993 systemd-networkd[1400]: cali2157054a75b: Gained carrier
May 15 00:07:37.971025 containerd[1475]: 2025-05-15 00:07:37.616 [INFO][3977] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
May 15 00:07:37.971025 containerd[1475]: 2025-05-15 00:07:37.715 [INFO][3977] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--v8qdh-eth0 coredns-6f6b679f8f- kube-system 53828117-6bad-4dc9-b17e-eb808c9745f5 700 0 2025-05-15 00:07:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-v8qdh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2157054a75b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="f21dceb1fa4c1ec8421024fdbbe98610c685579122f048e511bf3e36dd3b1c48" Namespace="kube-system" Pod="coredns-6f6b679f8f-v8qdh" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--v8qdh-"
May 15 00:07:37.971025 containerd[1475]: 2025-05-15 00:07:37.715 [INFO][3977] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f21dceb1fa4c1ec8421024fdbbe98610c685579122f048e511bf3e36dd3b1c48" Namespace="kube-system" Pod="coredns-6f6b679f8f-v8qdh" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--v8qdh-eth0"
May 15 00:07:37.971025 containerd[1475]: 2025-05-15 00:07:37.890 [INFO][4037] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f21dceb1fa4c1ec8421024fdbbe98610c685579122f048e511bf3e36dd3b1c48" HandleID="k8s-pod-network.f21dceb1fa4c1ec8421024fdbbe98610c685579122f048e511bf3e36dd3b1c48" Workload="localhost-k8s-coredns--6f6b679f8f--v8qdh-eth0"
May 15 00:07:37.971232 containerd[1475]: 2025-05-15 00:07:37.908 [INFO][4037] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f21dceb1fa4c1ec8421024fdbbe98610c685579122f048e511bf3e36dd3b1c48" HandleID="k8s-pod-network.f21dceb1fa4c1ec8421024fdbbe98610c685579122f048e511bf3e36dd3b1c48" Workload="localhost-k8s-coredns--6f6b679f8f--v8qdh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003096f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-v8qdh", "timestamp":"2025-05-15 00:07:37.890547894 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
May 15 00:07:37.971232 containerd[1475]: 2025-05-15 00:07:37.908 [INFO][4037] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 15 00:07:37.971232 containerd[1475]: 2025-05-15 00:07:37.908 [INFO][4037] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 15 00:07:37.971232 containerd[1475]: 2025-05-15 00:07:37.908 [INFO][4037] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
May 15 00:07:37.971232 containerd[1475]: 2025-05-15 00:07:37.912 [INFO][4037] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f21dceb1fa4c1ec8421024fdbbe98610c685579122f048e511bf3e36dd3b1c48" host="localhost"
May 15 00:07:37.971232 containerd[1475]: 2025-05-15 00:07:37.919 [INFO][4037] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
May 15 00:07:37.971232 containerd[1475]: 2025-05-15 00:07:37.923 [INFO][4037] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
May 15 00:07:37.971232 containerd[1475]: 2025-05-15 00:07:37.925 [INFO][4037] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
May 15 00:07:37.971232 containerd[1475]: 2025-05-15 00:07:37.927 [INFO][4037] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
May 15 00:07:37.971232 containerd[1475]: 2025-05-15 00:07:37.927 [INFO][4037] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f21dceb1fa4c1ec8421024fdbbe98610c685579122f048e511bf3e36dd3b1c48" host="localhost"
May 15 00:07:37.971458 containerd[1475]: 2025-05-15 00:07:37.930 [INFO][4037] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f21dceb1fa4c1ec8421024fdbbe98610c685579122f048e511bf3e36dd3b1c48
May 15 00:07:37.971458 containerd[1475]: 2025-05-15 00:07:37.934 [INFO][4037] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f21dceb1fa4c1ec8421024fdbbe98610c685579122f048e511bf3e36dd3b1c48" host="localhost"
May 15 00:07:37.971458 containerd[1475]: 2025-05-15 00:07:37.942 [INFO][4037] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.f21dceb1fa4c1ec8421024fdbbe98610c685579122f048e511bf3e36dd3b1c48" host="localhost"
May 15 00:07:37.971458 containerd[1475]: 2025-05-15 00:07:37.942 [INFO][4037] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.f21dceb1fa4c1ec8421024fdbbe98610c685579122f048e511bf3e36dd3b1c48" host="localhost"
May 15 00:07:37.971458 containerd[1475]: 2025-05-15 00:07:37.942 [INFO][4037] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 15 00:07:37.971458 containerd[1475]: 2025-05-15 00:07:37.942 [INFO][4037] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="f21dceb1fa4c1ec8421024fdbbe98610c685579122f048e511bf3e36dd3b1c48" HandleID="k8s-pod-network.f21dceb1fa4c1ec8421024fdbbe98610c685579122f048e511bf3e36dd3b1c48" Workload="localhost-k8s-coredns--6f6b679f8f--v8qdh-eth0"
May 15 00:07:37.971570 containerd[1475]: 2025-05-15 00:07:37.945 [INFO][3977] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f21dceb1fa4c1ec8421024fdbbe98610c685579122f048e511bf3e36dd3b1c48" Namespace="kube-system" Pod="coredns-6f6b679f8f-v8qdh" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--v8qdh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--v8qdh-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"53828117-6bad-4dc9-b17e-eb808c9745f5", ResourceVersion:"700", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 7, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-v8qdh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2157054a75b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
May 15 00:07:37.971629 containerd[1475]: 2025-05-15 00:07:37.945 [INFO][3977] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="f21dceb1fa4c1ec8421024fdbbe98610c685579122f048e511bf3e36dd3b1c48" Namespace="kube-system" Pod="coredns-6f6b679f8f-v8qdh" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--v8qdh-eth0"
May 15 00:07:37.971629 containerd[1475]: 2025-05-15 00:07:37.945 [INFO][3977] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2157054a75b ContainerID="f21dceb1fa4c1ec8421024fdbbe98610c685579122f048e511bf3e36dd3b1c48" Namespace="kube-system" Pod="coredns-6f6b679f8f-v8qdh" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--v8qdh-eth0"
May 15 00:07:37.971629 containerd[1475]: 2025-05-15 00:07:37.957 [INFO][3977] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f21dceb1fa4c1ec8421024fdbbe98610c685579122f048e511bf3e36dd3b1c48" Namespace="kube-system" Pod="coredns-6f6b679f8f-v8qdh" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--v8qdh-eth0"
May 15 00:07:37.971691 containerd[1475]: 2025-05-15 00:07:37.958 [INFO][3977] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f21dceb1fa4c1ec8421024fdbbe98610c685579122f048e511bf3e36dd3b1c48" Namespace="kube-system" Pod="coredns-6f6b679f8f-v8qdh" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--v8qdh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--v8qdh-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"53828117-6bad-4dc9-b17e-eb808c9745f5", ResourceVersion:"700", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 7, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f21dceb1fa4c1ec8421024fdbbe98610c685579122f048e511bf3e36dd3b1c48", Pod:"coredns-6f6b679f8f-v8qdh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2157054a75b", MAC:"02:34:72:d0:4a:5b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
May 15 00:07:37.971691 containerd[1475]: 2025-05-15 00:07:37.969 [INFO][3977] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f21dceb1fa4c1ec8421024fdbbe98610c685579122f048e511bf3e36dd3b1c48" Namespace="kube-system" Pod="coredns-6f6b679f8f-v8qdh" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--v8qdh-eth0"
May 15 00:07:38.046884 systemd-networkd[1400]: cali62e04c3b0c0: Link UP
May 15 00:07:38.047128 systemd-networkd[1400]: cali62e04c3b0c0: Gained carrier
May 15 00:07:38.060255 containerd[1475]: 2025-05-15 00:07:37.629 [INFO][3990] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
May 15 00:07:38.060255 containerd[1475]: 2025-05-15 00:07:37.714 [INFO][3990] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5547cb7878--2k2hx-eth0 calico-apiserver-5547cb7878- calico-apiserver 8a32791c-fc18-490f-91e3-aff3170763c0 703 0 2025-05-15 00:07:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5547cb7878 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5547cb7878-2k2hx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali62e04c3b0c0 [] []}} ContainerID="bce6d8cc285e147c2ea5d9c3b247d7b996aa57a3101ba13a88f03ccb01b19e81" Namespace="calico-apiserver" Pod="calico-apiserver-5547cb7878-2k2hx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5547cb7878--2k2hx-"
May 15 00:07:38.060255 containerd[1475]: 2025-05-15 00:07:37.715 [INFO][3990] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bce6d8cc285e147c2ea5d9c3b247d7b996aa57a3101ba13a88f03ccb01b19e81" Namespace="calico-apiserver" Pod="calico-apiserver-5547cb7878-2k2hx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5547cb7878--2k2hx-eth0"
May 15 00:07:38.060255 containerd[1475]: 2025-05-15 00:07:37.890 [INFO][4039] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bce6d8cc285e147c2ea5d9c3b247d7b996aa57a3101ba13a88f03ccb01b19e81" HandleID="k8s-pod-network.bce6d8cc285e147c2ea5d9c3b247d7b996aa57a3101ba13a88f03ccb01b19e81" Workload="localhost-k8s-calico--apiserver--5547cb7878--2k2hx-eth0"
May 15 00:07:38.060255 containerd[1475]: 2025-05-15 00:07:37.910 [INFO][4039] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bce6d8cc285e147c2ea5d9c3b247d7b996aa57a3101ba13a88f03ccb01b19e81" HandleID="k8s-pod-network.bce6d8cc285e147c2ea5d9c3b247d7b996aa57a3101ba13a88f03ccb01b19e81" Workload="localhost-k8s-calico--apiserver--5547cb7878--2k2hx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004ced0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5547cb7878-2k2hx", "timestamp":"2025-05-15 00:07:37.890548254 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
May 15 00:07:38.060255 containerd[1475]: 2025-05-15 00:07:37.910 [INFO][4039] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 15 00:07:38.060255 containerd[1475]: 2025-05-15 00:07:37.942 [INFO][4039] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 15 00:07:38.060255 containerd[1475]: 2025-05-15 00:07:37.942 [INFO][4039] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 00:07:38.060255 containerd[1475]: 2025-05-15 00:07:38.012 [INFO][4039] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bce6d8cc285e147c2ea5d9c3b247d7b996aa57a3101ba13a88f03ccb01b19e81" host="localhost" May 15 00:07:38.060255 containerd[1475]: 2025-05-15 00:07:38.018 [INFO][4039] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 00:07:38.060255 containerd[1475]: 2025-05-15 00:07:38.024 [INFO][4039] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 00:07:38.060255 containerd[1475]: 2025-05-15 00:07:38.027 [INFO][4039] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 00:07:38.060255 containerd[1475]: 2025-05-15 00:07:38.029 [INFO][4039] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 00:07:38.060255 containerd[1475]: 2025-05-15 00:07:38.029 [INFO][4039] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bce6d8cc285e147c2ea5d9c3b247d7b996aa57a3101ba13a88f03ccb01b19e81" host="localhost" May 15 00:07:38.060255 containerd[1475]: 2025-05-15 00:07:38.031 [INFO][4039] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bce6d8cc285e147c2ea5d9c3b247d7b996aa57a3101ba13a88f03ccb01b19e81 May 15 00:07:38.060255 containerd[1475]: 2025-05-15 00:07:38.035 [INFO][4039] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bce6d8cc285e147c2ea5d9c3b247d7b996aa57a3101ba13a88f03ccb01b19e81" host="localhost" May 15 00:07:38.060255 containerd[1475]: 2025-05-15 00:07:38.040 [INFO][4039] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.bce6d8cc285e147c2ea5d9c3b247d7b996aa57a3101ba13a88f03ccb01b19e81" host="localhost" May 15 00:07:38.060255 containerd[1475]: 2025-05-15 00:07:38.040 [INFO][4039] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.bce6d8cc285e147c2ea5d9c3b247d7b996aa57a3101ba13a88f03ccb01b19e81" host="localhost" May 15 00:07:38.060255 containerd[1475]: 2025-05-15 00:07:38.040 [INFO][4039] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:07:38.060255 containerd[1475]: 2025-05-15 00:07:38.040 [INFO][4039] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="bce6d8cc285e147c2ea5d9c3b247d7b996aa57a3101ba13a88f03ccb01b19e81" HandleID="k8s-pod-network.bce6d8cc285e147c2ea5d9c3b247d7b996aa57a3101ba13a88f03ccb01b19e81" Workload="localhost-k8s-calico--apiserver--5547cb7878--2k2hx-eth0" May 15 00:07:38.060787 containerd[1475]: 2025-05-15 00:07:38.043 [INFO][3990] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bce6d8cc285e147c2ea5d9c3b247d7b996aa57a3101ba13a88f03ccb01b19e81" Namespace="calico-apiserver" Pod="calico-apiserver-5547cb7878-2k2hx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5547cb7878--2k2hx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5547cb7878--2k2hx-eth0", GenerateName:"calico-apiserver-5547cb7878-", Namespace:"calico-apiserver", SelfLink:"", UID:"8a32791c-fc18-490f-91e3-aff3170763c0", ResourceVersion:"703", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 7, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5547cb7878", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5547cb7878-2k2hx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali62e04c3b0c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:07:38.060787 containerd[1475]: 2025-05-15 00:07:38.043 [INFO][3990] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="bce6d8cc285e147c2ea5d9c3b247d7b996aa57a3101ba13a88f03ccb01b19e81" Namespace="calico-apiserver" Pod="calico-apiserver-5547cb7878-2k2hx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5547cb7878--2k2hx-eth0" May 15 00:07:38.060787 containerd[1475]: 2025-05-15 00:07:38.043 [INFO][3990] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali62e04c3b0c0 ContainerID="bce6d8cc285e147c2ea5d9c3b247d7b996aa57a3101ba13a88f03ccb01b19e81" Namespace="calico-apiserver" Pod="calico-apiserver-5547cb7878-2k2hx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5547cb7878--2k2hx-eth0" May 15 00:07:38.060787 containerd[1475]: 2025-05-15 00:07:38.045 [INFO][3990] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bce6d8cc285e147c2ea5d9c3b247d7b996aa57a3101ba13a88f03ccb01b19e81" Namespace="calico-apiserver" Pod="calico-apiserver-5547cb7878-2k2hx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5547cb7878--2k2hx-eth0" May 15 00:07:38.060787 containerd[1475]: 2025-05-15 00:07:38.046 [INFO][3990] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="bce6d8cc285e147c2ea5d9c3b247d7b996aa57a3101ba13a88f03ccb01b19e81" Namespace="calico-apiserver" Pod="calico-apiserver-5547cb7878-2k2hx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5547cb7878--2k2hx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5547cb7878--2k2hx-eth0", GenerateName:"calico-apiserver-5547cb7878-", Namespace:"calico-apiserver", SelfLink:"", UID:"8a32791c-fc18-490f-91e3-aff3170763c0", ResourceVersion:"703", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 7, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5547cb7878", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bce6d8cc285e147c2ea5d9c3b247d7b996aa57a3101ba13a88f03ccb01b19e81", Pod:"calico-apiserver-5547cb7878-2k2hx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali62e04c3b0c0", MAC:"de:0c:fb:f0:50:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:07:38.060787 containerd[1475]: 2025-05-15 00:07:38.058 [INFO][3990] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bce6d8cc285e147c2ea5d9c3b247d7b996aa57a3101ba13a88f03ccb01b19e81" 
Namespace="calico-apiserver" Pod="calico-apiserver-5547cb7878-2k2hx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5547cb7878--2k2hx-eth0" May 15 00:07:38.066088 containerd[1475]: time="2025-05-15T00:07:38.065801975Z" level=info msg="connecting to shim f21dceb1fa4c1ec8421024fdbbe98610c685579122f048e511bf3e36dd3b1c48" address="unix:///run/containerd/s/5d61b1add0927b49719538db3bdc16a55161f53f7a0596be0253a058e598a14a" namespace=k8s.io protocol=ttrpc version=3 May 15 00:07:38.081462 containerd[1475]: time="2025-05-15T00:07:38.081416243Z" level=info msg="connecting to shim bce6d8cc285e147c2ea5d9c3b247d7b996aa57a3101ba13a88f03ccb01b19e81" address="unix:///run/containerd/s/6d2928cf558fa9bdbbc41b2581196bf8436f896e1ba3a6207c65880c4de8e1e6" namespace=k8s.io protocol=ttrpc version=3 May 15 00:07:38.089166 systemd[1]: Started cri-containerd-f21dceb1fa4c1ec8421024fdbbe98610c685579122f048e511bf3e36dd3b1c48.scope - libcontainer container f21dceb1fa4c1ec8421024fdbbe98610c685579122f048e511bf3e36dd3b1c48. May 15 00:07:38.105189 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 00:07:38.116163 systemd[1]: Started cri-containerd-bce6d8cc285e147c2ea5d9c3b247d7b996aa57a3101ba13a88f03ccb01b19e81.scope - libcontainer container bce6d8cc285e147c2ea5d9c3b247d7b996aa57a3101ba13a88f03ccb01b19e81. 
May 15 00:07:38.132775 containerd[1475]: time="2025-05-15T00:07:38.132733022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-v8qdh,Uid:53828117-6bad-4dc9-b17e-eb808c9745f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"f21dceb1fa4c1ec8421024fdbbe98610c685579122f048e511bf3e36dd3b1c48\"" May 15 00:07:38.133083 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 00:07:38.134361 kubelet[2595]: E0515 00:07:38.134326 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:07:38.140970 containerd[1475]: time="2025-05-15T00:07:38.140807205Z" level=info msg="CreateContainer within sandbox \"f21dceb1fa4c1ec8421024fdbbe98610c685579122f048e511bf3e36dd3b1c48\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 00:07:38.155566 containerd[1475]: time="2025-05-15T00:07:38.155507751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5547cb7878-2k2hx,Uid:8a32791c-fc18-490f-91e3-aff3170763c0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"bce6d8cc285e147c2ea5d9c3b247d7b996aa57a3101ba13a88f03ccb01b19e81\"" May 15 00:07:38.156459 containerd[1475]: time="2025-05-15T00:07:38.156425753Z" level=info msg="Container c86c91bee4d6770d51d5185f570f4fa0f0a6a7c0a8337b2570428ac4bb7f92b5: CDI devices from CRI Config.CDIDevices: []" May 15 00:07:38.156840 containerd[1475]: time="2025-05-15T00:07:38.156810337Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 15 00:07:38.166996 containerd[1475]: time="2025-05-15T00:07:38.166926794Z" level=info msg="CreateContainer within sandbox \"f21dceb1fa4c1ec8421024fdbbe98610c685579122f048e511bf3e36dd3b1c48\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"c86c91bee4d6770d51d5185f570f4fa0f0a6a7c0a8337b2570428ac4bb7f92b5\"" May 15 00:07:38.167384 containerd[1475]: time="2025-05-15T00:07:38.167362896Z" level=info msg="StartContainer for \"c86c91bee4d6770d51d5185f570f4fa0f0a6a7c0a8337b2570428ac4bb7f92b5\"" May 15 00:07:38.168148 containerd[1475]: time="2025-05-15T00:07:38.168117945Z" level=info msg="connecting to shim c86c91bee4d6770d51d5185f570f4fa0f0a6a7c0a8337b2570428ac4bb7f92b5" address="unix:///run/containerd/s/5d61b1add0927b49719538db3bdc16a55161f53f7a0596be0253a058e598a14a" protocol=ttrpc version=3 May 15 00:07:38.189168 systemd[1]: Started cri-containerd-c86c91bee4d6770d51d5185f570f4fa0f0a6a7c0a8337b2570428ac4bb7f92b5.scope - libcontainer container c86c91bee4d6770d51d5185f570f4fa0f0a6a7c0a8337b2570428ac4bb7f92b5. May 15 00:07:38.215469 containerd[1475]: time="2025-05-15T00:07:38.215356533Z" level=info msg="StartContainer for \"c86c91bee4d6770d51d5185f570f4fa0f0a6a7c0a8337b2570428ac4bb7f92b5\" returns successfully" May 15 00:07:38.549552 containerd[1475]: time="2025-05-15T00:07:38.549515867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5547cb7878-sn2c9,Uid:e6220ea6-d626-40a5-9d4e-699231369f1b,Namespace:calico-apiserver,Attempt:0,}" May 15 00:07:38.690098 systemd-networkd[1400]: cali1f8da256c63: Link UP May 15 00:07:38.690311 systemd-networkd[1400]: cali1f8da256c63: Gained carrier May 15 00:07:38.706932 containerd[1475]: 2025-05-15 00:07:38.590 [INFO][4209] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 15 00:07:38.706932 containerd[1475]: 2025-05-15 00:07:38.605 [INFO][4209] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5547cb7878--sn2c9-eth0 calico-apiserver-5547cb7878- calico-apiserver e6220ea6-d626-40a5-9d4e-699231369f1b 704 0 2025-05-15 00:07:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver 
pod-template-hash:5547cb7878 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5547cb7878-sn2c9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1f8da256c63 [] []}} ContainerID="f1744ffe7c7257db5645f6e182c66bf5c5679bc35f2bd7ec92049f23e6cfc0c5" Namespace="calico-apiserver" Pod="calico-apiserver-5547cb7878-sn2c9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5547cb7878--sn2c9-" May 15 00:07:38.706932 containerd[1475]: 2025-05-15 00:07:38.605 [INFO][4209] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f1744ffe7c7257db5645f6e182c66bf5c5679bc35f2bd7ec92049f23e6cfc0c5" Namespace="calico-apiserver" Pod="calico-apiserver-5547cb7878-sn2c9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5547cb7878--sn2c9-eth0" May 15 00:07:38.706932 containerd[1475]: 2025-05-15 00:07:38.629 [INFO][4222] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f1744ffe7c7257db5645f6e182c66bf5c5679bc35f2bd7ec92049f23e6cfc0c5" HandleID="k8s-pod-network.f1744ffe7c7257db5645f6e182c66bf5c5679bc35f2bd7ec92049f23e6cfc0c5" Workload="localhost-k8s-calico--apiserver--5547cb7878--sn2c9-eth0" May 15 00:07:38.706932 containerd[1475]: 2025-05-15 00:07:38.640 [INFO][4222] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f1744ffe7c7257db5645f6e182c66bf5c5679bc35f2bd7ec92049f23e6cfc0c5" HandleID="k8s-pod-network.f1744ffe7c7257db5645f6e182c66bf5c5679bc35f2bd7ec92049f23e6cfc0c5" Workload="localhost-k8s-calico--apiserver--5547cb7878--sn2c9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d390), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5547cb7878-sn2c9", "timestamp":"2025-05-15 00:07:38.629681721 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 00:07:38.706932 containerd[1475]: 2025-05-15 00:07:38.640 [INFO][4222] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:07:38.706932 containerd[1475]: 2025-05-15 00:07:38.640 [INFO][4222] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:07:38.706932 containerd[1475]: 2025-05-15 00:07:38.640 [INFO][4222] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 00:07:38.706932 containerd[1475]: 2025-05-15 00:07:38.645 [INFO][4222] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f1744ffe7c7257db5645f6e182c66bf5c5679bc35f2bd7ec92049f23e6cfc0c5" host="localhost" May 15 00:07:38.706932 containerd[1475]: 2025-05-15 00:07:38.650 [INFO][4222] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 00:07:38.706932 containerd[1475]: 2025-05-15 00:07:38.655 [INFO][4222] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 00:07:38.706932 containerd[1475]: 2025-05-15 00:07:38.659 [INFO][4222] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 00:07:38.706932 containerd[1475]: 2025-05-15 00:07:38.661 [INFO][4222] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 00:07:38.706932 containerd[1475]: 2025-05-15 00:07:38.661 [INFO][4222] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f1744ffe7c7257db5645f6e182c66bf5c5679bc35f2bd7ec92049f23e6cfc0c5" host="localhost" May 15 00:07:38.706932 containerd[1475]: 2025-05-15 00:07:38.666 [INFO][4222] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f1744ffe7c7257db5645f6e182c66bf5c5679bc35f2bd7ec92049f23e6cfc0c5 May 15 00:07:38.706932 containerd[1475]: 2025-05-15 
00:07:38.672 [INFO][4222] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f1744ffe7c7257db5645f6e182c66bf5c5679bc35f2bd7ec92049f23e6cfc0c5" host="localhost" May 15 00:07:38.706932 containerd[1475]: 2025-05-15 00:07:38.679 [INFO][4222] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.f1744ffe7c7257db5645f6e182c66bf5c5679bc35f2bd7ec92049f23e6cfc0c5" host="localhost" May 15 00:07:38.706932 containerd[1475]: 2025-05-15 00:07:38.679 [INFO][4222] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.f1744ffe7c7257db5645f6e182c66bf5c5679bc35f2bd7ec92049f23e6cfc0c5" host="localhost" May 15 00:07:38.706932 containerd[1475]: 2025-05-15 00:07:38.680 [INFO][4222] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:07:38.706932 containerd[1475]: 2025-05-15 00:07:38.680 [INFO][4222] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="f1744ffe7c7257db5645f6e182c66bf5c5679bc35f2bd7ec92049f23e6cfc0c5" HandleID="k8s-pod-network.f1744ffe7c7257db5645f6e182c66bf5c5679bc35f2bd7ec92049f23e6cfc0c5" Workload="localhost-k8s-calico--apiserver--5547cb7878--sn2c9-eth0" May 15 00:07:38.707785 containerd[1475]: 2025-05-15 00:07:38.683 [INFO][4209] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f1744ffe7c7257db5645f6e182c66bf5c5679bc35f2bd7ec92049f23e6cfc0c5" Namespace="calico-apiserver" Pod="calico-apiserver-5547cb7878-sn2c9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5547cb7878--sn2c9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5547cb7878--sn2c9-eth0", GenerateName:"calico-apiserver-5547cb7878-", Namespace:"calico-apiserver", SelfLink:"", UID:"e6220ea6-d626-40a5-9d4e-699231369f1b", ResourceVersion:"704", Generation:0, 
CreationTimestamp:time.Date(2025, time.May, 15, 0, 7, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5547cb7878", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5547cb7878-sn2c9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1f8da256c63", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:07:38.707785 containerd[1475]: 2025-05-15 00:07:38.684 [INFO][4209] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="f1744ffe7c7257db5645f6e182c66bf5c5679bc35f2bd7ec92049f23e6cfc0c5" Namespace="calico-apiserver" Pod="calico-apiserver-5547cb7878-sn2c9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5547cb7878--sn2c9-eth0" May 15 00:07:38.707785 containerd[1475]: 2025-05-15 00:07:38.684 [INFO][4209] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1f8da256c63 ContainerID="f1744ffe7c7257db5645f6e182c66bf5c5679bc35f2bd7ec92049f23e6cfc0c5" Namespace="calico-apiserver" Pod="calico-apiserver-5547cb7878-sn2c9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5547cb7878--sn2c9-eth0" May 15 00:07:38.707785 containerd[1475]: 2025-05-15 00:07:38.688 [INFO][4209] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="f1744ffe7c7257db5645f6e182c66bf5c5679bc35f2bd7ec92049f23e6cfc0c5" Namespace="calico-apiserver" Pod="calico-apiserver-5547cb7878-sn2c9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5547cb7878--sn2c9-eth0" May 15 00:07:38.707785 containerd[1475]: 2025-05-15 00:07:38.692 [INFO][4209] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f1744ffe7c7257db5645f6e182c66bf5c5679bc35f2bd7ec92049f23e6cfc0c5" Namespace="calico-apiserver" Pod="calico-apiserver-5547cb7878-sn2c9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5547cb7878--sn2c9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5547cb7878--sn2c9-eth0", GenerateName:"calico-apiserver-5547cb7878-", Namespace:"calico-apiserver", SelfLink:"", UID:"e6220ea6-d626-40a5-9d4e-699231369f1b", ResourceVersion:"704", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 7, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5547cb7878", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f1744ffe7c7257db5645f6e182c66bf5c5679bc35f2bd7ec92049f23e6cfc0c5", Pod:"calico-apiserver-5547cb7878-sn2c9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", 
"ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1f8da256c63", MAC:"a2:c7:f5:49:04:7c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:07:38.707785 containerd[1475]: 2025-05-15 00:07:38.703 [INFO][4209] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f1744ffe7c7257db5645f6e182c66bf5c5679bc35f2bd7ec92049f23e6cfc0c5" Namespace="calico-apiserver" Pod="calico-apiserver-5547cb7878-sn2c9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5547cb7878--sn2c9-eth0" May 15 00:07:38.714960 kubelet[2595]: E0515 00:07:38.714776 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:07:38.727001 kubelet[2595]: I0515 00:07:38.726920 2595 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-v8qdh" podStartSLOduration=34.726902383 podStartE2EDuration="34.726902383s" podCreationTimestamp="2025-05-15 00:07:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:07:38.726639234 +0000 UTC m=+41.252314253" watchObservedRunningTime="2025-05-15 00:07:38.726902383 +0000 UTC m=+41.252577442" May 15 00:07:38.743230 containerd[1475]: time="2025-05-15T00:07:38.743184024Z" level=info msg="connecting to shim f1744ffe7c7257db5645f6e182c66bf5c5679bc35f2bd7ec92049f23e6cfc0c5" address="unix:///run/containerd/s/3d241285cf65446566fad669ca7370a14f0954f518ab42906a6f50d054f428b1" namespace=k8s.io protocol=ttrpc version=3 May 15 00:07:38.779204 systemd[1]: Started cri-containerd-f1744ffe7c7257db5645f6e182c66bf5c5679bc35f2bd7ec92049f23e6cfc0c5.scope - libcontainer container f1744ffe7c7257db5645f6e182c66bf5c5679bc35f2bd7ec92049f23e6cfc0c5. 
May 15 00:07:38.796056 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 00:07:38.828338 containerd[1475]: time="2025-05-15T00:07:38.828142998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5547cb7878-sn2c9,Uid:e6220ea6-d626-40a5-9d4e-699231369f1b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f1744ffe7c7257db5645f6e182c66bf5c5679bc35f2bd7ec92049f23e6cfc0c5\"" May 15 00:07:39.716671 kubelet[2595]: E0515 00:07:39.716585 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:07:39.718119 systemd-networkd[1400]: cali62e04c3b0c0: Gained IPv6LL May 15 00:07:39.901664 containerd[1475]: time="2025-05-15T00:07:39.901613303Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:07:39.902426 containerd[1475]: time="2025-05-15T00:07:39.902174240Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" May 15 00:07:39.902827 containerd[1475]: time="2025-05-15T00:07:39.902767976Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:07:39.904805 containerd[1475]: time="2025-05-15T00:07:39.904753936Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:07:39.905327 containerd[1475]: time="2025-05-15T00:07:39.905291954Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id 
\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 1.748445379s" May 15 00:07:39.905384 containerd[1475]: time="2025-05-15T00:07:39.905327872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 15 00:07:39.906388 containerd[1475]: time="2025-05-15T00:07:39.906339911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 15 00:07:39.909472 containerd[1475]: time="2025-05-15T00:07:39.909427906Z" level=info msg="CreateContainer within sandbox \"bce6d8cc285e147c2ea5d9c3b247d7b996aa57a3101ba13a88f03ccb01b19e81\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 15 00:07:39.910414 systemd-networkd[1400]: cali2157054a75b: Gained IPv6LL May 15 00:07:39.917003 containerd[1475]: time="2025-05-15T00:07:39.916496578Z" level=info msg="Container 2aac00d20a3b2317fafa4e76b48b0e617fdea50ddc7756909d2e0e30b1595b60: CDI devices from CRI Config.CDIDevices: []" May 15 00:07:39.924424 containerd[1475]: time="2025-05-15T00:07:39.924272582Z" level=info msg="CreateContainer within sandbox \"bce6d8cc285e147c2ea5d9c3b247d7b996aa57a3101ba13a88f03ccb01b19e81\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2aac00d20a3b2317fafa4e76b48b0e617fdea50ddc7756909d2e0e30b1595b60\"" May 15 00:07:39.925110 containerd[1475]: time="2025-05-15T00:07:39.925073790Z" level=info msg="StartContainer for \"2aac00d20a3b2317fafa4e76b48b0e617fdea50ddc7756909d2e0e30b1595b60\"" May 15 00:07:39.926223 containerd[1475]: time="2025-05-15T00:07:39.926178705Z" level=info msg="connecting to shim 2aac00d20a3b2317fafa4e76b48b0e617fdea50ddc7756909d2e0e30b1595b60" 
address="unix:///run/containerd/s/6d2928cf558fa9bdbbc41b2581196bf8436f896e1ba3a6207c65880c4de8e1e6" protocol=ttrpc version=3 May 15 00:07:39.956191 systemd[1]: Started cri-containerd-2aac00d20a3b2317fafa4e76b48b0e617fdea50ddc7756909d2e0e30b1595b60.scope - libcontainer container 2aac00d20a3b2317fafa4e76b48b0e617fdea50ddc7756909d2e0e30b1595b60. May 15 00:07:39.995588 containerd[1475]: time="2025-05-15T00:07:39.993654483Z" level=info msg="StartContainer for \"2aac00d20a3b2317fafa4e76b48b0e617fdea50ddc7756909d2e0e30b1595b60\" returns successfully" May 15 00:07:40.168368 systemd-networkd[1400]: cali1f8da256c63: Gained IPv6LL May 15 00:07:40.199151 containerd[1475]: time="2025-05-15T00:07:40.199089265Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:07:40.201981 containerd[1475]: time="2025-05-15T00:07:40.200061786Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 15 00:07:40.202107 containerd[1475]: time="2025-05-15T00:07:40.202075546Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 295.701797ms" May 15 00:07:40.202154 containerd[1475]: time="2025-05-15T00:07:40.202110105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 15 00:07:40.204273 containerd[1475]: time="2025-05-15T00:07:40.204238741Z" level=info msg="CreateContainer within sandbox \"f1744ffe7c7257db5645f6e182c66bf5c5679bc35f2bd7ec92049f23e6cfc0c5\" for container 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 15 00:07:40.213965 containerd[1475]: time="2025-05-15T00:07:40.213903958Z" level=info msg="Container d57e2d990ad72d40b79439c465b7a599ad5df423b2881ce83da29e91e2ab07a5: CDI devices from CRI Config.CDIDevices: []" May 15 00:07:40.220825 containerd[1475]: time="2025-05-15T00:07:40.220774486Z" level=info msg="CreateContainer within sandbox \"f1744ffe7c7257db5645f6e182c66bf5c5679bc35f2bd7ec92049f23e6cfc0c5\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d57e2d990ad72d40b79439c465b7a599ad5df423b2881ce83da29e91e2ab07a5\"" May 15 00:07:40.222741 containerd[1475]: time="2025-05-15T00:07:40.221322385Z" level=info msg="StartContainer for \"d57e2d990ad72d40b79439c465b7a599ad5df423b2881ce83da29e91e2ab07a5\"" May 15 00:07:40.222741 containerd[1475]: time="2025-05-15T00:07:40.222402862Z" level=info msg="connecting to shim d57e2d990ad72d40b79439c465b7a599ad5df423b2881ce83da29e91e2ab07a5" address="unix:///run/containerd/s/3d241285cf65446566fad669ca7370a14f0954f518ab42906a6f50d054f428b1" protocol=ttrpc version=3 May 15 00:07:40.240159 systemd[1]: Started cri-containerd-d57e2d990ad72d40b79439c465b7a599ad5df423b2881ce83da29e91e2ab07a5.scope - libcontainer container d57e2d990ad72d40b79439c465b7a599ad5df423b2881ce83da29e91e2ab07a5. 
May 15 00:07:40.281910 containerd[1475]: time="2025-05-15T00:07:40.281866029Z" level=info msg="StartContainer for \"d57e2d990ad72d40b79439c465b7a599ad5df423b2881ce83da29e91e2ab07a5\" returns successfully" May 15 00:07:40.548839 kubelet[2595]: E0515 00:07:40.548694 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:07:40.550005 containerd[1475]: time="2025-05-15T00:07:40.549568274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lvpbn,Uid:dd74038d-ce76-4de0-b632-6c340ef7536b,Namespace:kube-system,Attempt:0,}" May 15 00:07:40.723936 kubelet[2595]: E0515 00:07:40.723867 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:07:40.746789 systemd-networkd[1400]: cali0d9e7994a15: Link UP May 15 00:07:40.747001 systemd-networkd[1400]: cali0d9e7994a15: Gained carrier May 15 00:07:40.770864 kubelet[2595]: I0515 00:07:40.769364 2595 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5547cb7878-2k2hx" podStartSLOduration=24.019751684 podStartE2EDuration="25.769345536s" podCreationTimestamp="2025-05-15 00:07:15 +0000 UTC" firstStartedPulling="2025-05-15 00:07:38.156590186 +0000 UTC m=+40.682265205" lastFinishedPulling="2025-05-15 00:07:39.906183998 +0000 UTC m=+42.431859057" observedRunningTime="2025-05-15 00:07:40.767259939 +0000 UTC m=+43.292934998" watchObservedRunningTime="2025-05-15 00:07:40.769345536 +0000 UTC m=+43.295020595" May 15 00:07:40.806759 containerd[1475]: 2025-05-15 00:07:40.573 [INFO][4423] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 15 00:07:40.806759 containerd[1475]: 2025-05-15 00:07:40.591 [INFO][4423] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--lvpbn-eth0 coredns-6f6b679f8f- kube-system dd74038d-ce76-4de0-b632-6c340ef7536b 705 0 2025-05-15 00:07:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-lvpbn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0d9e7994a15 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="e2c9c5d7bc4f4e10990d01004480db96efebb0afd1c28bb0a46c5876bc134c0b" Namespace="kube-system" Pod="coredns-6f6b679f8f-lvpbn" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--lvpbn-" May 15 00:07:40.806759 containerd[1475]: 2025-05-15 00:07:40.591 [INFO][4423] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e2c9c5d7bc4f4e10990d01004480db96efebb0afd1c28bb0a46c5876bc134c0b" Namespace="kube-system" Pod="coredns-6f6b679f8f-lvpbn" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--lvpbn-eth0" May 15 00:07:40.806759 containerd[1475]: 2025-05-15 00:07:40.634 [INFO][4437] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e2c9c5d7bc4f4e10990d01004480db96efebb0afd1c28bb0a46c5876bc134c0b" HandleID="k8s-pod-network.e2c9c5d7bc4f4e10990d01004480db96efebb0afd1c28bb0a46c5876bc134c0b" Workload="localhost-k8s-coredns--6f6b679f8f--lvpbn-eth0" May 15 00:07:40.806759 containerd[1475]: 2025-05-15 00:07:40.644 [INFO][4437] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e2c9c5d7bc4f4e10990d01004480db96efebb0afd1c28bb0a46c5876bc134c0b" HandleID="k8s-pod-network.e2c9c5d7bc4f4e10990d01004480db96efebb0afd1c28bb0a46c5876bc134c0b" Workload="localhost-k8s-coredns--6f6b679f8f--lvpbn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f3680), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-lvpbn", 
"timestamp":"2025-05-15 00:07:40.634289761 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 00:07:40.806759 containerd[1475]: 2025-05-15 00:07:40.644 [INFO][4437] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:07:40.806759 containerd[1475]: 2025-05-15 00:07:40.644 [INFO][4437] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:07:40.806759 containerd[1475]: 2025-05-15 00:07:40.644 [INFO][4437] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 00:07:40.806759 containerd[1475]: 2025-05-15 00:07:40.646 [INFO][4437] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e2c9c5d7bc4f4e10990d01004480db96efebb0afd1c28bb0a46c5876bc134c0b" host="localhost" May 15 00:07:40.806759 containerd[1475]: 2025-05-15 00:07:40.654 [INFO][4437] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 00:07:40.806759 containerd[1475]: 2025-05-15 00:07:40.660 [INFO][4437] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 00:07:40.806759 containerd[1475]: 2025-05-15 00:07:40.663 [INFO][4437] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 00:07:40.806759 containerd[1475]: 2025-05-15 00:07:40.665 [INFO][4437] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 00:07:40.806759 containerd[1475]: 2025-05-15 00:07:40.665 [INFO][4437] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e2c9c5d7bc4f4e10990d01004480db96efebb0afd1c28bb0a46c5876bc134c0b" host="localhost" May 15 00:07:40.806759 containerd[1475]: 2025-05-15 00:07:40.671 [INFO][4437] ipam/ipam.go 1685: Creating new handle: 
k8s-pod-network.e2c9c5d7bc4f4e10990d01004480db96efebb0afd1c28bb0a46c5876bc134c0b May 15 00:07:40.806759 containerd[1475]: 2025-05-15 00:07:40.697 [INFO][4437] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e2c9c5d7bc4f4e10990d01004480db96efebb0afd1c28bb0a46c5876bc134c0b" host="localhost" May 15 00:07:40.806759 containerd[1475]: 2025-05-15 00:07:40.741 [INFO][4437] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.e2c9c5d7bc4f4e10990d01004480db96efebb0afd1c28bb0a46c5876bc134c0b" host="localhost" May 15 00:07:40.806759 containerd[1475]: 2025-05-15 00:07:40.741 [INFO][4437] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.e2c9c5d7bc4f4e10990d01004480db96efebb0afd1c28bb0a46c5876bc134c0b" host="localhost" May 15 00:07:40.806759 containerd[1475]: 2025-05-15 00:07:40.741 [INFO][4437] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 00:07:40.806759 containerd[1475]: 2025-05-15 00:07:40.741 [INFO][4437] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="e2c9c5d7bc4f4e10990d01004480db96efebb0afd1c28bb0a46c5876bc134c0b" HandleID="k8s-pod-network.e2c9c5d7bc4f4e10990d01004480db96efebb0afd1c28bb0a46c5876bc134c0b" Workload="localhost-k8s-coredns--6f6b679f8f--lvpbn-eth0" May 15 00:07:40.807357 containerd[1475]: 2025-05-15 00:07:40.744 [INFO][4423] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e2c9c5d7bc4f4e10990d01004480db96efebb0afd1c28bb0a46c5876bc134c0b" Namespace="kube-system" Pod="coredns-6f6b679f8f-lvpbn" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--lvpbn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--lvpbn-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"dd74038d-ce76-4de0-b632-6c340ef7536b", ResourceVersion:"705", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 7, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-lvpbn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0d9e7994a15", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:07:40.807357 containerd[1475]: 2025-05-15 00:07:40.744 [INFO][4423] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="e2c9c5d7bc4f4e10990d01004480db96efebb0afd1c28bb0a46c5876bc134c0b" Namespace="kube-system" Pod="coredns-6f6b679f8f-lvpbn" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--lvpbn-eth0" May 15 00:07:40.807357 containerd[1475]: 2025-05-15 00:07:40.744 [INFO][4423] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0d9e7994a15 ContainerID="e2c9c5d7bc4f4e10990d01004480db96efebb0afd1c28bb0a46c5876bc134c0b" Namespace="kube-system" Pod="coredns-6f6b679f8f-lvpbn" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--lvpbn-eth0" May 15 00:07:40.807357 containerd[1475]: 2025-05-15 00:07:40.747 [INFO][4423] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e2c9c5d7bc4f4e10990d01004480db96efebb0afd1c28bb0a46c5876bc134c0b" Namespace="kube-system" Pod="coredns-6f6b679f8f-lvpbn" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--lvpbn-eth0" May 15 00:07:40.807357 containerd[1475]: 2025-05-15 00:07:40.747 [INFO][4423] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e2c9c5d7bc4f4e10990d01004480db96efebb0afd1c28bb0a46c5876bc134c0b" Namespace="kube-system" Pod="coredns-6f6b679f8f-lvpbn" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--lvpbn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--lvpbn-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"dd74038d-ce76-4de0-b632-6c340ef7536b", ResourceVersion:"705", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 7, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e2c9c5d7bc4f4e10990d01004480db96efebb0afd1c28bb0a46c5876bc134c0b", Pod:"coredns-6f6b679f8f-lvpbn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0d9e7994a15", MAC:"2e:95:d9:9a:46:78", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:07:40.807357 containerd[1475]: 2025-05-15 00:07:40.802 [INFO][4423] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e2c9c5d7bc4f4e10990d01004480db96efebb0afd1c28bb0a46c5876bc134c0b" Namespace="kube-system" Pod="coredns-6f6b679f8f-lvpbn" 
WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--lvpbn-eth0" May 15 00:07:40.830392 kubelet[2595]: I0515 00:07:40.830322 2595 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5547cb7878-sn2c9" podStartSLOduration=24.458102661 podStartE2EDuration="25.830305403s" podCreationTimestamp="2025-05-15 00:07:15 +0000 UTC" firstStartedPulling="2025-05-15 00:07:38.830589896 +0000 UTC m=+41.356264955" lastFinishedPulling="2025-05-15 00:07:40.202792638 +0000 UTC m=+42.728467697" observedRunningTime="2025-05-15 00:07:40.829034814 +0000 UTC m=+43.354709873" watchObservedRunningTime="2025-05-15 00:07:40.830305403 +0000 UTC m=+43.355980462" May 15 00:07:40.865591 containerd[1475]: time="2025-05-15T00:07:40.865535969Z" level=info msg="connecting to shim e2c9c5d7bc4f4e10990d01004480db96efebb0afd1c28bb0a46c5876bc134c0b" address="unix:///run/containerd/s/8b89355e353c83f3e147151c917e2444318408f82df3f729e805357592727674" namespace=k8s.io protocol=ttrpc version=3 May 15 00:07:40.889316 systemd[1]: Started cri-containerd-e2c9c5d7bc4f4e10990d01004480db96efebb0afd1c28bb0a46c5876bc134c0b.scope - libcontainer container e2c9c5d7bc4f4e10990d01004480db96efebb0afd1c28bb0a46c5876bc134c0b. 
May 15 00:07:40.907018 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 00:07:40.953520 containerd[1475]: time="2025-05-15T00:07:40.953435770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lvpbn,Uid:dd74038d-ce76-4de0-b632-6c340ef7536b,Namespace:kube-system,Attempt:0,} returns sandbox id \"e2c9c5d7bc4f4e10990d01004480db96efebb0afd1c28bb0a46c5876bc134c0b\"" May 15 00:07:40.955574 kubelet[2595]: E0515 00:07:40.955542 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:07:40.964300 containerd[1475]: time="2025-05-15T00:07:40.964242463Z" level=info msg="CreateContainer within sandbox \"e2c9c5d7bc4f4e10990d01004480db96efebb0afd1c28bb0a46c5876bc134c0b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 00:07:40.983230 containerd[1475]: time="2025-05-15T00:07:40.983172274Z" level=info msg="Container 531700da12e73389959a83370c14a06167195c73576731b414a26ff7191432d2: CDI devices from CRI Config.CDIDevices: []" May 15 00:07:40.991735 containerd[1475]: time="2025-05-15T00:07:40.991668977Z" level=info msg="CreateContainer within sandbox \"e2c9c5d7bc4f4e10990d01004480db96efebb0afd1c28bb0a46c5876bc134c0b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"531700da12e73389959a83370c14a06167195c73576731b414a26ff7191432d2\"" May 15 00:07:40.992637 containerd[1475]: time="2025-05-15T00:07:40.992604740Z" level=info msg="StartContainer for \"531700da12e73389959a83370c14a06167195c73576731b414a26ff7191432d2\"" May 15 00:07:40.995851 containerd[1475]: time="2025-05-15T00:07:40.995812253Z" level=info msg="connecting to shim 531700da12e73389959a83370c14a06167195c73576731b414a26ff7191432d2" address="unix:///run/containerd/s/8b89355e353c83f3e147151c917e2444318408f82df3f729e805357592727674" protocol=ttrpc version=3 
May 15 00:07:41.021239 systemd[1]: Started cri-containerd-531700da12e73389959a83370c14a06167195c73576731b414a26ff7191432d2.scope - libcontainer container 531700da12e73389959a83370c14a06167195c73576731b414a26ff7191432d2. May 15 00:07:41.057633 containerd[1475]: time="2025-05-15T00:07:41.057511310Z" level=info msg="StartContainer for \"531700da12e73389959a83370c14a06167195c73576731b414a26ff7191432d2\" returns successfully" May 15 00:07:41.726850 kubelet[2595]: I0515 00:07:41.726809 2595 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 00:07:41.727812 kubelet[2595]: E0515 00:07:41.727408 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:07:41.758413 kubelet[2595]: I0515 00:07:41.758301 2595 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-lvpbn" podStartSLOduration=37.758280817 podStartE2EDuration="37.758280817s" podCreationTimestamp="2025-05-15 00:07:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:07:41.756296974 +0000 UTC m=+44.281972033" watchObservedRunningTime="2025-05-15 00:07:41.758280817 +0000 UTC m=+44.283955876" May 15 00:07:42.278223 systemd-networkd[1400]: cali0d9e7994a15: Gained IPv6LL May 15 00:07:42.549124 containerd[1475]: time="2025-05-15T00:07:42.548941487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8zzgl,Uid:b968e49b-b6e1-4eac-b633-cad76111fc0d,Namespace:calico-system,Attempt:0,}" May 15 00:07:42.549124 containerd[1475]: time="2025-05-15T00:07:42.548941807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7947848598-hdlzj,Uid:1a49a1cf-d7ec-4482-a014-bd9971947209,Namespace:calico-system,Attempt:0,}" May 15 00:07:42.670596 kubelet[2595]: I0515 00:07:42.670540 
2595 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 00:07:42.670913 kubelet[2595]: E0515 00:07:42.670884 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:07:42.699480 systemd-networkd[1400]: calic7a5056de7f: Link UP May 15 00:07:42.699685 systemd-networkd[1400]: calic7a5056de7f: Gained carrier May 15 00:07:42.717656 containerd[1475]: 2025-05-15 00:07:42.575 [INFO][4609] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 15 00:07:42.717656 containerd[1475]: 2025-05-15 00:07:42.597 [INFO][4609] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7947848598--hdlzj-eth0 calico-kube-controllers-7947848598- calico-system 1a49a1cf-d7ec-4482-a014-bd9971947209 706 0 2025-05-15 00:07:15 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7947848598 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7947848598-hdlzj eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic7a5056de7f [] []}} ContainerID="8691253ff1ab75446905edb22c881a444d57032d5f18626fe45786129bc63036" Namespace="calico-system" Pod="calico-kube-controllers-7947848598-hdlzj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7947848598--hdlzj-" May 15 00:07:42.717656 containerd[1475]: 2025-05-15 00:07:42.598 [INFO][4609] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8691253ff1ab75446905edb22c881a444d57032d5f18626fe45786129bc63036" Namespace="calico-system" Pod="calico-kube-controllers-7947848598-hdlzj" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7947848598--hdlzj-eth0" May 15 00:07:42.717656 containerd[1475]: 2025-05-15 00:07:42.634 [INFO][4627] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8691253ff1ab75446905edb22c881a444d57032d5f18626fe45786129bc63036" HandleID="k8s-pod-network.8691253ff1ab75446905edb22c881a444d57032d5f18626fe45786129bc63036" Workload="localhost-k8s-calico--kube--controllers--7947848598--hdlzj-eth0" May 15 00:07:42.717656 containerd[1475]: 2025-05-15 00:07:42.654 [INFO][4627] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8691253ff1ab75446905edb22c881a444d57032d5f18626fe45786129bc63036" HandleID="k8s-pod-network.8691253ff1ab75446905edb22c881a444d57032d5f18626fe45786129bc63036" Workload="localhost-k8s-calico--kube--controllers--7947848598--hdlzj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000503650), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7947848598-hdlzj", "timestamp":"2025-05-15 00:07:42.634679427 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 00:07:42.717656 containerd[1475]: 2025-05-15 00:07:42.654 [INFO][4627] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:07:42.717656 containerd[1475]: 2025-05-15 00:07:42.654 [INFO][4627] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 00:07:42.717656 containerd[1475]: 2025-05-15 00:07:42.654 [INFO][4627] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 00:07:42.717656 containerd[1475]: 2025-05-15 00:07:42.657 [INFO][4627] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8691253ff1ab75446905edb22c881a444d57032d5f18626fe45786129bc63036" host="localhost" May 15 00:07:42.717656 containerd[1475]: 2025-05-15 00:07:42.663 [INFO][4627] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 00:07:42.717656 containerd[1475]: 2025-05-15 00:07:42.670 [INFO][4627] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 00:07:42.717656 containerd[1475]: 2025-05-15 00:07:42.672 [INFO][4627] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 00:07:42.717656 containerd[1475]: 2025-05-15 00:07:42.675 [INFO][4627] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 00:07:42.717656 containerd[1475]: 2025-05-15 00:07:42.675 [INFO][4627] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8691253ff1ab75446905edb22c881a444d57032d5f18626fe45786129bc63036" host="localhost" May 15 00:07:42.717656 containerd[1475]: 2025-05-15 00:07:42.679 [INFO][4627] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8691253ff1ab75446905edb22c881a444d57032d5f18626fe45786129bc63036 May 15 00:07:42.717656 containerd[1475]: 2025-05-15 00:07:42.683 [INFO][4627] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8691253ff1ab75446905edb22c881a444d57032d5f18626fe45786129bc63036" host="localhost" May 15 00:07:42.717656 containerd[1475]: 2025-05-15 00:07:42.689 [INFO][4627] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.8691253ff1ab75446905edb22c881a444d57032d5f18626fe45786129bc63036" host="localhost" May 15 00:07:42.717656 containerd[1475]: 2025-05-15 00:07:42.689 [INFO][4627] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.8691253ff1ab75446905edb22c881a444d57032d5f18626fe45786129bc63036" host="localhost" May 15 00:07:42.717656 containerd[1475]: 2025-05-15 00:07:42.689 [INFO][4627] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:07:42.717656 containerd[1475]: 2025-05-15 00:07:42.689 [INFO][4627] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="8691253ff1ab75446905edb22c881a444d57032d5f18626fe45786129bc63036" HandleID="k8s-pod-network.8691253ff1ab75446905edb22c881a444d57032d5f18626fe45786129bc63036" Workload="localhost-k8s-calico--kube--controllers--7947848598--hdlzj-eth0" May 15 00:07:42.718378 containerd[1475]: 2025-05-15 00:07:42.693 [INFO][4609] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8691253ff1ab75446905edb22c881a444d57032d5f18626fe45786129bc63036" Namespace="calico-system" Pod="calico-kube-controllers-7947848598-hdlzj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7947848598--hdlzj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7947848598--hdlzj-eth0", GenerateName:"calico-kube-controllers-7947848598-", Namespace:"calico-system", SelfLink:"", UID:"1a49a1cf-d7ec-4482-a014-bd9971947209", ResourceVersion:"706", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 7, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7947848598", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7947848598-hdlzj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic7a5056de7f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:07:42.718378 containerd[1475]: 2025-05-15 00:07:42.693 [INFO][4609] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="8691253ff1ab75446905edb22c881a444d57032d5f18626fe45786129bc63036" Namespace="calico-system" Pod="calico-kube-controllers-7947848598-hdlzj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7947848598--hdlzj-eth0" May 15 00:07:42.718378 containerd[1475]: 2025-05-15 00:07:42.693 [INFO][4609] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic7a5056de7f ContainerID="8691253ff1ab75446905edb22c881a444d57032d5f18626fe45786129bc63036" Namespace="calico-system" Pod="calico-kube-controllers-7947848598-hdlzj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7947848598--hdlzj-eth0" May 15 00:07:42.718378 containerd[1475]: 2025-05-15 00:07:42.695 [INFO][4609] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8691253ff1ab75446905edb22c881a444d57032d5f18626fe45786129bc63036" Namespace="calico-system" Pod="calico-kube-controllers-7947848598-hdlzj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7947848598--hdlzj-eth0" May 15 00:07:42.718378 containerd[1475]: 2025-05-15 00:07:42.705 [INFO][4609] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8691253ff1ab75446905edb22c881a444d57032d5f18626fe45786129bc63036" Namespace="calico-system" Pod="calico-kube-controllers-7947848598-hdlzj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7947848598--hdlzj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7947848598--hdlzj-eth0", GenerateName:"calico-kube-controllers-7947848598-", Namespace:"calico-system", SelfLink:"", UID:"1a49a1cf-d7ec-4482-a014-bd9971947209", ResourceVersion:"706", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 7, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7947848598", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8691253ff1ab75446905edb22c881a444d57032d5f18626fe45786129bc63036", Pod:"calico-kube-controllers-7947848598-hdlzj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic7a5056de7f", MAC:"42:65:72:e1:66:44", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:07:42.718378 containerd[1475]: 2025-05-15 00:07:42.715 [INFO][4609] cni-plugin/k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="8691253ff1ab75446905edb22c881a444d57032d5f18626fe45786129bc63036" Namespace="calico-system" Pod="calico-kube-controllers-7947848598-hdlzj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7947848598--hdlzj-eth0" May 15 00:07:42.729747 kubelet[2595]: E0515 00:07:42.729376 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:07:42.730111 kubelet[2595]: E0515 00:07:42.730040 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:07:42.751258 containerd[1475]: time="2025-05-15T00:07:42.751193212Z" level=info msg="connecting to shim 8691253ff1ab75446905edb22c881a444d57032d5f18626fe45786129bc63036" address="unix:///run/containerd/s/9caeb1dc4e071e8c12daa13b8e0bc8984b345f75dabcd954f967fd6b7a93dc4b" namespace=k8s.io protocol=ttrpc version=3 May 15 00:07:42.780148 systemd[1]: Started cri-containerd-8691253ff1ab75446905edb22c881a444d57032d5f18626fe45786129bc63036.scope - libcontainer container 8691253ff1ab75446905edb22c881a444d57032d5f18626fe45786129bc63036. 
May 15 00:07:42.795542 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 00:07:42.798805 systemd-networkd[1400]: calic9e2cc28fff: Link UP May 15 00:07:42.799594 systemd-networkd[1400]: calic9e2cc28fff: Gained carrier May 15 00:07:42.817373 containerd[1475]: 2025-05-15 00:07:42.577 [INFO][4597] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 15 00:07:42.817373 containerd[1475]: 2025-05-15 00:07:42.598 [INFO][4597] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--8zzgl-eth0 csi-node-driver- calico-system b968e49b-b6e1-4eac-b633-cad76111fc0d 605 0 2025-05-15 00:07:15 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5bcd8f69 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-8zzgl eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic9e2cc28fff [] []}} ContainerID="f23eb5f5678b05551485d31ae7fd7314f1a9545257f404746dfb6bf0faf80373" Namespace="calico-system" Pod="csi-node-driver-8zzgl" WorkloadEndpoint="localhost-k8s-csi--node--driver--8zzgl-" May 15 00:07:42.817373 containerd[1475]: 2025-05-15 00:07:42.598 [INFO][4597] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f23eb5f5678b05551485d31ae7fd7314f1a9545257f404746dfb6bf0faf80373" Namespace="calico-system" Pod="csi-node-driver-8zzgl" WorkloadEndpoint="localhost-k8s-csi--node--driver--8zzgl-eth0" May 15 00:07:42.817373 containerd[1475]: 2025-05-15 00:07:42.656 [INFO][4633] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f23eb5f5678b05551485d31ae7fd7314f1a9545257f404746dfb6bf0faf80373" 
HandleID="k8s-pod-network.f23eb5f5678b05551485d31ae7fd7314f1a9545257f404746dfb6bf0faf80373" Workload="localhost-k8s-csi--node--driver--8zzgl-eth0" May 15 00:07:42.817373 containerd[1475]: 2025-05-15 00:07:42.672 [INFO][4633] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f23eb5f5678b05551485d31ae7fd7314f1a9545257f404746dfb6bf0faf80373" HandleID="k8s-pod-network.f23eb5f5678b05551485d31ae7fd7314f1a9545257f404746dfb6bf0faf80373" Workload="localhost-k8s-csi--node--driver--8zzgl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000360980), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-8zzgl", "timestamp":"2025-05-15 00:07:42.656029945 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 00:07:42.817373 containerd[1475]: 2025-05-15 00:07:42.672 [INFO][4633] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:07:42.817373 containerd[1475]: 2025-05-15 00:07:42.692 [INFO][4633] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 00:07:42.817373 containerd[1475]: 2025-05-15 00:07:42.693 [INFO][4633] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 00:07:42.817373 containerd[1475]: 2025-05-15 00:07:42.757 [INFO][4633] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f23eb5f5678b05551485d31ae7fd7314f1a9545257f404746dfb6bf0faf80373" host="localhost" May 15 00:07:42.817373 containerd[1475]: 2025-05-15 00:07:42.764 [INFO][4633] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 00:07:42.817373 containerd[1475]: 2025-05-15 00:07:42.770 [INFO][4633] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 00:07:42.817373 containerd[1475]: 2025-05-15 00:07:42.772 [INFO][4633] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 00:07:42.817373 containerd[1475]: 2025-05-15 00:07:42.775 [INFO][4633] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 00:07:42.817373 containerd[1475]: 2025-05-15 00:07:42.775 [INFO][4633] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f23eb5f5678b05551485d31ae7fd7314f1a9545257f404746dfb6bf0faf80373" host="localhost" May 15 00:07:42.817373 containerd[1475]: 2025-05-15 00:07:42.777 [INFO][4633] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f23eb5f5678b05551485d31ae7fd7314f1a9545257f404746dfb6bf0faf80373 May 15 00:07:42.817373 containerd[1475]: 2025-05-15 00:07:42.781 [INFO][4633] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f23eb5f5678b05551485d31ae7fd7314f1a9545257f404746dfb6bf0faf80373" host="localhost" May 15 00:07:42.817373 containerd[1475]: 2025-05-15 00:07:42.791 [INFO][4633] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.f23eb5f5678b05551485d31ae7fd7314f1a9545257f404746dfb6bf0faf80373" host="localhost" May 15 00:07:42.817373 containerd[1475]: 2025-05-15 00:07:42.791 [INFO][4633] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.f23eb5f5678b05551485d31ae7fd7314f1a9545257f404746dfb6bf0faf80373" host="localhost" May 15 00:07:42.817373 containerd[1475]: 2025-05-15 00:07:42.792 [INFO][4633] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:07:42.817373 containerd[1475]: 2025-05-15 00:07:42.792 [INFO][4633] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="f23eb5f5678b05551485d31ae7fd7314f1a9545257f404746dfb6bf0faf80373" HandleID="k8s-pod-network.f23eb5f5678b05551485d31ae7fd7314f1a9545257f404746dfb6bf0faf80373" Workload="localhost-k8s-csi--node--driver--8zzgl-eth0" May 15 00:07:42.817906 containerd[1475]: 2025-05-15 00:07:42.794 [INFO][4597] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f23eb5f5678b05551485d31ae7fd7314f1a9545257f404746dfb6bf0faf80373" Namespace="calico-system" Pod="csi-node-driver-8zzgl" WorkloadEndpoint="localhost-k8s-csi--node--driver--8zzgl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8zzgl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b968e49b-b6e1-4eac-b633-cad76111fc0d", ResourceVersion:"605", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 7, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-8zzgl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic9e2cc28fff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:07:42.817906 containerd[1475]: 2025-05-15 00:07:42.795 [INFO][4597] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="f23eb5f5678b05551485d31ae7fd7314f1a9545257f404746dfb6bf0faf80373" Namespace="calico-system" Pod="csi-node-driver-8zzgl" WorkloadEndpoint="localhost-k8s-csi--node--driver--8zzgl-eth0" May 15 00:07:42.817906 containerd[1475]: 2025-05-15 00:07:42.795 [INFO][4597] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic9e2cc28fff ContainerID="f23eb5f5678b05551485d31ae7fd7314f1a9545257f404746dfb6bf0faf80373" Namespace="calico-system" Pod="csi-node-driver-8zzgl" WorkloadEndpoint="localhost-k8s-csi--node--driver--8zzgl-eth0" May 15 00:07:42.817906 containerd[1475]: 2025-05-15 00:07:42.803 [INFO][4597] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f23eb5f5678b05551485d31ae7fd7314f1a9545257f404746dfb6bf0faf80373" Namespace="calico-system" Pod="csi-node-driver-8zzgl" WorkloadEndpoint="localhost-k8s-csi--node--driver--8zzgl-eth0" May 15 00:07:42.817906 containerd[1475]: 2025-05-15 00:07:42.803 [INFO][4597] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f23eb5f5678b05551485d31ae7fd7314f1a9545257f404746dfb6bf0faf80373" Namespace="calico-system" 
Pod="csi-node-driver-8zzgl" WorkloadEndpoint="localhost-k8s-csi--node--driver--8zzgl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8zzgl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b968e49b-b6e1-4eac-b633-cad76111fc0d", ResourceVersion:"605", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 7, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f23eb5f5678b05551485d31ae7fd7314f1a9545257f404746dfb6bf0faf80373", Pod:"csi-node-driver-8zzgl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic9e2cc28fff", MAC:"ae:38:2a:f1:0a:9e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:07:42.817906 containerd[1475]: 2025-05-15 00:07:42.815 [INFO][4597] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f23eb5f5678b05551485d31ae7fd7314f1a9545257f404746dfb6bf0faf80373" Namespace="calico-system" Pod="csi-node-driver-8zzgl" WorkloadEndpoint="localhost-k8s-csi--node--driver--8zzgl-eth0" May 15 00:07:42.840432 systemd[1]: Started 
sshd@14-10.0.0.138:22-10.0.0.1:51360.service - OpenSSH per-connection server daemon (10.0.0.1:51360). May 15 00:07:42.845593 containerd[1475]: time="2025-05-15T00:07:42.845385595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7947848598-hdlzj,Uid:1a49a1cf-d7ec-4482-a014-bd9971947209,Namespace:calico-system,Attempt:0,} returns sandbox id \"8691253ff1ab75446905edb22c881a444d57032d5f18626fe45786129bc63036\"" May 15 00:07:42.849719 containerd[1475]: time="2025-05-15T00:07:42.849629916Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 15 00:07:42.868530 containerd[1475]: time="2025-05-15T00:07:42.868477928Z" level=info msg="connecting to shim f23eb5f5678b05551485d31ae7fd7314f1a9545257f404746dfb6bf0faf80373" address="unix:///run/containerd/s/9734072a363b7497ef1f48c448411249bcc46bff1088b17da01f1096fc4c84c3" namespace=k8s.io protocol=ttrpc version=3 May 15 00:07:42.895197 systemd[1]: Started cri-containerd-f23eb5f5678b05551485d31ae7fd7314f1a9545257f404746dfb6bf0faf80373.scope - libcontainer container f23eb5f5678b05551485d31ae7fd7314f1a9545257f404746dfb6bf0faf80373. May 15 00:07:42.898002 kernel: bpftool[4762]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 15 00:07:42.917959 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 00:07:42.922951 sshd[4721]: Accepted publickey for core from 10.0.0.1 port 51360 ssh2: RSA SHA256:JGpbUaP68o+c7lzfweYm3nii0Q3Tlu54nJ930R9/dFs May 15 00:07:42.924713 sshd-session[4721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:07:42.938023 systemd-logind[1452]: New session 15 of user core. May 15 00:07:42.943175 systemd[1]: Started session-15.scope - Session 15 of User core. 
May 15 00:07:42.948639 containerd[1475]: time="2025-05-15T00:07:42.948584400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8zzgl,Uid:b968e49b-b6e1-4eac-b633-cad76111fc0d,Namespace:calico-system,Attempt:0,} returns sandbox id \"f23eb5f5678b05551485d31ae7fd7314f1a9545257f404746dfb6bf0faf80373\"" May 15 00:07:43.228955 systemd-networkd[1400]: vxlan.calico: Link UP May 15 00:07:43.228961 systemd-networkd[1400]: vxlan.calico: Gained carrier May 15 00:07:43.246315 sshd[4787]: Connection closed by 10.0.0.1 port 51360 May 15 00:07:43.246798 sshd-session[4721]: pam_unix(sshd:session): session closed for user core May 15 00:07:43.269432 systemd[1]: sshd@14-10.0.0.138:22-10.0.0.1:51360.service: Deactivated successfully. May 15 00:07:43.274716 systemd[1]: session-15.scope: Deactivated successfully. May 15 00:07:43.279777 systemd-logind[1452]: Session 15 logged out. Waiting for processes to exit. May 15 00:07:43.283386 systemd[1]: Started sshd@15-10.0.0.138:22-10.0.0.1:51376.service - OpenSSH per-connection server daemon (10.0.0.1:51376). May 15 00:07:43.286495 systemd-logind[1452]: Removed session 15. May 15 00:07:43.348849 sshd[4856]: Accepted publickey for core from 10.0.0.1 port 51376 ssh2: RSA SHA256:JGpbUaP68o+c7lzfweYm3nii0Q3Tlu54nJ930R9/dFs May 15 00:07:43.350438 sshd-session[4856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:07:43.355450 systemd-logind[1452]: New session 16 of user core. May 15 00:07:43.365199 systemd[1]: Started session-16.scope - Session 16 of User core. May 15 00:07:43.715134 sshd[4878]: Connection closed by 10.0.0.1 port 51376 May 15 00:07:43.715755 sshd-session[4856]: pam_unix(sshd:session): session closed for user core May 15 00:07:43.724849 systemd[1]: sshd@15-10.0.0.138:22-10.0.0.1:51376.service: Deactivated successfully. May 15 00:07:43.729142 systemd[1]: session-16.scope: Deactivated successfully. May 15 00:07:43.730322 systemd-logind[1452]: Session 16 logged out. 
Waiting for processes to exit. May 15 00:07:43.734954 systemd[1]: Started sshd@16-10.0.0.138:22-10.0.0.1:51388.service - OpenSSH per-connection server daemon (10.0.0.1:51388). May 15 00:07:43.737537 kubelet[2595]: E0515 00:07:43.737346 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:07:43.737402 systemd-logind[1452]: Removed session 16. May 15 00:07:43.821369 sshd[4925]: Accepted publickey for core from 10.0.0.1 port 51388 ssh2: RSA SHA256:JGpbUaP68o+c7lzfweYm3nii0Q3Tlu54nJ930R9/dFs May 15 00:07:43.823884 sshd-session[4925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:07:43.829038 systemd-logind[1452]: New session 17 of user core. May 15 00:07:43.838200 systemd[1]: Started session-17.scope - Session 17 of User core. May 15 00:07:43.878229 systemd-networkd[1400]: calic7a5056de7f: Gained IPv6LL May 15 00:07:44.456964 systemd-networkd[1400]: calic9e2cc28fff: Gained IPv6LL May 15 00:07:44.617717 containerd[1475]: time="2025-05-15T00:07:44.617650467Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:07:44.618655 containerd[1475]: time="2025-05-15T00:07:44.618603033Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116" May 15 00:07:44.619947 containerd[1475]: time="2025-05-15T00:07:44.619899987Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:07:44.622359 containerd[1475]: time="2025-05-15T00:07:44.622320861Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:07:44.624266 containerd[1475]: time="2025-05-15T00:07:44.623905484Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 1.774213411s" May 15 00:07:44.624266 containerd[1475]: time="2025-05-15T00:07:44.623943203Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" May 15 00:07:44.626068 containerd[1475]: time="2025-05-15T00:07:44.625777898Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 15 00:07:44.639911 containerd[1475]: time="2025-05-15T00:07:44.639868515Z" level=info msg="CreateContainer within sandbox \"8691253ff1ab75446905edb22c881a444d57032d5f18626fe45786129bc63036\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 15 00:07:44.650454 containerd[1475]: time="2025-05-15T00:07:44.650402860Z" level=info msg="Container 009f2b728bb6ce2a74b458f258e9d8994e243c273494bdb1a5195bcbf85db8b5: CDI devices from CRI Config.CDIDevices: []" May 15 00:07:44.658906 containerd[1475]: time="2025-05-15T00:07:44.658778681Z" level=info msg="CreateContainer within sandbox \"8691253ff1ab75446905edb22c881a444d57032d5f18626fe45786129bc63036\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"009f2b728bb6ce2a74b458f258e9d8994e243c273494bdb1a5195bcbf85db8b5\"" May 15 00:07:44.660381 containerd[1475]: time="2025-05-15T00:07:44.660350385Z" level=info msg="StartContainer for \"009f2b728bb6ce2a74b458f258e9d8994e243c273494bdb1a5195bcbf85db8b5\"" May 15 00:07:44.661699 containerd[1475]: 
time="2025-05-15T00:07:44.661633460Z" level=info msg="connecting to shim 009f2b728bb6ce2a74b458f258e9d8994e243c273494bdb1a5195bcbf85db8b5" address="unix:///run/containerd/s/9caeb1dc4e071e8c12daa13b8e0bc8984b345f75dabcd954f967fd6b7a93dc4b" protocol=ttrpc version=3 May 15 00:07:44.684154 systemd[1]: Started cri-containerd-009f2b728bb6ce2a74b458f258e9d8994e243c273494bdb1a5195bcbf85db8b5.scope - libcontainer container 009f2b728bb6ce2a74b458f258e9d8994e243c273494bdb1a5195bcbf85db8b5. May 15 00:07:44.831827 containerd[1475]: time="2025-05-15T00:07:44.831790234Z" level=info msg="StartContainer for \"009f2b728bb6ce2a74b458f258e9d8994e243c273494bdb1a5195bcbf85db8b5\" returns successfully" May 15 00:07:44.902360 systemd-networkd[1400]: vxlan.calico: Gained IPv6LL May 15 00:07:45.547176 sshd[4928]: Connection closed by 10.0.0.1 port 51388 May 15 00:07:45.546719 sshd-session[4925]: pam_unix(sshd:session): session closed for user core May 15 00:07:45.563643 systemd[1]: sshd@16-10.0.0.138:22-10.0.0.1:51388.service: Deactivated successfully. May 15 00:07:45.566516 systemd[1]: session-17.scope: Deactivated successfully. May 15 00:07:45.572452 systemd-logind[1452]: Session 17 logged out. Waiting for processes to exit. May 15 00:07:45.578840 systemd[1]: Started sshd@17-10.0.0.138:22-10.0.0.1:51390.service - OpenSSH per-connection server daemon (10.0.0.1:51390). May 15 00:07:45.588688 systemd-logind[1452]: Removed session 17. May 15 00:07:45.644959 sshd[4991]: Accepted publickey for core from 10.0.0.1 port 51390 ssh2: RSA SHA256:JGpbUaP68o+c7lzfweYm3nii0Q3Tlu54nJ930R9/dFs May 15 00:07:45.647470 sshd-session[4991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:07:45.652967 systemd-logind[1452]: New session 18 of user core. May 15 00:07:45.659123 systemd[1]: Started session-18.scope - Session 18 of User core. 
May 15 00:07:45.705007 containerd[1475]: time="2025-05-15T00:07:45.704922668Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:07:45.705527 containerd[1475]: time="2025-05-15T00:07:45.705473049Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" May 15 00:07:45.706560 containerd[1475]: time="2025-05-15T00:07:45.706276621Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:07:45.708317 containerd[1475]: time="2025-05-15T00:07:45.708265392Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:07:45.709332 containerd[1475]: time="2025-05-15T00:07:45.708743975Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 1.082557732s" May 15 00:07:45.709332 containerd[1475]: time="2025-05-15T00:07:45.708770294Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" May 15 00:07:45.710851 containerd[1475]: time="2025-05-15T00:07:45.710816583Z" level=info msg="CreateContainer within sandbox \"f23eb5f5678b05551485d31ae7fd7314f1a9545257f404746dfb6bf0faf80373\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 15 00:07:45.720658 containerd[1475]: time="2025-05-15T00:07:45.718263244Z" level=info msg="Container 
d4d0145025c55eca9dca8969d36fb508da1e791198687ae2326e6d297bd253e8: CDI devices from CRI Config.CDIDevices: []" May 15 00:07:45.742598 containerd[1475]: time="2025-05-15T00:07:45.742542121Z" level=info msg="CreateContainer within sandbox \"f23eb5f5678b05551485d31ae7fd7314f1a9545257f404746dfb6bf0faf80373\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"d4d0145025c55eca9dca8969d36fb508da1e791198687ae2326e6d297bd253e8\"" May 15 00:07:45.744543 containerd[1475]: time="2025-05-15T00:07:45.743079542Z" level=info msg="StartContainer for \"d4d0145025c55eca9dca8969d36fb508da1e791198687ae2326e6d297bd253e8\"" May 15 00:07:45.755523 containerd[1475]: time="2025-05-15T00:07:45.755388235Z" level=info msg="connecting to shim d4d0145025c55eca9dca8969d36fb508da1e791198687ae2326e6d297bd253e8" address="unix:///run/containerd/s/9734072a363b7497ef1f48c448411249bcc46bff1088b17da01f1096fc4c84c3" protocol=ttrpc version=3 May 15 00:07:45.803128 systemd[1]: Started cri-containerd-d4d0145025c55eca9dca8969d36fb508da1e791198687ae2326e6d297bd253e8.scope - libcontainer container d4d0145025c55eca9dca8969d36fb508da1e791198687ae2326e6d297bd253e8. 
May 15 00:07:45.818860 containerd[1475]: time="2025-05-15T00:07:45.818763593Z" level=info msg="TaskExit event in podsandbox handler container_id:\"009f2b728bb6ce2a74b458f258e9d8994e243c273494bdb1a5195bcbf85db8b5\" id:\"4ad15544e6c1e9f1eb56cb478e87c687e674a0a317f88ec06ce2131fcc15bbb6\" pid:5035 exited_at:{seconds:1747267665 nanos:818316969}" May 15 00:07:45.836891 kubelet[2595]: I0515 00:07:45.836722 2595 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7947848598-hdlzj" podStartSLOduration=29.060010728 podStartE2EDuration="30.836639212s" podCreationTimestamp="2025-05-15 00:07:15 +0000 UTC" firstStartedPulling="2025-05-15 00:07:42.849007299 +0000 UTC m=+45.374682318" lastFinishedPulling="2025-05-15 00:07:44.625635743 +0000 UTC m=+47.151310802" observedRunningTime="2025-05-15 00:07:45.770658384 +0000 UTC m=+48.296333443" watchObservedRunningTime="2025-05-15 00:07:45.836639212 +0000 UTC m=+48.362314271" May 15 00:07:45.876613 containerd[1475]: time="2025-05-15T00:07:45.876395311Z" level=info msg="StartContainer for \"d4d0145025c55eca9dca8969d36fb508da1e791198687ae2326e6d297bd253e8\" returns successfully" May 15 00:07:45.879727 containerd[1475]: time="2025-05-15T00:07:45.879691876Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 15 00:07:46.035792 sshd[5000]: Connection closed by 10.0.0.1 port 51390 May 15 00:07:46.036828 sshd-session[4991]: pam_unix(sshd:session): session closed for user core May 15 00:07:46.048348 systemd[1]: sshd@17-10.0.0.138:22-10.0.0.1:51390.service: Deactivated successfully. May 15 00:07:46.053380 systemd[1]: session-18.scope: Deactivated successfully. May 15 00:07:46.055259 systemd-logind[1452]: Session 18 logged out. Waiting for processes to exit. May 15 00:07:46.058933 systemd[1]: Started sshd@18-10.0.0.138:22-10.0.0.1:51406.service - OpenSSH per-connection server daemon (10.0.0.1:51406). 
May 15 00:07:46.061680 systemd-logind[1452]: Removed session 18. May 15 00:07:46.128828 sshd[5068]: Accepted publickey for core from 10.0.0.1 port 51406 ssh2: RSA SHA256:JGpbUaP68o+c7lzfweYm3nii0Q3Tlu54nJ930R9/dFs May 15 00:07:46.130463 sshd-session[5068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:07:46.135617 systemd-logind[1452]: New session 19 of user core. May 15 00:07:46.145167 systemd[1]: Started session-19.scope - Session 19 of User core. May 15 00:07:46.288513 sshd[5073]: Connection closed by 10.0.0.1 port 51406 May 15 00:07:46.288883 sshd-session[5068]: pam_unix(sshd:session): session closed for user core May 15 00:07:46.292495 systemd[1]: sshd@18-10.0.0.138:22-10.0.0.1:51406.service: Deactivated successfully. May 15 00:07:46.294574 systemd[1]: session-19.scope: Deactivated successfully. May 15 00:07:46.296788 systemd-logind[1452]: Session 19 logged out. Waiting for processes to exit. May 15 00:07:46.298605 systemd-logind[1452]: Removed session 19. 
May 15 00:07:47.049586 containerd[1475]: time="2025-05-15T00:07:47.049536238Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:07:47.050537 containerd[1475]: time="2025-05-15T00:07:47.050027702Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" May 15 00:07:47.051071 containerd[1475]: time="2025-05-15T00:07:47.051032589Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:07:47.053128 containerd[1475]: time="2025-05-15T00:07:47.053094561Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:07:47.053889 containerd[1475]: time="2025-05-15T00:07:47.053679701Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 1.173699635s" May 15 00:07:47.053889 containerd[1475]: time="2025-05-15T00:07:47.053715620Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" May 15 00:07:47.055929 containerd[1475]: time="2025-05-15T00:07:47.055852630Z" level=info msg="CreateContainer within sandbox \"f23eb5f5678b05551485d31ae7fd7314f1a9545257f404746dfb6bf0faf80373\" for container 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
May 15 00:07:47.062832 containerd[1475]: time="2025-05-15T00:07:47.062783081Z" level=info msg="Container a1ff0f40404ada51080c5ccc2df73298c1afd04b2e29873bf78ddf7b6d13a20b: CDI devices from CRI Config.CDIDevices: []"
May 15 00:07:47.070744 containerd[1475]: time="2025-05-15T00:07:47.070688580Z" level=info msg="CreateContainer within sandbox \"f23eb5f5678b05551485d31ae7fd7314f1a9545257f404746dfb6bf0faf80373\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a1ff0f40404ada51080c5ccc2df73298c1afd04b2e29873bf78ddf7b6d13a20b\""
May 15 00:07:47.071545 containerd[1475]: time="2025-05-15T00:07:47.071268361Z" level=info msg="StartContainer for \"a1ff0f40404ada51080c5ccc2df73298c1afd04b2e29873bf78ddf7b6d13a20b\""
May 15 00:07:47.072876 containerd[1475]: time="2025-05-15T00:07:47.072809950Z" level=info msg="connecting to shim a1ff0f40404ada51080c5ccc2df73298c1afd04b2e29873bf78ddf7b6d13a20b" address="unix:///run/containerd/s/9734072a363b7497ef1f48c448411249bcc46bff1088b17da01f1096fc4c84c3" protocol=ttrpc version=3
May 15 00:07:47.095180 systemd[1]: Started cri-containerd-a1ff0f40404ada51080c5ccc2df73298c1afd04b2e29873bf78ddf7b6d13a20b.scope - libcontainer container a1ff0f40404ada51080c5ccc2df73298c1afd04b2e29873bf78ddf7b6d13a20b.
May 15 00:07:47.133758 containerd[1475]: time="2025-05-15T00:07:47.133712340Z" level=info msg="StartContainer for \"a1ff0f40404ada51080c5ccc2df73298c1afd04b2e29873bf78ddf7b6d13a20b\" returns successfully"
May 15 00:07:47.641121 kubelet[2595]: I0515 00:07:47.641081 2595 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
May 15 00:07:47.641487 kubelet[2595]: I0515 00:07:47.641159 2595 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
May 15 00:07:47.669836 containerd[1475]: time="2025-05-15T00:07:47.669798243Z" level=info msg="TaskExit event in podsandbox handler container_id:\"46113305547cc74bbabf2bdf704ffb9a696d66d2f7386401c79f7a625e5dcf97\" id:\"7a5cfbf1ea578f1fc554892fa70f626ab83c537a338d7fbda996f1e12e2b3549\" pid:5137 exited_at:{seconds:1747267667 nanos:669494693}"
May 15 00:07:47.673103 kubelet[2595]: E0515 00:07:47.673081 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:07:47.774395 kubelet[2595]: I0515 00:07:47.774246 2595 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-8zzgl" podStartSLOduration=28.669345523 podStartE2EDuration="32.774227596s" podCreationTimestamp="2025-05-15 00:07:15 +0000 UTC" firstStartedPulling="2025-05-15 00:07:42.949740597 +0000 UTC m=+45.475415656" lastFinishedPulling="2025-05-15 00:07:47.05462271 +0000 UTC m=+49.580297729" observedRunningTime="2025-05-15 00:07:47.773817169 +0000 UTC m=+50.299492188" watchObservedRunningTime="2025-05-15 00:07:47.774227596 +0000 UTC m=+50.299902655"
May 15 00:07:51.302185 systemd[1]: Started sshd@19-10.0.0.138:22-10.0.0.1:51418.service - OpenSSH per-connection server daemon (10.0.0.1:51418).
May 15 00:07:51.416676 sshd[5161]: Accepted publickey for core from 10.0.0.1 port 51418 ssh2: RSA SHA256:JGpbUaP68o+c7lzfweYm3nii0Q3Tlu54nJ930R9/dFs
May 15 00:07:51.418696 sshd-session[5161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:07:51.425767 systemd-logind[1452]: New session 20 of user core.
May 15 00:07:51.431236 systemd[1]: Started session-20.scope - Session 20 of User core.
May 15 00:07:51.629118 sshd[5163]: Connection closed by 10.0.0.1 port 51418
May 15 00:07:51.631734 sshd-session[5161]: pam_unix(sshd:session): session closed for user core
May 15 00:07:51.635799 systemd[1]: sshd@19-10.0.0.138:22-10.0.0.1:51418.service: Deactivated successfully.
May 15 00:07:51.638817 systemd[1]: session-20.scope: Deactivated successfully.
May 15 00:07:51.639777 systemd-logind[1452]: Session 20 logged out. Waiting for processes to exit.
May 15 00:07:51.641096 systemd-logind[1452]: Removed session 20.
May 15 00:07:53.553262 kernel: hrtimer: interrupt took 2875479 ns
May 15 00:07:56.642695 systemd[1]: Started sshd@20-10.0.0.138:22-10.0.0.1:51736.service - OpenSSH per-connection server daemon (10.0.0.1:51736).
May 15 00:07:56.696534 sshd[5189]: Accepted publickey for core from 10.0.0.1 port 51736 ssh2: RSA SHA256:JGpbUaP68o+c7lzfweYm3nii0Q3Tlu54nJ930R9/dFs
May 15 00:07:56.697948 sshd-session[5189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:07:56.702941 systemd-logind[1452]: New session 21 of user core.
May 15 00:07:56.713211 systemd[1]: Started session-21.scope - Session 21 of User core.
May 15 00:07:56.903240 sshd[5191]: Connection closed by 10.0.0.1 port 51736
May 15 00:07:56.903675 sshd-session[5189]: pam_unix(sshd:session): session closed for user core
May 15 00:07:56.907624 systemd[1]: sshd@20-10.0.0.138:22-10.0.0.1:51736.service: Deactivated successfully.
May 15 00:07:56.909801 systemd[1]: session-21.scope: Deactivated successfully.
May 15 00:07:56.910856 systemd-logind[1452]: Session 21 logged out. Waiting for processes to exit.
May 15 00:07:56.913357 systemd-logind[1452]: Removed session 21.
May 15 00:08:01.560684 containerd[1475]: time="2025-05-15T00:08:01.560644754Z" level=info msg="TaskExit event in podsandbox handler container_id:\"009f2b728bb6ce2a74b458f258e9d8994e243c273494bdb1a5195bcbf85db8b5\" id:\"35598334ce15086e021394c88f2254d8b5beec7605996a28352d8b221974344b\" pid:5218 exited_at:{seconds:1747267681 nanos:560384800}"
May 15 00:08:01.915081 systemd[1]: Started sshd@21-10.0.0.138:22-10.0.0.1:51748.service - OpenSSH per-connection server daemon (10.0.0.1:51748).
May 15 00:08:01.978689 sshd[5229]: Accepted publickey for core from 10.0.0.1 port 51748 ssh2: RSA SHA256:JGpbUaP68o+c7lzfweYm3nii0Q3Tlu54nJ930R9/dFs
May 15 00:08:01.980947 sshd-session[5229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:08:01.986377 systemd-logind[1452]: New session 22 of user core.
May 15 00:08:01.998200 systemd[1]: Started session-22.scope - Session 22 of User core.
May 15 00:08:02.136801 sshd[5231]: Connection closed by 10.0.0.1 port 51748
May 15 00:08:02.137463 sshd-session[5229]: pam_unix(sshd:session): session closed for user core
May 15 00:08:02.142364 systemd[1]: sshd@21-10.0.0.138:22-10.0.0.1:51748.service: Deactivated successfully.
May 15 00:08:02.146791 systemd[1]: session-22.scope: Deactivated successfully.
May 15 00:08:02.148428 systemd-logind[1452]: Session 22 logged out. Waiting for processes to exit.
May 15 00:08:02.149736 systemd-logind[1452]: Removed session 22.
May 15 00:08:03.842393 containerd[1475]: time="2025-05-15T00:08:03.842338038Z" level=info msg="TaskExit event in podsandbox handler container_id:\"009f2b728bb6ce2a74b458f258e9d8994e243c273494bdb1a5195bcbf85db8b5\" id:\"c0ea176f78ca39b18363227e4d69338617167e4cd4c57aa472df0b277a1232c9\" pid:5264 exited_at:{seconds:1747267683 nanos:842113963}"
May 15 00:08:07.159151 systemd[1]: Started sshd@22-10.0.0.138:22-10.0.0.1:36026.service - OpenSSH per-connection server daemon (10.0.0.1:36026).
May 15 00:08:07.230504 sshd[5278]: Accepted publickey for core from 10.0.0.1 port 36026 ssh2: RSA SHA256:JGpbUaP68o+c7lzfweYm3nii0Q3Tlu54nJ930R9/dFs
May 15 00:08:07.232082 sshd-session[5278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:08:07.236672 systemd-logind[1452]: New session 23 of user core.
May 15 00:08:07.255598 systemd[1]: Started session-23.scope - Session 23 of User core.
May 15 00:08:07.443706 sshd[5280]: Connection closed by 10.0.0.1 port 36026
May 15 00:08:07.443990 sshd-session[5278]: pam_unix(sshd:session): session closed for user core
May 15 00:08:07.447770 systemd[1]: sshd@22-10.0.0.138:22-10.0.0.1:36026.service: Deactivated successfully.
May 15 00:08:07.450850 systemd[1]: session-23.scope: Deactivated successfully.
May 15 00:08:07.452881 systemd-logind[1452]: Session 23 logged out. Waiting for processes to exit.
May 15 00:08:07.454206 systemd-logind[1452]: Removed session 23.
May 15 00:08:07.549329 kubelet[2595]: E0515 00:08:07.549223 2595 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"