May 9 23:59:04.998258 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] May 9 23:59:04.998279 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Fri May 9 22:24:49 -00 2025 May 9 23:59:04.998288 kernel: KASLR enabled May 9 23:59:04.998294 kernel: efi: EFI v2.7 by EDK II May 9 23:59:04.998300 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98 May 9 23:59:04.998306 kernel: random: crng init done May 9 23:59:04.998313 kernel: secureboot: Secure boot disabled May 9 23:59:04.998319 kernel: ACPI: Early table checksum verification disabled May 9 23:59:04.998325 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS ) May 9 23:59:04.998333 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013) May 9 23:59:04.998339 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) May 9 23:59:04.998345 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 9 23:59:04.998351 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) May 9 23:59:04.998385 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) May 9 23:59:04.998393 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 9 23:59:04.998402 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 9 23:59:04.998408 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 9 23:59:04.998415 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) May 9 23:59:04.998421 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 9 23:59:04.998427 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 May 9 23:59:04.998434 kernel: NUMA: Failed to initialise from firmware May 9 23:59:04.998440 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] May 9 23:59:04.998447 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff] May 9 23:59:04.998453 kernel: Zone ranges: May 9 23:59:04.998459 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] May 9 23:59:04.998466 kernel: DMA32 empty May 9 23:59:04.998472 kernel: Normal empty May 9 23:59:04.998478 kernel: Movable zone start for each node May 9 23:59:04.998485 kernel: Early memory node ranges May 9 23:59:04.998491 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] May 9 23:59:04.998497 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] May 9 23:59:04.998504 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] May 9 23:59:04.998510 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] May 9 23:59:04.998516 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] May 9 23:59:04.998529 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] May 9 23:59:04.998536 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] May 9 23:59:04.998543 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] May 9 23:59:04.998550 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges May 9 23:59:04.998557 kernel: psci: probing for conduit method from ACPI. May 9 23:59:04.998563 kernel: psci: PSCIv1.1 detected in firmware. 
May 9 23:59:04.998572 kernel: psci: Using standard PSCI v0.2 function IDs May 9 23:59:04.998579 kernel: psci: Trusted OS migration not required May 9 23:59:04.998586 kernel: psci: SMC Calling Convention v1.1 May 9 23:59:04.998594 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) May 9 23:59:04.998601 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 May 9 23:59:04.998608 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 May 9 23:59:04.998615 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 May 9 23:59:04.998622 kernel: Detected PIPT I-cache on CPU0 May 9 23:59:04.998629 kernel: CPU features: detected: GIC system register CPU interface May 9 23:59:04.998635 kernel: CPU features: detected: Hardware dirty bit management May 9 23:59:04.998642 kernel: CPU features: detected: Spectre-v4 May 9 23:59:04.998649 kernel: CPU features: detected: Spectre-BHB May 9 23:59:04.998656 kernel: CPU features: kernel page table isolation forced ON by KASLR May 9 23:59:04.998664 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 9 23:59:04.998671 kernel: CPU features: detected: ARM erratum 1418040 May 9 23:59:04.998678 kernel: CPU features: detected: SSBS not fully self-synchronizing May 9 23:59:04.998685 kernel: alternatives: applying boot alternatives May 9 23:59:04.998693 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9a99b6d651f8aeb5d7bfd4370bc36449b7e5138d2f42e40e0aede009df00f5a4 May 9 23:59:04.998700 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 9 23:59:04.998707 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 9 23:59:04.998714 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 9 23:59:04.998720 kernel: Fallback order for Node 0: 0 May 9 23:59:04.998727 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 May 9 23:59:04.998734 kernel: Policy zone: DMA May 9 23:59:04.998743 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 9 23:59:04.998749 kernel: software IO TLB: area num 4. May 9 23:59:04.998756 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) May 9 23:59:04.998763 kernel: Memory: 2386256K/2572288K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39744K init, 897K bss, 186032K reserved, 0K cma-reserved) May 9 23:59:04.998770 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 9 23:59:04.998777 kernel: rcu: Preemptible hierarchical RCU implementation. May 9 23:59:04.998785 kernel: rcu: RCU event tracing is enabled. May 9 23:59:04.998792 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 9 23:59:04.998799 kernel: Trampoline variant of Tasks RCU enabled. May 9 23:59:04.998806 kernel: Tracing variant of Tasks RCU enabled. May 9 23:59:04.998813 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 9 23:59:04.998820 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 9 23:59:04.998828 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 9 23:59:04.998835 kernel: GICv3: 256 SPIs implemented May 9 23:59:04.998842 kernel: GICv3: 0 Extended SPIs implemented May 9 23:59:04.998848 kernel: Root IRQ handler: gic_handle_irq May 9 23:59:04.998855 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI May 9 23:59:04.998862 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 May 9 23:59:04.998869 kernel: ITS [mem 0x08080000-0x0809ffff] May 9 23:59:04.998875 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) May 9 23:59:04.998882 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) May 9 23:59:04.998889 kernel: GICv3: using LPI property table @0x00000000400f0000 May 9 23:59:04.998896 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 May 9 23:59:04.998904 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 9 23:59:04.998911 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 9 23:59:04.998918 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 9 23:59:04.998925 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 9 23:59:04.998932 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 9 23:59:04.998939 kernel: arm-pv: using stolen time PV May 9 23:59:04.998946 kernel: Console: colour dummy device 80x25 May 9 23:59:04.998953 kernel: ACPI: Core revision 20230628 May 9 23:59:04.998960 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 9 23:59:04.998967 kernel: pid_max: default: 32768 minimum: 301 May 9 23:59:04.998975 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 9 23:59:04.998982 kernel: landlock: Up and running. May 9 23:59:04.998989 kernel: SELinux: Initializing. May 9 23:59:04.998996 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 9 23:59:04.999003 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 9 23:59:04.999010 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3) May 9 23:59:04.999017 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 9 23:59:04.999024 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 9 23:59:04.999032 kernel: rcu: Hierarchical SRCU implementation. May 9 23:59:04.999040 kernel: rcu: Max phase no-delay instances is 400. May 9 23:59:04.999047 kernel: Platform MSI: ITS@0x8080000 domain created May 9 23:59:04.999054 kernel: PCI/MSI: ITS@0x8080000 domain created May 9 23:59:04.999061 kernel: Remapping and enabling EFI services. May 9 23:59:04.999068 kernel: smp: Bringing up secondary CPUs ... 
May 9 23:59:04.999075 kernel: Detected PIPT I-cache on CPU1 May 9 23:59:04.999082 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 May 9 23:59:04.999090 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 May 9 23:59:04.999097 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 9 23:59:04.999104 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 9 23:59:04.999112 kernel: Detected PIPT I-cache on CPU2 May 9 23:59:04.999119 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 May 9 23:59:04.999132 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 May 9 23:59:04.999140 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 9 23:59:04.999147 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] May 9 23:59:04.999154 kernel: Detected PIPT I-cache on CPU3 May 9 23:59:04.999162 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 May 9 23:59:04.999169 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 May 9 23:59:04.999177 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 9 23:59:04.999184 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] May 9 23:59:04.999193 kernel: smp: Brought up 1 node, 4 CPUs May 9 23:59:04.999200 kernel: SMP: Total of 4 processors activated. May 9 23:59:04.999207 kernel: CPU features: detected: 32-bit EL0 Support May 9 23:59:04.999215 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 9 23:59:04.999222 kernel: CPU features: detected: Common not Private translations May 9 23:59:04.999229 kernel: CPU features: detected: CRC32 instructions May 9 23:59:04.999237 kernel: CPU features: detected: Enhanced Virtualization Traps May 9 23:59:04.999245 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 9 23:59:04.999252 kernel: CPU features: detected: LSE atomic instructions May 9 23:59:04.999259 kernel: CPU features: detected: Privileged Access Never May 9 23:59:04.999267 kernel: CPU features: detected: RAS Extension Support May 9 23:59:04.999274 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 9 23:59:04.999281 kernel: CPU: All CPU(s) started at EL1 May 9 23:59:04.999288 kernel: alternatives: applying system-wide alternatives May 9 23:59:04.999295 kernel: devtmpfs: initialized May 9 23:59:04.999302 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 9 23:59:04.999311 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 9 23:59:04.999318 kernel: pinctrl core: initialized pinctrl subsystem May 9 23:59:04.999325 kernel: SMBIOS 3.0.0 present. 
May 9 23:59:04.999332 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 May 9 23:59:04.999339 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 9 23:59:04.999346 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 9 23:59:04.999414 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 9 23:59:04.999422 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 9 23:59:04.999429 kernel: audit: initializing netlink subsys (disabled) May 9 23:59:04.999438 kernel: audit: type=2000 audit(0.025:1): state=initialized audit_enabled=0 res=1 May 9 23:59:04.999445 kernel: thermal_sys: Registered thermal governor 'step_wise' May 9 23:59:04.999453 kernel: cpuidle: using governor menu May 9 23:59:04.999460 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. May 9 23:59:04.999467 kernel: ASID allocator initialised with 32768 entries May 9 23:59:04.999474 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 9 23:59:04.999481 kernel: Serial: AMBA PL011 UART driver May 9 23:59:04.999488 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL May 9 23:59:04.999495 kernel: Modules: 0 pages in range for non-PLT usage May 9 23:59:04.999504 kernel: Modules: 508944 pages in range for PLT usage May 9 23:59:04.999511 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 9 23:59:04.999518 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page May 9 23:59:04.999530 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages May 9 23:59:04.999538 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page May 9 23:59:04.999545 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 9 23:59:04.999552 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page May 9 23:59:04.999559 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages May 9 23:59:04.999566 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page May 9 23:59:04.999576 kernel: ACPI: Added _OSI(Module Device) May 9 23:59:04.999583 kernel: ACPI: Added _OSI(Processor Device) May 9 23:59:04.999591 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 9 23:59:04.999598 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 9 23:59:04.999606 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 9 23:59:04.999613 kernel: ACPI: Interpreter enabled May 9 23:59:04.999621 kernel: ACPI: Using GIC for interrupt routing May 9 23:59:04.999628 kernel: ACPI: MCFG table detected, 1 entries May 9 23:59:04.999635 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA May 9 23:59:04.999644 kernel: printk: console [ttyAMA0] enabled May 9 23:59:04.999651 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 9 23:59:04.999797 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 9 23:59:04.999871 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] May 9 23:59:04.999937 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] May 9 23:59:05.000000 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 May 9 23:59:05.000064 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] May 9 23:59:05.000076 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] May 9 23:59:05.000084 kernel: PCI host bridge to bus 0000:00 May 9 
23:59:05.000153 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] May 9 23:59:05.000212 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] May 9 23:59:05.000268 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] May 9 23:59:05.000325 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 9 23:59:05.000419 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 May 9 23:59:05.000501 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 May 9 23:59:05.000583 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] May 9 23:59:05.000668 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] May 9 23:59:05.000736 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] May 9 23:59:05.000800 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] May 9 23:59:05.000863 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] May 9 23:59:05.000927 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] May 9 23:59:05.000990 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] May 9 23:59:05.001047 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] May 9 23:59:05.001105 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] May 9 23:59:05.001115 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 May 9 23:59:05.001122 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 May 9 23:59:05.001130 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 May 9 23:59:05.001137 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 May 9 23:59:05.001144 kernel: iommu: Default domain type: Translated May 9 23:59:05.001154 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 9 23:59:05.001161 kernel: efivars: Registered efivars operations May 9 23:59:05.001168 kernel: vgaarb: loaded May 9 23:59:05.001176 kernel: clocksource: Switched to clocksource arch_sys_counter May 9 23:59:05.001183 kernel: VFS: Disk quotas dquot_6.6.0 May 9 23:59:05.001190 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 9 23:59:05.001197 kernel: pnp: PnP ACPI init May 9 23:59:05.001267 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved May 9 23:59:05.001279 kernel: pnp: PnP ACPI: found 1 devices May 9 23:59:05.001286 kernel: NET: Registered PF_INET protocol family May 9 23:59:05.001293 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 9 23:59:05.001301 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 9 23:59:05.001309 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 9 23:59:05.001316 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 9 23:59:05.001323 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 9 23:59:05.001330 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 9 23:59:05.001337 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 9 23:59:05.001346 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 9 23:59:05.001423 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 9 23:59:05.001432 kernel: PCI: CLS 0 bytes, default 64 May 9 23:59:05.001440 kernel: kvm [1]: HYP mode not available May 9 23:59:05.001447 kernel: Initialise system trusted keyrings May 9 23:59:05.001455 
kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 9 23:59:05.001463 kernel: Key type asymmetric registered May 9 23:59:05.001470 kernel: Asymmetric key parser 'x509' registered May 9 23:59:05.001477 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 9 23:59:05.001487 kernel: io scheduler mq-deadline registered May 9 23:59:05.001494 kernel: io scheduler kyber registered May 9 23:59:05.001501 kernel: io scheduler bfq registered May 9 23:59:05.001509 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 9 23:59:05.001517 kernel: ACPI: button: Power Button [PWRB] May 9 23:59:05.001531 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 9 23:59:05.001611 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) May 9 23:59:05.001622 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 9 23:59:05.001629 kernel: thunder_xcv, ver 1.0 May 9 23:59:05.001639 kernel: thunder_bgx, ver 1.0 May 9 23:59:05.001646 kernel: nicpf, ver 1.0 May 9 23:59:05.001653 kernel: nicvf, ver 1.0 May 9 23:59:05.001734 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 9 23:59:05.001796 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-09T23:59:04 UTC (1746835144) May 9 23:59:05.001806 kernel: hid: raw HID events driver (C) Jiri Kosina May 9 23:59:05.001813 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 9 23:59:05.001820 kernel: watchdog: Delayed init of the lockup detector failed: -19 May 9 23:59:05.001830 kernel: watchdog: Hard watchdog permanently disabled May 9 23:59:05.001837 kernel: NET: Registered PF_INET6 protocol family May 9 23:59:05.001845 kernel: Segment Routing with IPv6 May 9 23:59:05.001852 kernel: In-situ OAM (IOAM) with IPv6 May 9 23:59:05.001859 kernel: NET: Registered PF_PACKET protocol family May 9 23:59:05.001866 kernel: Key type dns_resolver registered May 9 23:59:05.001873 kernel: registered taskstats version 1 May 9 23:59:05.001880 kernel: Loading compiled-in X.509 certificates May 9 23:59:05.001887 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: ce481d22c53070871912748985d4044dfd149966' May 9 23:59:05.001896 kernel: Key type .fscrypt registered May 9 23:59:05.001903 kernel: Key type fscrypt-provisioning registered May 9 23:59:05.001910 kernel: ima: No TPM chip found, activating TPM-bypass! May 9 23:59:05.001917 kernel: ima: Allocated hash algorithm: sha1 May 9 23:59:05.001925 kernel: ima: No architecture policies found May 9 23:59:05.001932 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 9 23:59:05.001939 kernel: clk: Disabling unused clocks May 9 23:59:05.001946 kernel: Freeing unused kernel memory: 39744K May 9 23:59:05.001953 kernel: Run /init as init process May 9 23:59:05.001961 kernel: with arguments: May 9 23:59:05.001968 kernel: /init May 9 23:59:05.001975 kernel: with environment: May 9 23:59:05.001982 kernel: HOME=/ May 9 23:59:05.001989 kernel: TERM=linux May 9 23:59:05.001996 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 9 23:59:05.002005 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 9 23:59:05.002014 systemd[1]: Detected virtualization kvm. 
May 9 23:59:05.002023 systemd[1]: Detected architecture arm64. May 9 23:59:05.002031 systemd[1]: Running in initrd. May 9 23:59:05.002038 systemd[1]: No hostname configured, using default hostname. May 9 23:59:05.002045 systemd[1]: Hostname set to . May 9 23:59:05.002053 systemd[1]: Initializing machine ID from VM UUID. May 9 23:59:05.002061 systemd[1]: Queued start job for default target initrd.target. May 9 23:59:05.002068 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 9 23:59:05.002076 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 9 23:59:05.002086 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 9 23:59:05.002093 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 9 23:59:05.002101 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 9 23:59:05.002109 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 9 23:59:05.002118 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 9 23:59:05.002126 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 9 23:59:05.002135 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 9 23:59:05.002142 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 9 23:59:05.002150 systemd[1]: Reached target paths.target - Path Units. May 9 23:59:05.002158 systemd[1]: Reached target slices.target - Slice Units. May 9 23:59:05.002165 systemd[1]: Reached target swap.target - Swaps. May 9 23:59:05.002173 systemd[1]: Reached target timers.target - Timer Units. May 9 23:59:05.002181 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 9 23:59:05.002189 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 9 23:59:05.002196 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 9 23:59:05.002205 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 9 23:59:05.002213 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 9 23:59:05.002221 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 9 23:59:05.002229 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 9 23:59:05.002236 systemd[1]: Reached target sockets.target - Socket Units. May 9 23:59:05.002244 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 9 23:59:05.002252 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 9 23:59:05.002259 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 9 23:59:05.002267 systemd[1]: Starting systemd-fsck-usr.service... May 9 23:59:05.002276 systemd[1]: Starting systemd-journald.service - Journal Service... May 9 23:59:05.002284 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 9 23:59:05.002291 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 9 23:59:05.002299 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 9 23:59:05.002307 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
May 9 23:59:05.002315 systemd[1]: Finished systemd-fsck-usr.service. May 9 23:59:05.002325 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 9 23:59:05.002349 systemd-journald[239]: Collecting audit messages is disabled. May 9 23:59:05.002380 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 9 23:59:05.002389 systemd-journald[239]: Journal started May 9 23:59:05.002407 systemd-journald[239]: Runtime Journal (/run/log/journal/188101025dbf461daaa125a4747dbd56) is 5.9M, max 47.3M, 41.4M free. May 9 23:59:04.993420 systemd-modules-load[240]: Inserted module 'overlay' May 9 23:59:05.005004 systemd[1]: Started systemd-journald.service - Journal Service. May 9 23:59:05.005512 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 9 23:59:05.010390 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 9 23:59:05.012650 systemd-modules-load[240]: Inserted module 'br_netfilter' May 9 23:59:05.013629 kernel: Bridge firewalling registered May 9 23:59:05.015556 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 9 23:59:05.018805 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 9 23:59:05.020609 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 9 23:59:05.022269 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 9 23:59:05.025776 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 9 23:59:05.030902 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 9 23:59:05.036270 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 9 23:59:05.038493 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 9 23:59:05.048507 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 9 23:59:05.049561 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 9 23:59:05.052273 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 9 23:59:05.066519 dracut-cmdline[282]: dracut-dracut-053 May 9 23:59:05.069160 dracut-cmdline[282]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9a99b6d651f8aeb5d7bfd4370bc36449b7e5138d2f42e40e0aede009df00f5a4 May 9 23:59:05.080024 systemd-resolved[278]: Positive Trust Anchors: May 9 23:59:05.080102 systemd-resolved[278]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 9 23:59:05.080135 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 9 23:59:05.085032 systemd-resolved[278]: Defaulting to hostname 'linux'. May 9 23:59:05.086035 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 9 23:59:05.090028 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 9 23:59:05.145387 kernel: SCSI subsystem initialized May 9 23:59:05.149371 kernel: Loading iSCSI transport class v2.0-870. May 9 23:59:05.157376 kernel: iscsi: registered transport (tcp) May 9 23:59:05.170445 kernel: iscsi: registered transport (qla4xxx) May 9 23:59:05.170461 kernel: QLogic iSCSI HBA Driver May 9 23:59:05.217567 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 9 23:59:05.234530 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 9 23:59:05.251218 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 9 23:59:05.251305 kernel: device-mapper: uevent: version 1.0.3 May 9 23:59:05.252585 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 9 23:59:05.301387 kernel: raid6: neonx8 gen() 15769 MB/s May 9 23:59:05.318377 kernel: raid6: neonx4 gen() 15644 MB/s May 9 23:59:05.335704 kernel: raid6: neonx2 gen() 13239 MB/s May 9 23:59:05.352383 kernel: raid6: neonx1 gen() 10483 MB/s May 9 23:59:05.369393 kernel: raid6: int64x8 gen() 6959 MB/s May 9 23:59:05.386378 kernel: raid6: int64x4 gen() 7334 MB/s May 9 23:59:05.403377 kernel: raid6: int64x2 gen() 6128 MB/s May 9 23:59:05.420376 kernel: raid6: int64x1 gen() 5050 MB/s May 9 23:59:05.420416 kernel: raid6: using algorithm neonx8 gen() 15769 MB/s May 9 23:59:05.437913 kernel: raid6: .... xor() 11919 MB/s, rmw enabled May 9 23:59:05.437960 kernel: raid6: using neon recovery algorithm May 9 23:59:05.442934 kernel: xor: measuring software checksum speed May 9 23:59:05.442984 kernel: 8regs : 19821 MB/sec May 9 23:59:05.442995 kernel: 32regs : 19650 MB/sec May 9 23:59:05.443387 kernel: arm64_neon : 25393 MB/sec May 9 23:59:05.444584 kernel: xor: using function: arm64_neon (25393 MB/sec) May 9 23:59:05.496395 kernel: Btrfs loaded, zoned=no, fsverity=no May 9 23:59:05.508182 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 9 23:59:05.525644 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 9 23:59:05.543350 systemd-udevd[464]: Using default interface naming scheme 'v255'. May 9 23:59:05.546863 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 9 23:59:05.549247 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 9 23:59:05.569986 dracut-pre-trigger[472]: rd.md=0: removing MD RAID activation May 9 23:59:05.610438 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
May 9 23:59:05.620558 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 9 23:59:05.665241 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 9 23:59:05.673765 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 9 23:59:05.687661 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 9 23:59:05.689247 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 9 23:59:05.690957 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 9 23:59:05.693089 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 9 23:59:05.702809 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 9 23:59:05.711273 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 9 23:59:05.720412 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues May 9 23:59:05.720628 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 9 23:59:05.722784 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 9 23:59:05.722812 kernel: GPT:9289727 != 19775487 May 9 23:59:05.722830 kernel: GPT:Alternate GPT header not at the end of the disk. May 9 23:59:05.723018 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 9 23:59:05.725370 kernel: GPT:9289727 != 19775487 May 9 23:59:05.725393 kernel: GPT: Use GNU Parted to correct GPT errors. May 9 23:59:05.725411 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 9 23:59:05.723147 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 9 23:59:05.728111 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 9 23:59:05.729288 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 9 23:59:05.729489 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 9 23:59:05.731799 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 9 23:59:05.741634 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 9 23:59:05.751380 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (521) May 9 23:59:05.756416 kernel: BTRFS: device fsid 278061fd-7ea0-499f-a3bc-343431c2d8fa devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (525) May 9 23:59:05.755942 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 9 23:59:05.761399 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 9 23:59:05.769143 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 9 23:59:05.774069 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 9 23:59:05.778163 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 9 23:59:05.779475 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 9 23:59:05.789537 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 9 23:59:05.791373 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 9 23:59:05.797043 disk-uuid[554]: Primary Header is updated. 
May 9 23:59:05.797043 disk-uuid[554]: Secondary Entries is updated. May 9 23:59:05.797043 disk-uuid[554]: Secondary Header is updated. May 9 23:59:05.800409 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 9 23:59:05.817759 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 9 23:59:06.810871 disk-uuid[555]: The operation has completed successfully. May 9 23:59:06.812538 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 9 23:59:06.837591 systemd[1]: disk-uuid.service: Deactivated successfully. May 9 23:59:06.837711 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 9 23:59:06.851551 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 9 23:59:06.856088 sh[575]: Success May 9 23:59:06.865380 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 9 23:59:06.895376 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 9 23:59:06.908895 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 9 23:59:06.910502 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 9 23:59:06.922172 kernel: BTRFS info (device dm-0): first mount of filesystem 278061fd-7ea0-499f-a3bc-343431c2d8fa May 9 23:59:06.922237 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 9 23:59:06.922249 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 9 23:59:06.922259 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 9 23:59:06.922825 kernel: BTRFS info (device dm-0): using free space tree May 9 23:59:06.926879 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 9 23:59:06.928096 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 9 23:59:06.928902 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 9 23:59:06.931637 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 9 23:59:06.942119 kernel: BTRFS info (device vda6): first mount of filesystem 8d2a58d1-82bb-4bb8-8ae0-4baddd3cc4e0 May 9 23:59:06.942175 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 9 23:59:06.942187 kernel: BTRFS info (device vda6): using free space tree May 9 23:59:06.945395 kernel: BTRFS info (device vda6): auto enabling async discard May 9 23:59:06.952974 systemd[1]: mnt-oem.mount: Deactivated successfully. May 9 23:59:06.954482 kernel: BTRFS info (device vda6): last unmount of filesystem 8d2a58d1-82bb-4bb8-8ae0-4baddd3cc4e0 May 9 23:59:06.960224 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 9 23:59:06.972613 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 9 23:59:07.043732 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 9 23:59:07.051658 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 9 23:59:07.102492 systemd-networkd[761]: lo: Link UP May 9 23:59:07.102503 systemd-networkd[761]: lo: Gained carrier May 9 23:59:07.103439 systemd-networkd[761]: Enumeration completed May 9 23:59:07.103985 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
May 9 23:59:07.103988 systemd-networkd[761]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 9 23:59:07.105158 systemd-networkd[761]: eth0: Link UP May 9 23:59:07.105162 systemd-networkd[761]: eth0: Gained carrier May 9 23:59:07.105169 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 23:59:07.106397 systemd[1]: Started systemd-networkd.service - Network Configuration. May 9 23:59:07.107500 systemd[1]: Reached target network.target - Network. May 9 23:59:07.127984 ignition[669]: Ignition 2.20.0 May 9 23:59:07.127998 ignition[669]: Stage: fetch-offline May 9 23:59:07.128037 ignition[669]: no configs at "/usr/lib/ignition/base.d" May 9 23:59:07.128046 ignition[669]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 23:59:07.128433 ignition[669]: parsed url from cmdline: "" May 9 23:59:07.131419 systemd-networkd[761]: eth0: DHCPv4 address 10.0.0.100/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 9 23:59:07.128437 ignition[669]: no config URL provided May 9 23:59:07.128442 ignition[669]: reading system config file "/usr/lib/ignition/user.ign" May 9 23:59:07.128450 ignition[669]: no config at "/usr/lib/ignition/user.ign" May 9 23:59:07.128480 ignition[669]: op(1): [started] loading QEMU firmware config module May 9 23:59:07.128489 ignition[669]: op(1): executing: "modprobe" "qemu_fw_cfg" May 9 23:59:07.144406 ignition[669]: op(1): [finished] loading QEMU firmware config module May 9 23:59:07.150375 ignition[669]: parsing config with SHA512: 4bd5a4acceba8bd859369c8dea22d0719ecf069ae00f44a98de75c2fab5eb367c367ed65ccae4862362b21e00897b2b55c3ff5191e504bb15f5320a72ff6647f May 9 23:59:07.154672 unknown[669]: fetched base config from "system" May 9 23:59:07.154682 unknown[669]: fetched user config from "qemu" May 9 23:59:07.154964 ignition[669]: fetch-offline: fetch-offline passed May 9 23:59:07.157083 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 9 23:59:07.155030 ignition[669]: Ignition finished successfully May 9 23:59:07.158565 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 9 23:59:07.164529 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 9 23:59:07.176292 ignition[771]: Ignition 2.20.0 May 9 23:59:07.176305 ignition[771]: Stage: kargs May 9 23:59:07.176500 ignition[771]: no configs at "/usr/lib/ignition/base.d" May 9 23:59:07.176511 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 23:59:07.179417 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 9 23:59:07.177228 ignition[771]: kargs: kargs passed May 9 23:59:07.177274 ignition[771]: Ignition finished successfully May 9 23:59:07.189586 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 9 23:59:07.200106 ignition[779]: Ignition 2.20.0 May 9 23:59:07.200117 ignition[779]: Stage: disks May 9 23:59:07.200283 ignition[779]: no configs at "/usr/lib/ignition/base.d" May 9 23:59:07.200293 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 23:59:07.201007 ignition[779]: disks: disks passed May 9 23:59:07.203064 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 9 23:59:07.201052 ignition[779]: Ignition finished successfully May 9 23:59:07.204880 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
May 9 23:59:07.206091 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 9 23:59:07.207848 systemd[1]: Reached target local-fs.target - Local File Systems. May 9 23:59:07.209289 systemd[1]: Reached target sysinit.target - System Initialization. May 9 23:59:07.210987 systemd[1]: Reached target basic.target - Basic System. May 9 23:59:07.213616 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 9 23:59:07.234756 systemd-fsck[789]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 9 23:59:07.240790 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 9 23:59:07.253477 systemd[1]: Mounting sysroot.mount - /sysroot... May 9 23:59:07.300323 systemd[1]: Mounted sysroot.mount - /sysroot. May 9 23:59:07.301739 kernel: EXT4-fs (vda9): mounted filesystem caef9e74-1f21-4595-8586-7560f5103527 r/w with ordered data mode. Quota mode: none. May 9 23:59:07.301673 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 9 23:59:07.313486 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 9 23:59:07.315780 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 9 23:59:07.316883 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 9 23:59:07.316926 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 9 23:59:07.316949 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 9 23:59:07.322680 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 9 23:59:07.324459 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 9 23:59:07.327368 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (797) May 9 23:59:07.329716 kernel: BTRFS info (device vda6): first mount of filesystem 8d2a58d1-82bb-4bb8-8ae0-4baddd3cc4e0 May 9 23:59:07.329751 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 9 23:59:07.329762 kernel: BTRFS info (device vda6): using free space tree May 9 23:59:07.332371 kernel: BTRFS info (device vda6): auto enabling async discard May 9 23:59:07.342326 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 9 23:59:07.380114 initrd-setup-root[821]: cut: /sysroot/etc/passwd: No such file or directory May 9 23:59:07.383453 initrd-setup-root[828]: cut: /sysroot/etc/group: No such file or directory May 9 23:59:07.387898 initrd-setup-root[835]: cut: /sysroot/etc/shadow: No such file or directory May 9 23:59:07.391041 initrd-setup-root[842]: cut: /sysroot/etc/gshadow: No such file or directory May 9 23:59:07.470535 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 9 23:59:07.481523 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 9 23:59:07.483020 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 9 23:59:07.488382 kernel: BTRFS info (device vda6): last unmount of filesystem 8d2a58d1-82bb-4bb8-8ae0-4baddd3cc4e0 May 9 23:59:07.507052 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
May 9 23:59:07.508621 ignition[911]: INFO : Ignition 2.20.0 May 9 23:59:07.508621 ignition[911]: INFO : Stage: mount May 9 23:59:07.508621 ignition[911]: INFO : no configs at "/usr/lib/ignition/base.d" May 9 23:59:07.508621 ignition[911]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 23:59:07.513454 ignition[911]: INFO : mount: mount passed May 9 23:59:07.513454 ignition[911]: INFO : Ignition finished successfully May 9 23:59:07.510314 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 9 23:59:07.518451 systemd[1]: Starting ignition-files.service - Ignition (files)... May 9 23:59:07.920712 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 9 23:59:07.929570 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 9 23:59:07.935860 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (924) May 9 23:59:07.935903 kernel: BTRFS info (device vda6): first mount of filesystem 8d2a58d1-82bb-4bb8-8ae0-4baddd3cc4e0 May 9 23:59:07.935914 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 9 23:59:07.937368 kernel: BTRFS info (device vda6): using free space tree May 9 23:59:07.939366 kernel: BTRFS info (device vda6): auto enabling async discard May 9 23:59:07.940298 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 9 23:59:07.962184 ignition[941]: INFO : Ignition 2.20.0 May 9 23:59:07.962184 ignition[941]: INFO : Stage: files May 9 23:59:07.963685 ignition[941]: INFO : no configs at "/usr/lib/ignition/base.d" May 9 23:59:07.963685 ignition[941]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 23:59:07.963685 ignition[941]: DEBUG : files: compiled without relabeling support, skipping May 9 23:59:07.966455 ignition[941]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 9 23:59:07.966455 ignition[941]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 9 23:59:07.969569 ignition[941]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 9 23:59:07.970653 ignition[941]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 9 23:59:07.970653 ignition[941]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 9 23:59:07.970186 unknown[941]: wrote ssh authorized keys file for user: core May 9 23:59:07.973838 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" May 9 23:59:07.973838 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" May 9 23:59:07.973838 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" May 9 23:59:07.973838 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 9 23:59:07.973838 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 9 23:59:07.973838 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 9 23:59:07.973838 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): 
[started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 9 23:59:07.973838 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 May 9 23:59:08.261334 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK May 9 23:59:08.579527 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 9 23:59:08.579527 ignition[941]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" May 9 23:59:08.582762 ignition[941]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 9 23:59:08.582762 ignition[941]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 9 23:59:08.582762 ignition[941]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" May 9 23:59:08.582762 ignition[941]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" May 9 23:59:08.613474 ignition[941]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" May 9 23:59:08.617970 ignition[941]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 9 23:59:08.620395 ignition[941]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" May 9 23:59:08.620395 ignition[941]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" May 9 23:59:08.620395 ignition[941]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" May 9 23:59:08.620395 ignition[941]: INFO : files: files passed May 9 23:59:08.620395 ignition[941]: INFO : Ignition finished successfully May 9 23:59:08.620766 systemd[1]: Finished ignition-files.service - Ignition (files). May 9 23:59:08.632594 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 9 23:59:08.635280 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 9 23:59:08.636721 systemd[1]: ignition-quench.service: Deactivated successfully. May 9 23:59:08.636811 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 9 23:59:08.643113 initrd-setup-root-after-ignition[970]: grep: /sysroot/oem/oem-release: No such file or directory May 9 23:59:08.646348 initrd-setup-root-after-ignition[972]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 9 23:59:08.646348 initrd-setup-root-after-ignition[972]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 9 23:59:08.649999 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 9 23:59:08.649318 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 9 23:59:08.651029 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 9 23:59:08.660574 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 9 23:59:08.682796 systemd[1]: initrd-parse-etc.service: Deactivated successfully. 
May 9 23:59:08.682937 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 9 23:59:08.685037 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 9 23:59:08.686763 systemd[1]: Reached target initrd.target - Initrd Default Target. May 9 23:59:08.688382 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 9 23:59:08.698539 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 9 23:59:08.710396 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 9 23:59:08.712813 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 9 23:59:08.725570 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 9 23:59:08.726520 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 9 23:59:08.728303 systemd[1]: Stopped target timers.target - Timer Units. May 9 23:59:08.729964 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 9 23:59:08.730095 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 9 23:59:08.732420 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 9 23:59:08.734194 systemd[1]: Stopped target basic.target - Basic System. May 9 23:59:08.735694 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 9 23:59:08.737265 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 9 23:59:08.739047 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 9 23:59:08.740805 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 9 23:59:08.742408 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 9 23:59:08.744164 systemd[1]: Stopped target sysinit.target - System Initialization. May 9 23:59:08.746021 systemd[1]: Stopped target local-fs.target - Local File Systems. May 9 23:59:08.747549 systemd[1]: Stopped target swap.target - Swaps. May 9 23:59:08.748900 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 9 23:59:08.749030 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 9 23:59:08.751174 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 9 23:59:08.752966 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 9 23:59:08.754653 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 9 23:59:08.759432 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 9 23:59:08.760418 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 9 23:59:08.760556 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 9 23:59:08.763209 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 9 23:59:08.763317 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 9 23:59:08.765154 systemd[1]: Stopped target paths.target - Path Units. May 9 23:59:08.766612 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 9 23:59:08.766722 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 9 23:59:08.768433 systemd[1]: Stopped target slices.target - Slice Units. May 9 23:59:08.769815 systemd[1]: Stopped target sockets.target - Socket Units. 
May 9 23:59:08.771437 systemd[1]: iscsid.socket: Deactivated successfully. May 9 23:59:08.771542 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 9 23:59:08.773599 systemd[1]: iscsiuio.socket: Deactivated successfully. May 9 23:59:08.773678 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 9 23:59:08.775081 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 9 23:59:08.775192 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 9 23:59:08.776709 systemd[1]: ignition-files.service: Deactivated successfully. May 9 23:59:08.776805 systemd[1]: Stopped ignition-files.service - Ignition (files). May 9 23:59:08.793587 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 9 23:59:08.794333 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 9 23:59:08.794485 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 9 23:59:08.797118 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 9 23:59:08.797827 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 9 23:59:08.797945 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 9 23:59:08.799867 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 9 23:59:08.799966 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 9 23:59:08.800916 systemd-networkd[761]: eth0: Gained IPv6LL May 9 23:59:08.807480 ignition[996]: INFO : Ignition 2.20.0 May 9 23:59:08.807480 ignition[996]: INFO : Stage: umount May 9 23:59:08.807480 ignition[996]: INFO : no configs at "/usr/lib/ignition/base.d" May 9 23:59:08.807480 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 23:59:08.807480 ignition[996]: INFO : umount: umount passed May 9 23:59:08.807480 ignition[996]: INFO : Ignition finished successfully May 9 23:59:08.807420 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 9 23:59:08.807532 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 9 23:59:08.808797 systemd[1]: ignition-mount.service: Deactivated successfully. May 9 23:59:08.808896 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 9 23:59:08.810712 systemd[1]: Stopped target network.target - Network. May 9 23:59:08.811753 systemd[1]: ignition-disks.service: Deactivated successfully. May 9 23:59:08.811854 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 9 23:59:08.813658 systemd[1]: ignition-kargs.service: Deactivated successfully. May 9 23:59:08.813703 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 9 23:59:08.815558 systemd[1]: ignition-setup.service: Deactivated successfully. May 9 23:59:08.815600 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 9 23:59:08.817296 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 9 23:59:08.817339 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 9 23:59:08.819108 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 9 23:59:08.821605 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 9 23:59:08.823272 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 9 23:59:08.829205 systemd[1]: systemd-resolved.service: Deactivated successfully. May 9 23:59:08.829435 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
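The umount-stage lines note that there is no base config directory for the detected platform ("/usr/lib/ignition/base.platform.d/qemu"), i.e. only the user-supplied config applies on this QEMU guest. On QEMU, that user config is conventionally injected through fw_cfg under the documented Flatcar key opt/org.flatcar-linux/config; the invocation below is an illustrative sketch only (machine type, memory, file names are assumptions, and the UEFI/EDK II firmware arguments are omitted):

    qemu-system-aarch64 \
      -machine virt -cpu host -accel kvm -m 2048 -smp 4 \
      -drive if=virtio,file=flatcar_production_qemu_uefi_image.img \
      -fw_cfg name=opt/org.flatcar-linux/config,file=config.ign \
      -nographic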
May 9 23:59:08.833417 systemd-networkd[761]: eth0: DHCPv6 lease lost May 9 23:59:08.833820 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 9 23:59:08.833933 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 9 23:59:08.835885 systemd[1]: systemd-networkd.service: Deactivated successfully. May 9 23:59:08.837240 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 9 23:59:08.839004 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 9 23:59:08.839080 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 9 23:59:08.852478 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 9 23:59:08.853199 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 9 23:59:08.853266 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 9 23:59:08.855227 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 9 23:59:08.855271 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 9 23:59:08.857139 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 9 23:59:08.857187 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 9 23:59:08.859723 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 9 23:59:08.864993 systemd[1]: sysroot-boot.service: Deactivated successfully. May 9 23:59:08.865086 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 9 23:59:08.868159 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 9 23:59:08.868267 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 9 23:59:08.871558 systemd[1]: network-cleanup.service: Deactivated successfully. May 9 23:59:08.871661 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 9 23:59:08.875971 systemd[1]: systemd-udevd.service: Deactivated successfully. May 9 23:59:08.876098 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 9 23:59:08.879271 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 9 23:59:08.879340 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 9 23:59:08.880228 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 9 23:59:08.880264 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 9 23:59:08.882046 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 9 23:59:08.882100 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 9 23:59:08.885088 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 9 23:59:08.885133 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 9 23:59:08.887761 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 9 23:59:08.887805 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 9 23:59:08.903561 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 9 23:59:08.904391 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 9 23:59:08.904455 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 9 23:59:08.906336 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 9 23:59:08.906446 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
May 9 23:59:08.911788 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 9 23:59:08.913161 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 9 23:59:08.915683 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 9 23:59:08.917411 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 9 23:59:08.927372 systemd[1]: Switching root. May 9 23:59:08.958350 systemd-journald[239]: Journal stopped May 9 23:59:09.675278 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). May 9 23:59:09.675334 kernel: SELinux: policy capability network_peer_controls=1 May 9 23:59:09.675346 kernel: SELinux: policy capability open_perms=1 May 9 23:59:09.675489 kernel: SELinux: policy capability extended_socket_class=1 May 9 23:59:09.675512 kernel: SELinux: policy capability always_check_network=0 May 9 23:59:09.675523 kernel: SELinux: policy capability cgroup_seclabel=1 May 9 23:59:09.675532 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 9 23:59:09.675542 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 9 23:59:09.675551 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 9 23:59:09.675560 kernel: audit: type=1403 audit(1746835149.087:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 9 23:59:09.675570 systemd[1]: Successfully loaded SELinux policy in 33.350ms. May 9 23:59:09.675592 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.811ms. May 9 23:59:09.675604 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 9 23:59:09.675617 systemd[1]: Detected virtualization kvm. May 9 23:59:09.675627 systemd[1]: Detected architecture arm64. May 9 23:59:09.675637 systemd[1]: Detected first boot. May 9 23:59:09.675647 systemd[1]: Initializing machine ID from VM UUID. May 9 23:59:09.675657 zram_generator::config[1040]: No configuration found. May 9 23:59:09.675672 systemd[1]: Populated /etc with preset unit settings. May 9 23:59:09.675683 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 9 23:59:09.675693 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 9 23:59:09.675705 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 9 23:59:09.675716 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 9 23:59:09.675727 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 9 23:59:09.675742 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 9 23:59:09.675757 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 9 23:59:09.675768 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 9 23:59:09.675780 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 9 23:59:09.675791 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 9 23:59:09.675801 systemd[1]: Created slice user.slice - User and Session Slice. May 9 23:59:09.675811 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
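The line "zram_generator::config[1040]: No configuration found." means the zram generator ran but set up no compressed swap devices. Enabling it would take only a small config file; a sketch, assuming the standard /etc/systemd/zram-generator.conf location and keys (no such file exists on this host):

    # /etc/systemd/zram-generator.conf (illustrative)
    [zram0]
    zram-size = ram / 2
    compression-algorithm = zstd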
May 9 23:59:09.675822 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 9 23:59:09.675832 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 9 23:59:09.675843 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 9 23:59:09.675853 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 9 23:59:09.675865 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 9 23:59:09.675875 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 9 23:59:09.675886 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 9 23:59:09.675896 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 9 23:59:09.675907 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 9 23:59:09.675918 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 9 23:59:09.675928 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 9 23:59:09.675938 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 9 23:59:09.675950 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 9 23:59:09.675961 systemd[1]: Reached target slices.target - Slice Units. May 9 23:59:09.675971 systemd[1]: Reached target swap.target - Swaps. May 9 23:59:09.675982 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 9 23:59:09.675992 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 9 23:59:09.676002 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 9 23:59:09.676012 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 9 23:59:09.676022 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 9 23:59:09.676032 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 9 23:59:09.676044 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 9 23:59:09.676054 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 9 23:59:09.676065 systemd[1]: Mounting media.mount - External Media Directory... May 9 23:59:09.676075 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 9 23:59:09.676085 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 9 23:59:09.676095 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 9 23:59:09.676106 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 9 23:59:09.676116 systemd[1]: Reached target machines.target - Containers. May 9 23:59:09.676127 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 9 23:59:09.676139 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 9 23:59:09.676151 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 9 23:59:09.676162 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 9 23:59:09.676173 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
May 9 23:59:09.676184 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 9 23:59:09.676194 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 9 23:59:09.676206 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 9 23:59:09.676217 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 9 23:59:09.676229 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 9 23:59:09.676240 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 9 23:59:09.676250 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 9 23:59:09.676260 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 9 23:59:09.676270 systemd[1]: Stopped systemd-fsck-usr.service. May 9 23:59:09.676280 kernel: fuse: init (API version 7.39) May 9 23:59:09.676290 systemd[1]: Starting systemd-journald.service - Journal Service... May 9 23:59:09.676301 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 9 23:59:09.676310 kernel: ACPI: bus type drm_connector registered May 9 23:59:09.676322 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 9 23:59:09.676332 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 9 23:59:09.676342 kernel: loop: module loaded May 9 23:59:09.676359 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 9 23:59:09.676373 systemd[1]: verity-setup.service: Deactivated successfully. May 9 23:59:09.676383 systemd[1]: Stopped verity-setup.service. May 9 23:59:09.676393 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 9 23:59:09.676404 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 9 23:59:09.676434 systemd-journald[1100]: Collecting audit messages is disabled. May 9 23:59:09.676458 systemd[1]: Mounted media.mount - External Media Directory. May 9 23:59:09.676469 systemd-journald[1100]: Journal started May 9 23:59:09.676490 systemd-journald[1100]: Runtime Journal (/run/log/journal/188101025dbf461daaa125a4747dbd56) is 5.9M, max 47.3M, 41.4M free. May 9 23:59:09.476525 systemd[1]: Queued start job for default target multi-user.target. May 9 23:59:09.497008 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 9 23:59:09.497391 systemd[1]: systemd-journald.service: Deactivated successfully. May 9 23:59:09.679388 systemd[1]: Started systemd-journald.service - Journal Service. May 9 23:59:09.679998 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 9 23:59:09.681138 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 9 23:59:09.682169 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 9 23:59:09.684396 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 9 23:59:09.685783 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 9 23:59:09.685924 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 9 23:59:09.687102 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 9 23:59:09.687246 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 9 23:59:09.688551 systemd[1]: modprobe@drm.service: Deactivated successfully. 
May 9 23:59:09.688684 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 9 23:59:09.689726 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 9 23:59:09.689852 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 9 23:59:09.691236 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 9 23:59:09.691396 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 9 23:59:09.692583 systemd[1]: modprobe@loop.service: Deactivated successfully. May 9 23:59:09.692706 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 9 23:59:09.693836 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 9 23:59:09.696776 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 9 23:59:09.698161 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 9 23:59:09.708955 systemd[1]: Reached target network-pre.target - Preparation for Network. May 9 23:59:09.720510 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 9 23:59:09.722878 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 9 23:59:09.723877 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 9 23:59:09.723926 systemd[1]: Reached target local-fs.target - Local File Systems. May 9 23:59:09.725898 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 9 23:59:09.728174 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 9 23:59:09.730230 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 9 23:59:09.731212 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 9 23:59:09.738000 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 9 23:59:09.742565 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 9 23:59:09.745810 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 9 23:59:09.750459 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 9 23:59:09.751609 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 9 23:59:09.754681 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 9 23:59:09.759879 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 9 23:59:09.762930 systemd-journald[1100]: Time spent on flushing to /var/log/journal/188101025dbf461daaa125a4747dbd56 is 30.332ms for 838 entries. May 9 23:59:09.762930 systemd-journald[1100]: System Journal (/var/log/journal/188101025dbf461daaa125a4747dbd56) is 8.0M, max 195.6M, 187.6M free. May 9 23:59:09.819428 systemd-journald[1100]: Received client request to flush runtime journal. May 9 23:59:09.819522 kernel: loop0: detected capacity change from 0 to 113536 May 9 23:59:09.819566 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 9 23:59:09.765971 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
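systemd-journald reports its computed caps (runtime journal 5.9M of a 47.3M maximum in /run, persistent journal 8.0M of a 195.6M maximum in /var/log/journal) and the flush of the runtime journal to persistent storage. Those maxima come from filesystem-size percentage defaults; they can be pinned explicitly in journald.conf, e.g. with a sketch like the following (values are examples, not this host's settings):

    # /etc/systemd/journald.conf (excerpt, illustrative)
    [Journal]
    Storage=persistent
    RuntimeMaxUse=48M
    SystemMaxUse=196M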
May 9 23:59:09.767401 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 9 23:59:09.768423 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 9 23:59:09.769885 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 9 23:59:09.780906 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 9 23:59:09.783966 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 9 23:59:09.785894 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 9 23:59:09.796664 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 9 23:59:09.806652 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 9 23:59:09.821758 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 9 23:59:09.824021 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 9 23:59:09.825645 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 9 23:59:09.830711 udevadm[1154]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 9 23:59:09.836132 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 9 23:59:09.837653 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 9 23:59:09.845288 kernel: loop1: detected capacity change from 0 to 194096 May 9 23:59:09.850144 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 9 23:59:09.860633 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 9 23:59:09.873731 kernel: loop2: detected capacity change from 0 to 116808 May 9 23:59:09.882835 systemd-tmpfiles[1175]: ACLs are not supported, ignoring. May 9 23:59:09.882854 systemd-tmpfiles[1175]: ACLs are not supported, ignoring. May 9 23:59:09.887625 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 9 23:59:09.903414 kernel: loop3: detected capacity change from 0 to 113536 May 9 23:59:09.911405 kernel: loop4: detected capacity change from 0 to 194096 May 9 23:59:09.923393 kernel: loop5: detected capacity change from 0 to 116808 May 9 23:59:09.935138 (sd-merge)[1179]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 9 23:59:09.935574 (sd-merge)[1179]: Merged extensions into '/usr'. May 9 23:59:09.940017 systemd[1]: Reloading requested from client PID 1144 ('systemd-sysext') (unit systemd-sysext.service)... May 9 23:59:09.940145 systemd[1]: Reloading... May 9 23:59:10.005407 zram_generator::config[1205]: No configuration found. May 9 23:59:10.104568 ldconfig[1132]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 9 23:59:10.114044 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 23:59:10.150650 systemd[1]: Reloading finished in 210 ms. May 9 23:59:10.185665 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 9 23:59:10.192400 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 9 23:59:10.211589 systemd[1]: Starting ensure-sysext.service... 
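The loop0-loop5 capacity changes followed by the (sd-merge) lines are systemd-sysext attaching the three extension images ('containerd-flatcar', 'docker-flatcar', 'kubernetes') and overlaying them onto /usr, which is how the otherwise read-only image gains containerd, docker and kubernetes binaries. The merge can be inspected or redone by hand; a short sketch, assuming the images are exposed under /etc/extensions as in the usual Flatcar sysext workflow:

    # show known extension images and whether they are merged
    systemd-sysext list
    # re-scan /etc/extensions and /var/lib/extensions and rebuild the /usr overlay
    systemd-sysext refresh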
May 9 23:59:10.213576 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 9 23:59:10.241121 systemd[1]: Reloading requested from client PID 1239 ('systemctl') (unit ensure-sysext.service)... May 9 23:59:10.241140 systemd[1]: Reloading... May 9 23:59:10.245479 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 9 23:59:10.245762 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 9 23:59:10.246423 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 9 23:59:10.246652 systemd-tmpfiles[1240]: ACLs are not supported, ignoring. May 9 23:59:10.246697 systemd-tmpfiles[1240]: ACLs are not supported, ignoring. May 9 23:59:10.253306 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot. May 9 23:59:10.253319 systemd-tmpfiles[1240]: Skipping /boot May 9 23:59:10.261485 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot. May 9 23:59:10.261513 systemd-tmpfiles[1240]: Skipping /boot May 9 23:59:10.304395 zram_generator::config[1264]: No configuration found. May 9 23:59:10.390878 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 23:59:10.427549 systemd[1]: Reloading finished in 186 ms. May 9 23:59:10.445061 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 9 23:59:10.461451 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 9 23:59:10.470466 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 9 23:59:10.472917 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 9 23:59:10.475319 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 9 23:59:10.479772 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 9 23:59:10.484826 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 9 23:59:10.490832 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 9 23:59:10.499138 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 9 23:59:10.503695 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 9 23:59:10.506300 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 9 23:59:10.509817 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 9 23:59:10.511097 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 9 23:59:10.515731 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 9 23:59:10.519282 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 9 23:59:10.521171 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 9 23:59:10.522006 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 9 23:59:10.523758 systemd[1]: modprobe@loop.service: Deactivated successfully. 
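The "Duplicate line for path ..., ignoring" warnings from systemd-tmpfiles mean that more than one tmpfiles.d fragment declares the same path (/root, /var/log/journal, /var/lib/systemd); only the first declaration is honoured. Each declaration is a single line of type, path, mode, owner, group and age, as in this illustrative drop-in (not one of the files named above):

    # /etc/tmpfiles.d/example.conf (illustrative)
    d /var/log/myapp 0755 root root 14d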
May 9 23:59:10.523897 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 9 23:59:10.527073 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 9 23:59:10.527211 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 9 23:59:10.530955 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 9 23:59:10.531847 systemd-udevd[1308]: Using default interface naming scheme 'v255'. May 9 23:59:10.536738 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 9 23:59:10.542701 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 9 23:59:10.549963 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 9 23:59:10.552409 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 9 23:59:10.557214 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 9 23:59:10.558700 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 9 23:59:10.560094 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 9 23:59:10.564143 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 9 23:59:10.565676 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 9 23:59:10.565807 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 9 23:59:10.567931 systemd[1]: modprobe@loop.service: Deactivated successfully. May 9 23:59:10.568090 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 9 23:59:10.580013 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 9 23:59:10.586472 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 9 23:59:10.586649 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 9 23:59:10.590885 systemd[1]: Finished ensure-sysext.service. May 9 23:59:10.592961 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 9 23:59:10.596958 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 9 23:59:10.602612 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 9 23:59:10.606724 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 9 23:59:10.612908 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 9 23:59:10.615723 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 9 23:59:10.622114 augenrules[1375]: No rules May 9 23:59:10.623916 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 9 23:59:10.626349 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 9 23:59:10.629998 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 9 23:59:10.632666 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 9 23:59:10.633170 systemd[1]: audit-rules.service: Deactivated successfully. 
May 9 23:59:10.633827 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 9 23:59:10.637802 systemd[1]: modprobe@loop.service: Deactivated successfully. May 9 23:59:10.639781 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 9 23:59:10.642084 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 9 23:59:10.642231 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 9 23:59:10.644825 systemd[1]: modprobe@drm.service: Deactivated successfully. May 9 23:59:10.645009 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 9 23:59:10.646182 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 9 23:59:10.653393 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1338) May 9 23:59:10.656687 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 9 23:59:10.691188 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 9 23:59:10.700239 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 9 23:59:10.735349 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 9 23:59:10.739931 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 9 23:59:10.741436 systemd[1]: Reached target time-set.target - System Time Set. May 9 23:59:10.748050 systemd-resolved[1306]: Positive Trust Anchors: May 9 23:59:10.748463 systemd-resolved[1306]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 9 23:59:10.748591 systemd-resolved[1306]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 9 23:59:10.754645 systemd-networkd[1373]: lo: Link UP May 9 23:59:10.754653 systemd-networkd[1373]: lo: Gained carrier May 9 23:59:10.755560 systemd-networkd[1373]: Enumeration completed May 9 23:59:10.765674 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 9 23:59:10.766726 systemd[1]: Started systemd-networkd.service - Network Configuration. May 9 23:59:10.769185 systemd-resolved[1306]: Defaulting to hostname 'linux'. May 9 23:59:10.771122 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 23:59:10.771131 systemd-networkd[1373]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 9 23:59:10.772928 systemd-networkd[1373]: eth0: Link UP May 9 23:59:10.772932 systemd-networkd[1373]: eth0: Gained carrier May 9 23:59:10.772948 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 23:59:10.773945 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
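systemd-networkd matched eth0 against /usr/lib/systemd/network/zz-default.network, evidently a catch-all shipped in the OS image, hence the note that the match is "based on potentially unpredictable interface name". Such a catch-all looks roughly like the sketch below (illustrative, not the literal contents of the shipped file):

    # zz-default.network (sketch)
    [Match]
    Name=*

    [Network]
    DHCP=yes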
May 9 23:59:10.775089 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 9 23:59:10.777157 systemd[1]: Reached target network.target - Network. May 9 23:59:10.778087 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 9 23:59:10.784458 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 9 23:59:10.788554 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 9 23:59:10.798439 systemd-networkd[1373]: eth0: DHCPv4 address 10.0.0.100/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 9 23:59:10.799772 systemd-timesyncd[1382]: Network configuration changed, trying to establish connection. May 9 23:59:10.800770 systemd-timesyncd[1382]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 9 23:59:10.800886 systemd-timesyncd[1382]: Initial clock synchronization to Fri 2025-05-09 23:59:10.735797 UTC. May 9 23:59:10.816522 lvm[1402]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 9 23:59:10.831470 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 9 23:59:10.858444 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 9 23:59:10.859679 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 9 23:59:10.860556 systemd[1]: Reached target sysinit.target - System Initialization. May 9 23:59:10.861487 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 9 23:59:10.862449 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 9 23:59:10.863604 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 9 23:59:10.864506 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 9 23:59:10.865574 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 9 23:59:10.866482 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 9 23:59:10.866522 systemd[1]: Reached target paths.target - Path Units. May 9 23:59:10.867206 systemd[1]: Reached target timers.target - Timer Units. May 9 23:59:10.869020 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 9 23:59:10.871665 systemd[1]: Starting docker.socket - Docker Socket for the API... May 9 23:59:10.882476 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 9 23:59:10.884843 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 9 23:59:10.886594 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 9 23:59:10.887843 systemd[1]: Reached target sockets.target - Socket Units. May 9 23:59:10.888834 systemd[1]: Reached target basic.target - Basic System. May 9 23:59:10.889870 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 9 23:59:10.889901 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 9 23:59:10.890924 systemd[1]: Starting containerd.service - containerd container runtime... May 9 23:59:10.893430 lvm[1410]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 9 23:59:10.893017 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
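systemd-timesyncd reports contacting 10.0.0.1:123, the same address that acts as DHCP server and gateway, so the NTP server was presumably learned from the DHCPv4 lease rather than configured statically. Static servers would instead be listed in timesyncd.conf; an illustrative sketch (example values, not this host's configuration):

    # /etc/systemd/timesyncd.conf (illustrative)
    [Time]
    NTP=0.pool.ntp.org 1.pool.ntp.org
    FallbackNTP=time.cloudflare.com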
May 9 23:59:10.897519 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 9 23:59:10.899319 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 9 23:59:10.900190 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 9 23:59:10.901264 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 9 23:59:10.911597 jq[1413]: false May 9 23:59:10.912528 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 9 23:59:10.915229 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 9 23:59:10.921481 systemd[1]: Starting systemd-logind.service - User Login Management... May 9 23:59:10.923186 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 9 23:59:10.923713 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 9 23:59:10.925484 systemd[1]: Starting update-engine.service - Update Engine... May 9 23:59:10.931322 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 9 23:59:10.933000 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 9 23:59:10.934935 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 9 23:59:10.935105 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 9 23:59:10.935376 systemd[1]: motdgen.service: Deactivated successfully. May 9 23:59:10.935528 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 9 23:59:10.935978 jq[1427]: true May 9 23:59:10.937865 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 9 23:59:10.938032 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 9 23:59:10.942660 extend-filesystems[1414]: Found loop3 May 9 23:59:10.953672 extend-filesystems[1414]: Found loop4 May 9 23:59:10.953672 extend-filesystems[1414]: Found loop5 May 9 23:59:10.953672 extend-filesystems[1414]: Found vda May 9 23:59:10.953672 extend-filesystems[1414]: Found vda1 May 9 23:59:10.953672 extend-filesystems[1414]: Found vda2 May 9 23:59:10.953672 extend-filesystems[1414]: Found vda3 May 9 23:59:10.953672 extend-filesystems[1414]: Found usr May 9 23:59:10.953672 extend-filesystems[1414]: Found vda4 May 9 23:59:10.953672 extend-filesystems[1414]: Found vda6 May 9 23:59:10.953672 extend-filesystems[1414]: Found vda7 May 9 23:59:10.953672 extend-filesystems[1414]: Found vda9 May 9 23:59:10.953672 extend-filesystems[1414]: Checking size of /dev/vda9 May 9 23:59:10.950523 (ntainerd)[1432]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 9 23:59:10.966304 jq[1430]: true May 9 23:59:10.972135 dbus-daemon[1412]: [system] SELinux support is enabled May 9 23:59:10.972614 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
May 9 23:59:10.987389 extend-filesystems[1414]: Resized partition /dev/vda9 May 9 23:59:10.991289 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 9 23:59:10.991330 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 9 23:59:10.992951 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 9 23:59:10.992971 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 9 23:59:10.995059 extend-filesystems[1445]: resize2fs 1.47.1 (20-May-2024) May 9 23:59:10.999389 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 9 23:59:10.999469 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1356) May 9 23:59:10.999808 update_engine[1426]: I20250509 23:59:10.999643 1426 main.cc:92] Flatcar Update Engine starting May 9 23:59:11.010410 systemd[1]: Started update-engine.service - Update Engine. May 9 23:59:11.010561 update_engine[1426]: I20250509 23:59:11.010474 1426 update_check_scheduler.cc:74] Next update check in 9m55s May 9 23:59:11.013597 systemd-logind[1424]: Watching system buttons on /dev/input/event0 (Power Button) May 9 23:59:11.015582 systemd-logind[1424]: New seat seat0. May 9 23:59:11.025680 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 9 23:59:11.028476 systemd[1]: Started systemd-logind.service - User Login Management. May 9 23:59:11.077438 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 9 23:59:11.092543 locksmithd[1455]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 9 23:59:11.102708 extend-filesystems[1445]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 9 23:59:11.102708 extend-filesystems[1445]: old_desc_blocks = 1, new_desc_blocks = 1 May 9 23:59:11.102708 extend-filesystems[1445]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 9 23:59:11.107244 extend-filesystems[1414]: Resized filesystem in /dev/vda9 May 9 23:59:11.104656 systemd[1]: extend-filesystems.service: Deactivated successfully. May 9 23:59:11.104829 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 9 23:59:11.109402 bash[1462]: Updated "/home/core/.ssh/authorized_keys" May 9 23:59:11.111403 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 9 23:59:11.113436 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 9 23:59:11.205560 containerd[1432]: time="2025-05-09T23:59:11.205421228Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 9 23:59:11.231095 containerd[1432]: time="2025-05-09T23:59:11.231035465Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 9 23:59:11.232568 containerd[1432]: time="2025-05-09T23:59:11.232502493Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 9 23:59:11.232568 containerd[1432]: time="2025-05-09T23:59:11.232541414Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 9 23:59:11.232568 containerd[1432]: time="2025-05-09T23:59:11.232567548Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 9 23:59:11.232773 containerd[1432]: time="2025-05-09T23:59:11.232743790Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 9 23:59:11.232773 containerd[1432]: time="2025-05-09T23:59:11.232769087Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 9 23:59:11.232843 containerd[1432]: time="2025-05-09T23:59:11.232827569Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 9 23:59:11.232863 containerd[1432]: time="2025-05-09T23:59:11.232844939Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 9 23:59:11.233045 containerd[1432]: time="2025-05-09T23:59:11.233018074Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 9 23:59:11.233045 containerd[1432]: time="2025-05-09T23:59:11.233040781Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 9 23:59:11.233089 containerd[1432]: time="2025-05-09T23:59:11.233055959Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 9 23:59:11.233089 containerd[1432]: time="2025-05-09T23:59:11.233068070Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 9 23:59:11.233161 containerd[1432]: time="2025-05-09T23:59:11.233146112Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 9 23:59:11.233388 containerd[1432]: time="2025-05-09T23:59:11.233347572Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 9 23:59:11.233498 containerd[1432]: time="2025-05-09T23:59:11.233482144Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 9 23:59:11.233526 containerd[1432]: time="2025-05-09T23:59:11.233500749Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 9 23:59:11.233627 containerd[1432]: time="2025-05-09T23:59:11.233610382Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 May 9 23:59:11.233677 containerd[1432]: time="2025-05-09T23:59:11.233665478Z" level=info msg="metadata content store policy set" policy=shared May 9 23:59:11.241692 containerd[1432]: time="2025-05-09T23:59:11.241645429Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 9 23:59:11.241779 containerd[1432]: time="2025-05-09T23:59:11.241710325Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 9 23:59:11.241779 containerd[1432]: time="2025-05-09T23:59:11.241728133Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 9 23:59:11.241779 containerd[1432]: time="2025-05-09T23:59:11.241747135Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 9 23:59:11.241779 containerd[1432]: time="2025-05-09T23:59:11.241762871Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 9 23:59:11.241976 containerd[1432]: time="2025-05-09T23:59:11.241947918Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 9 23:59:11.245168 containerd[1432]: time="2025-05-09T23:59:11.242234153Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 9 23:59:11.245168 containerd[1432]: time="2025-05-09T23:59:11.242427326Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 9 23:59:11.245168 containerd[1432]: time="2025-05-09T23:59:11.242446448Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 9 23:59:11.245168 containerd[1432]: time="2025-05-09T23:59:11.242462304Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 9 23:59:11.245168 containerd[1432]: time="2025-05-09T23:59:11.242476805Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 9 23:59:11.245168 containerd[1432]: time="2025-05-09T23:59:11.242489911Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 9 23:59:11.245168 containerd[1432]: time="2025-05-09T23:59:11.242503377Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 9 23:59:11.245168 containerd[1432]: time="2025-05-09T23:59:11.242525765Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 9 23:59:11.245168 containerd[1432]: time="2025-05-09T23:59:11.242542298Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 9 23:59:11.245168 containerd[1432]: time="2025-05-09T23:59:11.242555245Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 9 23:59:11.245168 containerd[1432]: time="2025-05-09T23:59:11.242566878Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 9 23:59:11.245168 containerd[1432]: time="2025-05-09T23:59:11.242579507Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 May 9 23:59:11.245168 containerd[1432]: time="2025-05-09T23:59:11.242599983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 9 23:59:11.245168 containerd[1432]: time="2025-05-09T23:59:11.242621137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 9 23:59:11.245560 containerd[1432]: time="2025-05-09T23:59:11.242638945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 9 23:59:11.245560 containerd[1432]: time="2025-05-09T23:59:11.242654003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 9 23:59:11.245560 containerd[1432]: time="2025-05-09T23:59:11.242666194Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 9 23:59:11.245560 containerd[1432]: time="2025-05-09T23:59:11.242680456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 9 23:59:11.245560 containerd[1432]: time="2025-05-09T23:59:11.242702486Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 9 23:59:11.245560 containerd[1432]: time="2025-05-09T23:59:11.242715473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 9 23:59:11.245560 containerd[1432]: time="2025-05-09T23:59:11.242728659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 9 23:59:11.245560 containerd[1432]: time="2025-05-09T23:59:11.242746148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 9 23:59:11.245560 containerd[1432]: time="2025-05-09T23:59:11.242760609Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 9 23:59:11.245560 containerd[1432]: time="2025-05-09T23:59:11.242771326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 9 23:59:11.245560 containerd[1432]: time="2025-05-09T23:59:11.242783556Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 9 23:59:11.245560 containerd[1432]: time="2025-05-09T23:59:11.242798017Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 9 23:59:11.245560 containerd[1432]: time="2025-05-09T23:59:11.242819689Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 9 23:59:11.245560 containerd[1432]: time="2025-05-09T23:59:11.242833751Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 9 23:59:11.245560 containerd[1432]: time="2025-05-09T23:59:11.242845065Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 9 23:59:11.245835 containerd[1432]: time="2025-05-09T23:59:11.243099988Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 9 23:59:11.245835 containerd[1432]: time="2025-05-09T23:59:11.243118632Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 9 23:59:11.245835 containerd[1432]: time="2025-05-09T23:59:11.243150382Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 9 23:59:11.245835 containerd[1432]: time="2025-05-09T23:59:11.243171337Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 9 23:59:11.245835 containerd[1432]: time="2025-05-09T23:59:11.243180619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 9 23:59:11.245835 containerd[1432]: time="2025-05-09T23:59:11.243193925Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 9 23:59:11.245835 containerd[1432]: time="2025-05-09T23:59:11.243204084Z" level=info msg="NRI interface is disabled by configuration." May 9 23:59:11.245835 containerd[1432]: time="2025-05-09T23:59:11.243217748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 9 23:59:11.245978 containerd[1432]: time="2025-05-09T23:59:11.243589793Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 9 23:59:11.245978 containerd[1432]: time="2025-05-09T23:59:11.243641304Z" level=info msg="Connect containerd service" May 9 23:59:11.245978 containerd[1432]: time="2025-05-09T23:59:11.243679110Z" level=info msg="using legacy CRI server" May 9 23:59:11.245978 containerd[1432]: time="2025-05-09T23:59:11.243686520Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 9 23:59:11.245978 containerd[1432]: time="2025-05-09T23:59:11.243928534Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 9 23:59:11.245978 containerd[1432]: time="2025-05-09T23:59:11.244635815Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 9 23:59:11.245978 containerd[1432]: time="2025-05-09T23:59:11.244866755Z" level=info msg="Start subscribing containerd event" May 9 23:59:11.245978 containerd[1432]: time="2025-05-09T23:59:11.244924002Z" level=info msg="Start recovering state" May 9 23:59:11.245978 containerd[1432]: time="2025-05-09T23:59:11.245003199Z" level=info msg="Start event monitor" May 9 23:59:11.245978 containerd[1432]: time="2025-05-09T23:59:11.245015828Z" level=info msg="Start snapshots syncer" May 9 23:59:11.245978 containerd[1432]: time="2025-05-09T23:59:11.245024632Z" level=info msg="Start cni network conf syncer for default" May 9 23:59:11.245978 containerd[1432]: time="2025-05-09T23:59:11.245032838Z" level=info msg="Start streaming server" May 9 23:59:11.249220 containerd[1432]: time="2025-05-09T23:59:11.249166290Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 9 23:59:11.249286 containerd[1432]: time="2025-05-09T23:59:11.249250587Z" level=info msg=serving... address=/run/containerd/containerd.sock May 9 23:59:11.249433 systemd[1]: Started containerd.service - containerd container runtime. May 9 23:59:11.250407 containerd[1432]: time="2025-05-09T23:59:11.250375368Z" level=info msg="containerd successfully booted in 0.046154s" May 9 23:59:11.871469 systemd-networkd[1373]: eth0: Gained IPv6LL May 9 23:59:11.878205 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 9 23:59:11.879869 systemd[1]: Reached target network-online.target - Network is Online. May 9 23:59:11.889669 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 9 23:59:11.892143 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 23:59:11.894157 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 9 23:59:11.922415 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 9 23:59:11.923799 systemd[1]: coreos-metadata.service: Deactivated successfully. May 9 23:59:11.923952 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 9 23:59:11.926766 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 9 23:59:12.122735 sshd_keygen[1433]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 9 23:59:12.142785 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
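The CNI load failure containerd reports above is expected at this point in the boot: the CRI plugin's conf dir (/etc/cni/net.d, per the config dump and the error text) is still empty, and the network plugin (here Cilium, whose pod only appears later in this log) has not yet dropped its config. A minimal log-reading sketch of the condition containerd is effectively reporting; only the directory path is taken from the log, the file-extension filter is an assumption and not containerd's exact logic:

#!/usr/bin/env python3
# Illustrative aid only: report whether any CNI network config exists yet.
# The directory comes from the containerd error above; the extension filter
# is an assumption, not containerd's implementation.
from pathlib import Path

CNI_CONF_DIR = Path("/etc/cni/net.d")

def cni_configs() -> list[Path]:
    if not CNI_CONF_DIR.is_dir():
        return []
    return sorted(p for p in CNI_CONF_DIR.iterdir()
                  if p.suffix in (".conf", ".conflist", ".json"))

if __name__ == "__main__":
    found = cni_configs()
    if not found:
        print(f"no network config found in {CNI_CONF_DIR} (matches the containerd error above)")
    else:
        for p in found:
            print(f"CNI config present: {p}")

Once the Cilium agent writes its conflist into that directory, the CRI plugin's "cni config syncer" picks it up and the error stops recurring.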
May 9 23:59:12.158663 systemd[1]: Starting issuegen.service - Generate /run/issue... May 9 23:59:12.163755 systemd[1]: issuegen.service: Deactivated successfully. May 9 23:59:12.164551 systemd[1]: Finished issuegen.service - Generate /run/issue. May 9 23:59:12.168083 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 9 23:59:12.184930 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 9 23:59:12.187656 systemd[1]: Started getty@tty1.service - Getty on tty1. May 9 23:59:12.189799 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 9 23:59:12.190898 systemd[1]: Reached target getty.target - Login Prompts. May 9 23:59:12.447379 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 23:59:12.448646 systemd[1]: Reached target multi-user.target - Multi-User System. May 9 23:59:12.451959 (kubelet)[1518]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 23:59:12.453413 systemd[1]: Startup finished in 685ms (kernel) + 4.335s (initrd) + 3.399s (userspace) = 8.421s. May 9 23:59:12.996993 kubelet[1518]: E0509 23:59:12.996934 1518 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 23:59:12.999508 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 23:59:12.999667 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 23:59:17.318472 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 9 23:59:17.319612 systemd[1]: Started sshd@0-10.0.0.100:22-10.0.0.1:49790.service - OpenSSH per-connection server daemon (10.0.0.1:49790). May 9 23:59:17.382628 sshd[1533]: Accepted publickey for core from 10.0.0.1 port 49790 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:59:17.384372 sshd-session[1533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:59:17.396444 systemd-logind[1424]: New session 1 of user core. May 9 23:59:17.397539 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 9 23:59:17.410630 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 9 23:59:17.421220 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 9 23:59:17.422936 systemd[1]: Starting user@500.service - User Manager for UID 500... May 9 23:59:17.429713 (systemd)[1537]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 9 23:59:17.503783 systemd[1537]: Queued start job for default target default.target. May 9 23:59:17.512246 systemd[1537]: Created slice app.slice - User Application Slice. May 9 23:59:17.512275 systemd[1537]: Reached target paths.target - Paths. May 9 23:59:17.512287 systemd[1537]: Reached target timers.target - Timers. May 9 23:59:17.513476 systemd[1537]: Starting dbus.socket - D-Bus User Message Bus Socket... May 9 23:59:17.522780 systemd[1537]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 9 23:59:17.522841 systemd[1537]: Reached target sockets.target - Sockets. May 9 23:59:17.522853 systemd[1537]: Reached target basic.target - Basic System. 
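The kubelet exit above (open /var/lib/kubelet/config.yaml: no such file or directory, status 1) is the normal state of a node that has not yet been joined to a cluster: that file is only written once the node is bootstrapped, which on this machine appears to happen via the /home/core/install.sh run further down in the log, after which kubelet is restarted. A throwaway pre-flight sketch mirroring the failing condition; only the path is taken from the error message:

#!/usr/bin/env python3
# Illustrative pre-flight check mirroring the failure above: kubelet exits
# with status 1 until /var/lib/kubelet/config.yaml exists.
import sys
from pathlib import Path

KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")  # path taken from the error above

if __name__ == "__main__":
    if not KUBELET_CONFIG.is_file():
        print(f"{KUBELET_CONFIG} missing; kubelet will keep failing until it is written",
              file=sys.stderr)
        sys.exit(1)
    print("kubelet config present; safe to (re)start kubelet")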
May 9 23:59:17.522887 systemd[1537]: Reached target default.target - Main User Target. May 9 23:59:17.522912 systemd[1537]: Startup finished in 87ms. May 9 23:59:17.523200 systemd[1]: Started user@500.service - User Manager for UID 500. May 9 23:59:17.531497 systemd[1]: Started session-1.scope - Session 1 of User core. May 9 23:59:17.590802 systemd[1]: Started sshd@1-10.0.0.100:22-10.0.0.1:49806.service - OpenSSH per-connection server daemon (10.0.0.1:49806). May 9 23:59:17.631447 sshd[1548]: Accepted publickey for core from 10.0.0.1 port 49806 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:59:17.632662 sshd-session[1548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:59:17.636425 systemd-logind[1424]: New session 2 of user core. May 9 23:59:17.644501 systemd[1]: Started session-2.scope - Session 2 of User core. May 9 23:59:17.696198 sshd[1550]: Connection closed by 10.0.0.1 port 49806 May 9 23:59:17.696618 sshd-session[1548]: pam_unix(sshd:session): session closed for user core May 9 23:59:17.712883 systemd[1]: sshd@1-10.0.0.100:22-10.0.0.1:49806.service: Deactivated successfully. May 9 23:59:17.714297 systemd[1]: session-2.scope: Deactivated successfully. May 9 23:59:17.716684 systemd-logind[1424]: Session 2 logged out. Waiting for processes to exit. May 9 23:59:17.718250 systemd-logind[1424]: Removed session 2. May 9 23:59:17.720461 systemd[1]: Started sshd@2-10.0.0.100:22-10.0.0.1:49808.service - OpenSSH per-connection server daemon (10.0.0.1:49808). May 9 23:59:17.760904 sshd[1555]: Accepted publickey for core from 10.0.0.1 port 49808 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:59:17.762067 sshd-session[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:59:17.766850 systemd-logind[1424]: New session 3 of user core. May 9 23:59:17.781534 systemd[1]: Started session-3.scope - Session 3 of User core. May 9 23:59:17.830919 sshd[1557]: Connection closed by 10.0.0.1 port 49808 May 9 23:59:17.831226 sshd-session[1555]: pam_unix(sshd:session): session closed for user core May 9 23:59:17.844686 systemd[1]: sshd@2-10.0.0.100:22-10.0.0.1:49808.service: Deactivated successfully. May 9 23:59:17.846055 systemd[1]: session-3.scope: Deactivated successfully. May 9 23:59:17.848427 systemd-logind[1424]: Session 3 logged out. Waiting for processes to exit. May 9 23:59:17.861613 systemd[1]: Started sshd@3-10.0.0.100:22-10.0.0.1:49814.service - OpenSSH per-connection server daemon (10.0.0.1:49814). May 9 23:59:17.862733 systemd-logind[1424]: Removed session 3. May 9 23:59:17.900812 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 49814 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:59:17.901990 sshd-session[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:59:17.905388 systemd-logind[1424]: New session 4 of user core. May 9 23:59:17.913516 systemd[1]: Started session-4.scope - Session 4 of User core. May 9 23:59:17.964740 sshd[1564]: Connection closed by 10.0.0.1 port 49814 May 9 23:59:17.965192 sshd-session[1562]: pam_unix(sshd:session): session closed for user core May 9 23:59:17.982765 systemd[1]: sshd@3-10.0.0.100:22-10.0.0.1:49814.service: Deactivated successfully. May 9 23:59:17.984151 systemd[1]: session-4.scope: Deactivated successfully. May 9 23:59:17.985348 systemd-logind[1424]: Session 4 logged out. Waiting for processes to exit. 
May 9 23:59:17.986514 systemd[1]: Started sshd@4-10.0.0.100:22-10.0.0.1:49820.service - OpenSSH per-connection server daemon (10.0.0.1:49820). May 9 23:59:17.987267 systemd-logind[1424]: Removed session 4. May 9 23:59:18.027159 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 49820 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:59:18.028310 sshd-session[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:59:18.032289 systemd-logind[1424]: New session 5 of user core. May 9 23:59:18.045542 systemd[1]: Started session-5.scope - Session 5 of User core. May 9 23:59:18.114492 sudo[1572]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 9 23:59:18.116932 sudo[1572]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 23:59:18.132240 sudo[1572]: pam_unix(sudo:session): session closed for user root May 9 23:59:18.135710 sshd[1571]: Connection closed by 10.0.0.1 port 49820 May 9 23:59:18.136548 sshd-session[1569]: pam_unix(sshd:session): session closed for user core May 9 23:59:18.144693 systemd[1]: sshd@4-10.0.0.100:22-10.0.0.1:49820.service: Deactivated successfully. May 9 23:59:18.146063 systemd[1]: session-5.scope: Deactivated successfully. May 9 23:59:18.147290 systemd-logind[1424]: Session 5 logged out. Waiting for processes to exit. May 9 23:59:18.148544 systemd[1]: Started sshd@5-10.0.0.100:22-10.0.0.1:49832.service - OpenSSH per-connection server daemon (10.0.0.1:49832). May 9 23:59:18.149293 systemd-logind[1424]: Removed session 5. May 9 23:59:18.189639 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 49832 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:59:18.190874 sshd-session[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:59:18.194420 systemd-logind[1424]: New session 6 of user core. May 9 23:59:18.209509 systemd[1]: Started session-6.scope - Session 6 of User core. May 9 23:59:18.264690 sudo[1581]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 9 23:59:18.264958 sudo[1581]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 23:59:18.268046 sudo[1581]: pam_unix(sudo:session): session closed for user root May 9 23:59:18.272544 sudo[1580]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 9 23:59:18.272809 sudo[1580]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 23:59:18.298703 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 9 23:59:18.321976 augenrules[1603]: No rules May 9 23:59:18.323238 systemd[1]: audit-rules.service: Deactivated successfully. May 9 23:59:18.323436 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 9 23:59:18.324493 sudo[1580]: pam_unix(sudo:session): session closed for user root May 9 23:59:18.326542 sshd[1579]: Connection closed by 10.0.0.1 port 49832 May 9 23:59:18.326411 sshd-session[1577]: pam_unix(sshd:session): session closed for user core May 9 23:59:18.336976 systemd[1]: sshd@5-10.0.0.100:22-10.0.0.1:49832.service: Deactivated successfully. May 9 23:59:18.338773 systemd[1]: session-6.scope: Deactivated successfully. May 9 23:59:18.341348 systemd-logind[1424]: Session 6 logged out. Waiting for processes to exit. 
May 9 23:59:18.342583 systemd[1]: Started sshd@6-10.0.0.100:22-10.0.0.1:49848.service - OpenSSH per-connection server daemon (10.0.0.1:49848). May 9 23:59:18.343320 systemd-logind[1424]: Removed session 6. May 9 23:59:18.384826 sshd[1611]: Accepted publickey for core from 10.0.0.1 port 49848 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:59:18.385936 sshd-session[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:59:18.389413 systemd-logind[1424]: New session 7 of user core. May 9 23:59:18.400547 systemd[1]: Started session-7.scope - Session 7 of User core. May 9 23:59:18.452122 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 9 23:59:18.452750 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 23:59:18.472713 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 9 23:59:18.489219 systemd[1]: coreos-metadata.service: Deactivated successfully. May 9 23:59:18.490432 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 9 23:59:19.026600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 23:59:19.043581 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 23:59:19.059120 systemd[1]: Reloading requested from client PID 1664 ('systemctl') (unit session-7.scope)... May 9 23:59:19.059136 systemd[1]: Reloading... May 9 23:59:19.125382 zram_generator::config[1702]: No configuration found. May 9 23:59:19.313609 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 23:59:19.364855 systemd[1]: Reloading finished in 305 ms. May 9 23:59:19.401511 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 9 23:59:19.401576 systemd[1]: kubelet.service: Failed with result 'signal'. May 9 23:59:19.403432 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 23:59:19.406037 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 23:59:19.506584 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 23:59:19.510494 (kubelet)[1748]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 9 23:59:19.546309 kubelet[1748]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 23:59:19.546309 kubelet[1748]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 9 23:59:19.546309 kubelet[1748]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
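The three deprecation warnings above all point at the same remedy: move the flag values into the KubeletConfiguration file passed via --config. A hedged sketch of that migration for the two flags that have config-file equivalents; the field names are as used by the KubeletConfiguration v1beta1 API around the v1.30 kubelet seen in this log and should be verified against the current reference, the values are taken from elsewhere in this log, and --pod-infra-container-image has no equivalent because, as the warning itself says, the sandbox image now comes from the CRI runtime:

#!/usr/bin/env python3
# Illustrative only: KubeletConfiguration fields that replace two of the
# deprecated kubelet flags warned about above. Verify field names against the
# KubeletConfiguration reference for the kubelet version in use.
KUBELET_CONFIG_YAML = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# replaces --container-runtime-endpoint
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
# replaces --volume-plugin-dir
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
"""

if __name__ == "__main__":
    print(KUBELET_CONFIG_YAML)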
May 9 23:59:19.547398 kubelet[1748]: I0509 23:59:19.547334 1748 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 9 23:59:20.253429 kubelet[1748]: I0509 23:59:20.253391 1748 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 9 23:59:20.254396 kubelet[1748]: I0509 23:59:20.253576 1748 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 9 23:59:20.254396 kubelet[1748]: I0509 23:59:20.253794 1748 server.go:927] "Client rotation is on, will bootstrap in background" May 9 23:59:20.305276 kubelet[1748]: I0509 23:59:20.305038 1748 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 9 23:59:20.318076 kubelet[1748]: I0509 23:59:20.317967 1748 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 9 23:59:20.319413 kubelet[1748]: I0509 23:59:20.319295 1748 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 9 23:59:20.319568 kubelet[1748]: I0509 23:59:20.319365 1748 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.100","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 9 23:59:20.319683 kubelet[1748]: I0509 23:59:20.319629 1748 topology_manager.go:138] "Creating topology manager with none policy" May 9 23:59:20.319683 kubelet[1748]: I0509 23:59:20.319639 1748 container_manager_linux.go:301] "Creating device plugin manager" May 9 23:59:20.319948 kubelet[1748]: I0509 23:59:20.319916 1748 state_mem.go:36] "Initialized new in-memory state store" May 9 23:59:20.321146 kubelet[1748]: I0509 23:59:20.321113 1748 kubelet.go:400] "Attempting to sync node with API server" May 9 23:59:20.321146 kubelet[1748]: I0509 23:59:20.321141 1748 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 9 23:59:20.321298 kubelet[1748]: I0509 23:59:20.321276 1748 kubelet.go:312] "Adding apiserver pod source" May 9 23:59:20.321407 kubelet[1748]: 
I0509 23:59:20.321390 1748 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 9 23:59:20.321623 kubelet[1748]: E0509 23:59:20.321586 1748 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:59:20.321718 kubelet[1748]: E0509 23:59:20.321702 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:59:20.322897 kubelet[1748]: I0509 23:59:20.322865 1748 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 9 23:59:20.323295 kubelet[1748]: I0509 23:59:20.323271 1748 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 9 23:59:20.323406 kubelet[1748]: W0509 23:59:20.323394 1748 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 9 23:59:20.324241 kubelet[1748]: I0509 23:59:20.324211 1748 server.go:1264] "Started kubelet" May 9 23:59:20.324739 kubelet[1748]: I0509 23:59:20.324588 1748 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 9 23:59:20.324903 kubelet[1748]: I0509 23:59:20.324884 1748 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 9 23:59:20.324949 kubelet[1748]: I0509 23:59:20.324927 1748 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 9 23:59:20.327496 kubelet[1748]: I0509 23:59:20.325641 1748 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 9 23:59:20.327496 kubelet[1748]: I0509 23:59:20.325995 1748 server.go:455] "Adding debug handlers to kubelet server" May 9 23:59:20.327496 kubelet[1748]: I0509 23:59:20.326564 1748 volume_manager.go:291] "Starting Kubelet Volume Manager" May 9 23:59:20.327496 kubelet[1748]: I0509 23:59:20.326683 1748 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 9 23:59:20.329158 kubelet[1748]: I0509 23:59:20.329127 1748 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 9 23:59:20.329541 kubelet[1748]: I0509 23:59:20.329512 1748 reconciler.go:26] "Reconciler: start to sync state" May 9 23:59:20.330925 kubelet[1748]: E0509 23:59:20.330902 1748 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 9 23:59:20.332370 kubelet[1748]: I0509 23:59:20.332327 1748 factory.go:221] Registration of the containerd container factory successfully May 9 23:59:20.333372 kubelet[1748]: I0509 23:59:20.332452 1748 factory.go:221] Registration of the systemd container factory successfully May 9 23:59:20.343884 kubelet[1748]: E0509 23:59:20.343843 1748 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.100\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" May 9 23:59:20.346159 kubelet[1748]: I0509 23:59:20.345883 1748 cpu_manager.go:214] "Starting CPU manager" policy="none" May 9 23:59:20.346159 kubelet[1748]: I0509 23:59:20.345902 1748 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 9 23:59:20.346159 kubelet[1748]: I0509 23:59:20.345920 1748 state_mem.go:36] "Initialized new in-memory state store" May 9 23:59:20.422908 kubelet[1748]: W0509 23:59:20.422451 1748 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.100" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope May 9 23:59:20.422908 kubelet[1748]: E0509 23:59:20.422492 1748 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.100" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope May 9 23:59:20.422908 kubelet[1748]: E0509 23:59:20.422574 1748 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.100.183e0152bc493080 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.100,UID:10.0.0.100,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.100,},FirstTimestamp:2025-05-09 23:59:20.324190336 +0000 UTC m=+0.810892289,LastTimestamp:2025-05-09 23:59:20.324190336 +0000 UTC m=+0.810892289,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.100,}" May 9 23:59:20.423406 kubelet[1748]: I0509 23:59:20.423387 1748 policy_none.go:49] "None policy: Start" May 9 23:59:20.424684 kubelet[1748]: I0509 23:59:20.424659 1748 memory_manager.go:170] "Starting memorymanager" policy="None" May 9 23:59:20.424684 kubelet[1748]: I0509 23:59:20.424688 1748 state_mem.go:35] "Initializing new in-memory state store" May 9 23:59:20.426300 kubelet[1748]: W0509 23:59:20.426276 1748 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope May 9 23:59:20.426430 kubelet[1748]: E0509 23:59:20.426417 1748 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope May 9 23:59:20.426647 kubelet[1748]: W0509 23:59:20.426469 1748 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope May 9 23:59:20.426647 kubelet[1748]: E0509 23:59:20.426494 1748 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope May 9 23:59:20.427301 kubelet[1748]: I0509 23:59:20.427282 1748 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.100" May 9 23:59:20.438057 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 9 23:59:20.452130 kubelet[1748]: I0509 23:59:20.452037 1748 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.100" May 9 23:59:20.453872 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 9 23:59:20.457788 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 9 23:59:20.464081 kubelet[1748]: I0509 23:59:20.464021 1748 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 9 23:59:20.465379 kubelet[1748]: I0509 23:59:20.465201 1748 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 9 23:59:20.465459 kubelet[1748]: I0509 23:59:20.465418 1748 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 9 23:59:20.465547 kubelet[1748]: I0509 23:59:20.465526 1748 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 9 23:59:20.465700 kubelet[1748]: I0509 23:59:20.465677 1748 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 9 23:59:20.466184 kubelet[1748]: I0509 23:59:20.465788 1748 status_manager.go:217] "Starting to sync pod status with apiserver" May 9 23:59:20.466184 kubelet[1748]: I0509 23:59:20.465814 1748 kubelet.go:2337] "Starting kubelet main sync loop" May 9 23:59:20.466184 kubelet[1748]: E0509 23:59:20.465861 1748 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" May 9 23:59:20.467671 kubelet[1748]: E0509 23:59:20.467636 1748 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.100\" not found" May 9 23:59:20.493822 kubelet[1748]: E0509 23:59:20.493761 1748 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.100\" not found" May 9 23:59:20.532688 sudo[1614]: pam_unix(sudo:session): session closed for user root May 9 23:59:20.533868 sshd[1613]: Connection closed by 10.0.0.1 port 49848 May 9 23:59:20.534150 sshd-session[1611]: pam_unix(sshd:session): session closed for user core May 9 23:59:20.538156 systemd[1]: sshd@6-10.0.0.100:22-10.0.0.1:49848.service: Deactivated successfully. May 9 23:59:20.539799 systemd[1]: session-7.scope: Deactivated successfully. May 9 23:59:20.540493 systemd-logind[1424]: Session 7 logged out. Waiting for processes to exit. May 9 23:59:20.541455 systemd-logind[1424]: Removed session 7. 
May 9 23:59:20.594758 kubelet[1748]: E0509 23:59:20.594694 1748 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.100\" not found" May 9 23:59:20.695050 kubelet[1748]: E0509 23:59:20.694995 1748 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.100\" not found" May 9 23:59:20.795559 kubelet[1748]: E0509 23:59:20.795411 1748 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.100\" not found" May 9 23:59:20.896174 kubelet[1748]: E0509 23:59:20.896123 1748 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.100\" not found" May 9 23:59:20.996577 kubelet[1748]: E0509 23:59:20.996534 1748 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.100\" not found" May 9 23:59:21.097041 kubelet[1748]: E0509 23:59:21.096946 1748 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.100\" not found" May 9 23:59:21.197395 kubelet[1748]: E0509 23:59:21.197348 1748 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.100\" not found" May 9 23:59:21.255822 kubelet[1748]: I0509 23:59:21.255784 1748 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" May 9 23:59:21.255973 kubelet[1748]: W0509 23:59:21.255936 1748 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 9 23:59:21.297956 kubelet[1748]: E0509 23:59:21.297926 1748 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.100\" not found" May 9 23:59:21.322280 kubelet[1748]: E0509 23:59:21.322252 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:59:21.398803 kubelet[1748]: E0509 23:59:21.398712 1748 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.100\" not found" May 9 23:59:21.499032 kubelet[1748]: E0509 23:59:21.498988 1748 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.100\" not found" May 9 23:59:21.599510 kubelet[1748]: E0509 23:59:21.599466 1748 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.100\" not found" May 9 23:59:21.700458 kubelet[1748]: I0509 23:59:21.700338 1748 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" May 9 23:59:21.700970 containerd[1432]: time="2025-05-09T23:59:21.700913969Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 9 23:59:21.701263 kubelet[1748]: I0509 23:59:21.701109 1748 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" May 9 23:59:22.322502 kubelet[1748]: I0509 23:59:22.322465 1748 apiserver.go:52] "Watching apiserver" May 9 23:59:22.322502 kubelet[1748]: E0509 23:59:22.322495 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:59:22.328986 kubelet[1748]: I0509 23:59:22.328941 1748 topology_manager.go:215] "Topology Admit Handler" podUID="6e47b212-ad2b-4337-883c-3969e347b3a3" podNamespace="kube-system" podName="cilium-gl4kf" May 9 23:59:22.329129 kubelet[1748]: I0509 23:59:22.329106 1748 topology_manager.go:215] "Topology Admit Handler" podUID="3d00d554-3d01-4f63-9781-5961397b5165" podNamespace="kube-system" podName="kube-proxy-l7jjq" May 9 23:59:22.334315 systemd[1]: Created slice kubepods-besteffort-pod3d00d554_3d01_4f63_9781_5961397b5165.slice - libcontainer container kubepods-besteffort-pod3d00d554_3d01_4f63_9781_5961397b5165.slice. May 9 23:59:22.347989 systemd[1]: Created slice kubepods-burstable-pod6e47b212_ad2b_4337_883c_3969e347b3a3.slice - libcontainer container kubepods-burstable-pod6e47b212_ad2b_4337_883c_3969e347b3a3.slice. May 9 23:59:22.427557 kubelet[1748]: I0509 23:59:22.427501 1748 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 9 23:59:22.440345 kubelet[1748]: I0509 23:59:22.440309 1748 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6e47b212-ad2b-4337-883c-3969e347b3a3-cilium-cgroup\") pod \"cilium-gl4kf\" (UID: \"6e47b212-ad2b-4337-883c-3969e347b3a3\") " pod="kube-system/cilium-gl4kf" May 9 23:59:22.440427 kubelet[1748]: I0509 23:59:22.440365 1748 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e47b212-ad2b-4337-883c-3969e347b3a3-xtables-lock\") pod \"cilium-gl4kf\" (UID: \"6e47b212-ad2b-4337-883c-3969e347b3a3\") " pod="kube-system/cilium-gl4kf" May 9 23:59:22.440427 kubelet[1748]: I0509 23:59:22.440387 1748 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d00d554-3d01-4f63-9781-5961397b5165-xtables-lock\") pod \"kube-proxy-l7jjq\" (UID: \"3d00d554-3d01-4f63-9781-5961397b5165\") " pod="kube-system/kube-proxy-l7jjq" May 9 23:59:22.440427 kubelet[1748]: I0509 23:59:22.440403 1748 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6e47b212-ad2b-4337-883c-3969e347b3a3-cni-path\") pod \"cilium-gl4kf\" (UID: \"6e47b212-ad2b-4337-883c-3969e347b3a3\") " pod="kube-system/cilium-gl4kf" May 9 23:59:22.440427 kubelet[1748]: I0509 23:59:22.440420 1748 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb9vt\" (UniqueName: \"kubernetes.io/projected/6e47b212-ad2b-4337-883c-3969e347b3a3-kube-api-access-jb9vt\") pod \"cilium-gl4kf\" (UID: \"6e47b212-ad2b-4337-883c-3969e347b3a3\") " pod="kube-system/cilium-gl4kf" May 9 23:59:22.440541 kubelet[1748]: I0509 23:59:22.440436 1748 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/3d00d554-3d01-4f63-9781-5961397b5165-lib-modules\") pod \"kube-proxy-l7jjq\" (UID: \"3d00d554-3d01-4f63-9781-5961397b5165\") " pod="kube-system/kube-proxy-l7jjq" May 9 23:59:22.440541 kubelet[1748]: I0509 23:59:22.440451 1748 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6e47b212-ad2b-4337-883c-3969e347b3a3-cilium-run\") pod \"cilium-gl4kf\" (UID: \"6e47b212-ad2b-4337-883c-3969e347b3a3\") " pod="kube-system/cilium-gl4kf" May 9 23:59:22.440541 kubelet[1748]: I0509 23:59:22.440465 1748 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6e47b212-ad2b-4337-883c-3969e347b3a3-bpf-maps\") pod \"cilium-gl4kf\" (UID: \"6e47b212-ad2b-4337-883c-3969e347b3a3\") " pod="kube-system/cilium-gl4kf" May 9 23:59:22.440541 kubelet[1748]: I0509 23:59:22.440493 1748 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6e47b212-ad2b-4337-883c-3969e347b3a3-etc-cni-netd\") pod \"cilium-gl4kf\" (UID: \"6e47b212-ad2b-4337-883c-3969e347b3a3\") " pod="kube-system/cilium-gl4kf" May 9 23:59:22.440541 kubelet[1748]: I0509 23:59:22.440509 1748 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e47b212-ad2b-4337-883c-3969e347b3a3-lib-modules\") pod \"cilium-gl4kf\" (UID: \"6e47b212-ad2b-4337-883c-3969e347b3a3\") " pod="kube-system/cilium-gl4kf" May 9 23:59:22.440541 kubelet[1748]: I0509 23:59:22.440524 1748 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6e47b212-ad2b-4337-883c-3969e347b3a3-host-proc-sys-net\") pod \"cilium-gl4kf\" (UID: \"6e47b212-ad2b-4337-883c-3969e347b3a3\") " pod="kube-system/cilium-gl4kf" May 9 23:59:22.440658 kubelet[1748]: I0509 23:59:22.440537 1748 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6e47b212-ad2b-4337-883c-3969e347b3a3-hubble-tls\") pod \"cilium-gl4kf\" (UID: \"6e47b212-ad2b-4337-883c-3969e347b3a3\") " pod="kube-system/cilium-gl4kf" May 9 23:59:22.440658 kubelet[1748]: I0509 23:59:22.440554 1748 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3d00d554-3d01-4f63-9781-5961397b5165-kube-proxy\") pod \"kube-proxy-l7jjq\" (UID: \"3d00d554-3d01-4f63-9781-5961397b5165\") " pod="kube-system/kube-proxy-l7jjq" May 9 23:59:22.440658 kubelet[1748]: I0509 23:59:22.440570 1748 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6e47b212-ad2b-4337-883c-3969e347b3a3-hostproc\") pod \"cilium-gl4kf\" (UID: \"6e47b212-ad2b-4337-883c-3969e347b3a3\") " pod="kube-system/cilium-gl4kf" May 9 23:59:22.440658 kubelet[1748]: I0509 23:59:22.440585 1748 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6e47b212-ad2b-4337-883c-3969e347b3a3-clustermesh-secrets\") pod \"cilium-gl4kf\" (UID: \"6e47b212-ad2b-4337-883c-3969e347b3a3\") " pod="kube-system/cilium-gl4kf" May 9 23:59:22.440658 kubelet[1748]: 
I0509 23:59:22.440601 1748 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6e47b212-ad2b-4337-883c-3969e347b3a3-cilium-config-path\") pod \"cilium-gl4kf\" (UID: \"6e47b212-ad2b-4337-883c-3969e347b3a3\") " pod="kube-system/cilium-gl4kf" May 9 23:59:22.440658 kubelet[1748]: I0509 23:59:22.440615 1748 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6e47b212-ad2b-4337-883c-3969e347b3a3-host-proc-sys-kernel\") pod \"cilium-gl4kf\" (UID: \"6e47b212-ad2b-4337-883c-3969e347b3a3\") " pod="kube-system/cilium-gl4kf" May 9 23:59:22.440777 kubelet[1748]: I0509 23:59:22.440632 1748 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgcgn\" (UniqueName: \"kubernetes.io/projected/3d00d554-3d01-4f63-9781-5961397b5165-kube-api-access-bgcgn\") pod \"kube-proxy-l7jjq\" (UID: \"3d00d554-3d01-4f63-9781-5961397b5165\") " pod="kube-system/kube-proxy-l7jjq" May 9 23:59:22.646562 kubelet[1748]: E0509 23:59:22.646176 1748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:59:22.647387 containerd[1432]: time="2025-05-09T23:59:22.646951990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l7jjq,Uid:3d00d554-3d01-4f63-9781-5961397b5165,Namespace:kube-system,Attempt:0,}" May 9 23:59:22.663632 kubelet[1748]: E0509 23:59:22.663598 1748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:59:22.664070 containerd[1432]: time="2025-05-09T23:59:22.664024366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gl4kf,Uid:6e47b212-ad2b-4337-883c-3969e347b3a3,Namespace:kube-system,Attempt:0,}" May 9 23:59:23.198004 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4289134584.mount: Deactivated successfully. 
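The block of reconciler_common entries above is one line per volume the kubelet attaches for the two pods it has just admitted, cilium-gl4kf and kube-proxy-l7jjq. When reading journals like this it can help to collapse those entries into a per-pod volume list; a small parsing sketch written against the exact quoting seen in these lines (a log-reading aid only, not part of any component in this log), reading journal text on stdin:

#!/usr/bin/env python3
# Log-reading aid (not part of kubelet): group the VerifyControllerAttachedVolume
# entries above into a per-pod list of volume names.
import re
import sys
from collections import defaultdict

# Matches e.g.: ... started for volume \"bpf-maps\" ... pod="kube-system/cilium-gl4kf"
VOLUME_RE = re.compile(r'started for volume \\"([^"\\]+)\\".*?pod="([^"]+)"')

def volumes_by_pod(lines):
    table = defaultdict(list)
    for line in lines:
        for m in VOLUME_RE.finditer(line):
            volume, pod = m.groups()
            table[pod].append(volume)
    return table

if __name__ == "__main__":
    for pod, volumes in sorted(volumes_by_pod(sys.stdin).items()):
        print(f"{pod}: {', '.join(volumes)}")

Run against this journal it would show the kube-proxy pod mounting only xtables-lock, lib-modules, its config map and its API token, while the Cilium pod additionally mounts the BPF, cgroup and /proc host paths plus its TLS and clustermesh secrets, which matches the volume names listed above.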
May 9 23:59:23.207574 containerd[1432]: time="2025-05-09T23:59:23.207523387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 23:59:23.209168 containerd[1432]: time="2025-05-09T23:59:23.209118799Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" May 9 23:59:23.213016 containerd[1432]: time="2025-05-09T23:59:23.212947230Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 23:59:23.213828 containerd[1432]: time="2025-05-09T23:59:23.213770169Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 9 23:59:23.214438 containerd[1432]: time="2025-05-09T23:59:23.214390408Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 23:59:23.219158 containerd[1432]: time="2025-05-09T23:59:23.219106936Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 23:59:23.219876 containerd[1432]: time="2025-05-09T23:59:23.219761311Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 572.726325ms" May 9 23:59:23.222254 containerd[1432]: time="2025-05-09T23:59:23.222216154Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 558.103085ms" May 9 23:59:23.326524 kubelet[1748]: E0509 23:59:23.326456 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:59:23.372763 containerd[1432]: time="2025-05-09T23:59:23.372572684Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:59:23.373194 containerd[1432]: time="2025-05-09T23:59:23.373137108Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:59:23.373194 containerd[1432]: time="2025-05-09T23:59:23.373157270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:59:23.373305 containerd[1432]: time="2025-05-09T23:59:23.373242151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:59:23.378380 containerd[1432]: time="2025-05-09T23:59:23.375923889Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:59:23.378380 containerd[1432]: time="2025-05-09T23:59:23.376028533Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:59:23.378380 containerd[1432]: time="2025-05-09T23:59:23.376065544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:59:23.378380 containerd[1432]: time="2025-05-09T23:59:23.376678077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:59:23.463602 systemd[1]: Started cri-containerd-67a88c1b002f5464df5a49373f9110f2854b2d1ea1e051fb8d6126ef4bee1cad.scope - libcontainer container 67a88c1b002f5464df5a49373f9110f2854b2d1ea1e051fb8d6126ef4bee1cad. May 9 23:59:23.465345 systemd[1]: Started cri-containerd-cc647c6cfdcb3a901ca3f9ab5f34638cde37c272935df568098f3cf136fc1611.scope - libcontainer container cc647c6cfdcb3a901ca3f9ab5f34638cde37c272935df568098f3cf136fc1611. May 9 23:59:23.486675 containerd[1432]: time="2025-05-09T23:59:23.486635257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l7jjq,Uid:3d00d554-3d01-4f63-9781-5961397b5165,Namespace:kube-system,Attempt:0,} returns sandbox id \"67a88c1b002f5464df5a49373f9110f2854b2d1ea1e051fb8d6126ef4bee1cad\"" May 9 23:59:23.488159 kubelet[1748]: E0509 23:59:23.488121 1748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:59:23.489503 containerd[1432]: time="2025-05-09T23:59:23.489463321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gl4kf,Uid:6e47b212-ad2b-4337-883c-3969e347b3a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc647c6cfdcb3a901ca3f9ab5f34638cde37c272935df568098f3cf136fc1611\"" May 9 23:59:23.489642 containerd[1432]: time="2025-05-09T23:59:23.489547923Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 9 23:59:23.490189 kubelet[1748]: E0509 23:59:23.490169 1748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:59:24.326647 kubelet[1748]: E0509 23:59:24.326585 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:59:24.472681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2009993496.mount: Deactivated successfully. 
May 9 23:59:24.677427 containerd[1432]: time="2025-05-09T23:59:24.677292687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:59:24.678391 containerd[1432]: time="2025-05-09T23:59:24.678270011Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775707" May 9 23:59:24.679003 containerd[1432]: time="2025-05-09T23:59:24.678970462Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:59:24.681708 containerd[1432]: time="2025-05-09T23:59:24.681498425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:59:24.683021 containerd[1432]: time="2025-05-09T23:59:24.682988809Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.19337397s" May 9 23:59:24.683143 containerd[1432]: time="2025-05-09T23:59:24.683125449Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\"" May 9 23:59:24.684441 containerd[1432]: time="2025-05-09T23:59:24.684374896Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 9 23:59:24.686187 containerd[1432]: time="2025-05-09T23:59:24.686158565Z" level=info msg="CreateContainer within sandbox \"67a88c1b002f5464df5a49373f9110f2854b2d1ea1e051fb8d6126ef4bee1cad\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 9 23:59:24.699682 containerd[1432]: time="2025-05-09T23:59:24.699633911Z" level=info msg="CreateContainer within sandbox \"67a88c1b002f5464df5a49373f9110f2854b2d1ea1e051fb8d6126ef4bee1cad\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"721cd80dca7561f1c8afd9b8346d72143f5160e8f57d1843a8d221ccdee02895\"" May 9 23:59:24.700299 containerd[1432]: time="2025-05-09T23:59:24.700268318Z" level=info msg="StartContainer for \"721cd80dca7561f1c8afd9b8346d72143f5160e8f57d1843a8d221ccdee02895\"" May 9 23:59:24.726530 systemd[1]: Started cri-containerd-721cd80dca7561f1c8afd9b8346d72143f5160e8f57d1843a8d221ccdee02895.scope - libcontainer container 721cd80dca7561f1c8afd9b8346d72143f5160e8f57d1843a8d221ccdee02895. 
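The pull record above contains enough to estimate registry throughput for this node: 25,774,724 bytes in 1.19337397 s is roughly 21.6 MB/s (about 20.6 MiB/s). The arithmetic, as a throwaway sketch with both figures copied from the log:

#!/usr/bin/env python3
# Back-of-the-envelope throughput for the kube-proxy image pull logged above.
# Size and duration are copied from the log; nothing else is measured here.
SIZE_BYTES = 25_774_724   # size reported for kube-proxy:v1.30.12
DURATION_S = 1.19337397   # "... in 1.19337397s"

if __name__ == "__main__":
    rate = SIZE_BYTES / DURATION_S
    print(f"{rate / 1e6:.1f} MB/s ({rate / 2**20:.1f} MiB/s)")

The same arithmetic applied to the larger Cilium image pull later in this log (157,636,062 bytes in 3.814016221 s) works out to roughly 41 MB/s, so the two pulls are in the same ballpark.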
May 9 23:59:24.750349 containerd[1432]: time="2025-05-09T23:59:24.750299737Z" level=info msg="StartContainer for \"721cd80dca7561f1c8afd9b8346d72143f5160e8f57d1843a8d221ccdee02895\" returns successfully" May 9 23:59:25.326765 kubelet[1748]: E0509 23:59:25.326682 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:59:25.480659 kubelet[1748]: E0509 23:59:25.480613 1748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:59:25.493438 kubelet[1748]: I0509 23:59:25.493323 1748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-l7jjq" podStartSLOduration=4.298323049 podStartE2EDuration="5.49330766s" podCreationTimestamp="2025-05-09 23:59:20 +0000 UTC" firstStartedPulling="2025-05-09 23:59:23.489169432 +0000 UTC m=+3.975871345" lastFinishedPulling="2025-05-09 23:59:24.684154003 +0000 UTC m=+5.170855956" observedRunningTime="2025-05-09 23:59:25.493198959 +0000 UTC m=+5.979900912" watchObservedRunningTime="2025-05-09 23:59:25.49330766 +0000 UTC m=+5.980009613" May 9 23:59:26.327853 kubelet[1748]: E0509 23:59:26.327792 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:59:26.481484 kubelet[1748]: E0509 23:59:26.481452 1748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:59:27.273348 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4167322003.mount: Deactivated successfully. May 9 23:59:27.328694 kubelet[1748]: E0509 23:59:27.328650 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:59:28.329170 kubelet[1748]: E0509 23:59:28.329122 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:59:28.495231 containerd[1432]: time="2025-05-09T23:59:28.495177980Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:59:28.495814 containerd[1432]: time="2025-05-09T23:59:28.495764705Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" May 9 23:59:28.496933 containerd[1432]: time="2025-05-09T23:59:28.496907636Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:59:28.498516 containerd[1432]: time="2025-05-09T23:59:28.498432409Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 3.814016221s" May 9 23:59:28.498516 containerd[1432]: time="2025-05-09T23:59:28.498465365Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 9 23:59:28.500931 containerd[1432]: time="2025-05-09T23:59:28.500899425Z" level=info msg="CreateContainer within sandbox \"cc647c6cfdcb3a901ca3f9ab5f34638cde37c272935df568098f3cf136fc1611\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 9 23:59:28.510148 containerd[1432]: time="2025-05-09T23:59:28.510101513Z" level=info msg="CreateContainer within sandbox \"cc647c6cfdcb3a901ca3f9ab5f34638cde37c272935df568098f3cf136fc1611\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6aaa22401ef1fa8e7f676a4333ca18af9e0b9f3fe2e18e6bb7d9e8a1be79a1cf\"" May 9 23:59:28.510638 containerd[1432]: time="2025-05-09T23:59:28.510604591Z" level=info msg="StartContainer for \"6aaa22401ef1fa8e7f676a4333ca18af9e0b9f3fe2e18e6bb7d9e8a1be79a1cf\"" May 9 23:59:28.546562 systemd[1]: Started cri-containerd-6aaa22401ef1fa8e7f676a4333ca18af9e0b9f3fe2e18e6bb7d9e8a1be79a1cf.scope - libcontainer container 6aaa22401ef1fa8e7f676a4333ca18af9e0b9f3fe2e18e6bb7d9e8a1be79a1cf. May 9 23:59:28.567248 containerd[1432]: time="2025-05-09T23:59:28.567166647Z" level=info msg="StartContainer for \"6aaa22401ef1fa8e7f676a4333ca18af9e0b9f3fe2e18e6bb7d9e8a1be79a1cf\" returns successfully" May 9 23:59:28.612860 systemd[1]: cri-containerd-6aaa22401ef1fa8e7f676a4333ca18af9e0b9f3fe2e18e6bb7d9e8a1be79a1cf.scope: Deactivated successfully. May 9 23:59:28.812165 containerd[1432]: time="2025-05-09T23:59:28.812100107Z" level=info msg="shim disconnected" id=6aaa22401ef1fa8e7f676a4333ca18af9e0b9f3fe2e18e6bb7d9e8a1be79a1cf namespace=k8s.io May 9 23:59:28.812165 containerd[1432]: time="2025-05-09T23:59:28.812156670Z" level=warning msg="cleaning up after shim disconnected" id=6aaa22401ef1fa8e7f676a4333ca18af9e0b9f3fe2e18e6bb7d9e8a1be79a1cf namespace=k8s.io May 9 23:59:28.812165 containerd[1432]: time="2025-05-09T23:59:28.812164500Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 23:59:29.330055 kubelet[1748]: E0509 23:59:29.330001 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:59:29.487408 kubelet[1748]: E0509 23:59:29.487139 1748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:59:29.488923 containerd[1432]: time="2025-05-09T23:59:29.488875329Z" level=info msg="CreateContainer within sandbox \"cc647c6cfdcb3a901ca3f9ab5f34638cde37c272935df568098f3cf136fc1611\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 9 23:59:29.500565 containerd[1432]: time="2025-05-09T23:59:29.500518736Z" level=info msg="CreateContainer within sandbox \"cc647c6cfdcb3a901ca3f9ab5f34638cde37c272935df568098f3cf136fc1611\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7b15a3c93546175a991d0af36f8ef4efccef21309a8fd05b7fc155016a1e1cfd\"" May 9 23:59:29.501155 containerd[1432]: time="2025-05-09T23:59:29.501073591Z" level=info msg="StartContainer for \"7b15a3c93546175a991d0af36f8ef4efccef21309a8fd05b7fc155016a1e1cfd\"" May 9 23:59:29.507534 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6aaa22401ef1fa8e7f676a4333ca18af9e0b9f3fe2e18e6bb7d9e8a1be79a1cf-rootfs.mount: Deactivated successfully. 
May 9 23:59:29.532530 systemd[1]: Started cri-containerd-7b15a3c93546175a991d0af36f8ef4efccef21309a8fd05b7fc155016a1e1cfd.scope - libcontainer container 7b15a3c93546175a991d0af36f8ef4efccef21309a8fd05b7fc155016a1e1cfd. May 9 23:59:29.552558 containerd[1432]: time="2025-05-09T23:59:29.552487665Z" level=info msg="StartContainer for \"7b15a3c93546175a991d0af36f8ef4efccef21309a8fd05b7fc155016a1e1cfd\" returns successfully" May 9 23:59:29.580638 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 9 23:59:29.580858 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 9 23:59:29.580925 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 9 23:59:29.588656 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 9 23:59:29.588833 systemd[1]: cri-containerd-7b15a3c93546175a991d0af36f8ef4efccef21309a8fd05b7fc155016a1e1cfd.scope: Deactivated successfully. May 9 23:59:29.600689 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7b15a3c93546175a991d0af36f8ef4efccef21309a8fd05b7fc155016a1e1cfd-rootfs.mount: Deactivated successfully. May 9 23:59:29.602003 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 9 23:59:29.627864 containerd[1432]: time="2025-05-09T23:59:29.627795302Z" level=info msg="shim disconnected" id=7b15a3c93546175a991d0af36f8ef4efccef21309a8fd05b7fc155016a1e1cfd namespace=k8s.io May 9 23:59:29.627864 containerd[1432]: time="2025-05-09T23:59:29.627852868Z" level=warning msg="cleaning up after shim disconnected" id=7b15a3c93546175a991d0af36f8ef4efccef21309a8fd05b7fc155016a1e1cfd namespace=k8s.io May 9 23:59:29.627864 containerd[1432]: time="2025-05-09T23:59:29.627862097Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 23:59:30.331064 kubelet[1748]: E0509 23:59:30.331014 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:59:30.490198 kubelet[1748]: E0509 23:59:30.490169 1748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:59:30.496059 containerd[1432]: time="2025-05-09T23:59:30.494507801Z" level=info msg="CreateContainer within sandbox \"cc647c6cfdcb3a901ca3f9ab5f34638cde37c272935df568098f3cf136fc1611\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 9 23:59:30.509021 containerd[1432]: time="2025-05-09T23:59:30.508959268Z" level=info msg="CreateContainer within sandbox \"cc647c6cfdcb3a901ca3f9ab5f34638cde37c272935df568098f3cf136fc1611\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d3658a6033e146490a42552addf4ca3196266235133eeffa35f238933868f117\"" May 9 23:59:30.510049 containerd[1432]: time="2025-05-09T23:59:30.509666546Z" level=info msg="StartContainer for \"d3658a6033e146490a42552addf4ca3196266235133eeffa35f238933868f117\"" May 9 23:59:30.547602 systemd[1]: Started cri-containerd-d3658a6033e146490a42552addf4ca3196266235133eeffa35f238933868f117.scope - libcontainer container d3658a6033e146490a42552addf4ca3196266235133eeffa35f238933868f117. May 9 23:59:30.574643 containerd[1432]: time="2025-05-09T23:59:30.571575688Z" level=info msg="StartContainer for \"d3658a6033e146490a42552addf4ca3196266235133eeffa35f238933868f117\" returns successfully" May 9 23:59:30.608794 systemd[1]: cri-containerd-d3658a6033e146490a42552addf4ca3196266235133eeffa35f238933868f117.scope: Deactivated successfully. 
May 9 23:59:30.625771 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d3658a6033e146490a42552addf4ca3196266235133eeffa35f238933868f117-rootfs.mount: Deactivated successfully. May 9 23:59:30.634791 containerd[1432]: time="2025-05-09T23:59:30.634599582Z" level=info msg="shim disconnected" id=d3658a6033e146490a42552addf4ca3196266235133eeffa35f238933868f117 namespace=k8s.io May 9 23:59:30.634791 containerd[1432]: time="2025-05-09T23:59:30.634656235Z" level=warning msg="cleaning up after shim disconnected" id=d3658a6033e146490a42552addf4ca3196266235133eeffa35f238933868f117 namespace=k8s.io May 9 23:59:30.634791 containerd[1432]: time="2025-05-09T23:59:30.634664585Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 23:59:31.331817 kubelet[1748]: E0509 23:59:31.331738 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:59:31.493583 kubelet[1748]: E0509 23:59:31.493550 1748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:59:31.495379 containerd[1432]: time="2025-05-09T23:59:31.495331811Z" level=info msg="CreateContainer within sandbox \"cc647c6cfdcb3a901ca3f9ab5f34638cde37c272935df568098f3cf136fc1611\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 9 23:59:31.511930 containerd[1432]: time="2025-05-09T23:59:31.511874221Z" level=info msg="CreateContainer within sandbox \"cc647c6cfdcb3a901ca3f9ab5f34638cde37c272935df568098f3cf136fc1611\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4e6ca90602e71b88be08f0a4b211751d22c2e0a973c0cef4e6069c8313ea664c\"" May 9 23:59:31.516261 containerd[1432]: time="2025-05-09T23:59:31.515884583Z" level=info msg="StartContainer for \"4e6ca90602e71b88be08f0a4b211751d22c2e0a973c0cef4e6069c8313ea664c\"" May 9 23:59:31.533577 systemd[1]: run-containerd-runc-k8s.io-4e6ca90602e71b88be08f0a4b211751d22c2e0a973c0cef4e6069c8313ea664c-runc.l9YXEK.mount: Deactivated successfully. May 9 23:59:31.542573 systemd[1]: Started cri-containerd-4e6ca90602e71b88be08f0a4b211751d22c2e0a973c0cef4e6069c8313ea664c.scope - libcontainer container 4e6ca90602e71b88be08f0a4b211751d22c2e0a973c0cef4e6069c8313ea664c. May 9 23:59:31.581540 systemd[1]: cri-containerd-4e6ca90602e71b88be08f0a4b211751d22c2e0a973c0cef4e6069c8313ea664c.scope: Deactivated successfully. May 9 23:59:31.585697 containerd[1432]: time="2025-05-09T23:59:31.585206982Z" level=info msg="StartContainer for \"4e6ca90602e71b88be08f0a4b211751d22c2e0a973c0cef4e6069c8313ea664c\" returns successfully" May 9 23:59:31.602174 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e6ca90602e71b88be08f0a4b211751d22c2e0a973c0cef4e6069c8313ea664c-rootfs.mount: Deactivated successfully. 
May 9 23:59:31.610129 containerd[1432]: time="2025-05-09T23:59:31.610075335Z" level=info msg="shim disconnected" id=4e6ca90602e71b88be08f0a4b211751d22c2e0a973c0cef4e6069c8313ea664c namespace=k8s.io May 9 23:59:31.610520 containerd[1432]: time="2025-05-09T23:59:31.610301523Z" level=warning msg="cleaning up after shim disconnected" id=4e6ca90602e71b88be08f0a4b211751d22c2e0a973c0cef4e6069c8313ea664c namespace=k8s.io May 9 23:59:31.610520 containerd[1432]: time="2025-05-09T23:59:31.610320661Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 23:59:32.332708 kubelet[1748]: E0509 23:59:32.332659 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:59:32.499517 kubelet[1748]: E0509 23:59:32.499472 1748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:59:32.502523 containerd[1432]: time="2025-05-09T23:59:32.502453083Z" level=info msg="CreateContainer within sandbox \"cc647c6cfdcb3a901ca3f9ab5f34638cde37c272935df568098f3cf136fc1611\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 9 23:59:32.522758 containerd[1432]: time="2025-05-09T23:59:32.522712358Z" level=info msg="CreateContainer within sandbox \"cc647c6cfdcb3a901ca3f9ab5f34638cde37c272935df568098f3cf136fc1611\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"abb7e63d494d7be10a73e31131c1143d70cb39a711da91d3efa3f04c857dd7c3\"" May 9 23:59:32.523575 containerd[1432]: time="2025-05-09T23:59:32.523409708Z" level=info msg="StartContainer for \"abb7e63d494d7be10a73e31131c1143d70cb39a711da91d3efa3f04c857dd7c3\"" May 9 23:59:32.547329 systemd[1]: run-containerd-runc-k8s.io-abb7e63d494d7be10a73e31131c1143d70cb39a711da91d3efa3f04c857dd7c3-runc.NPDBaf.mount: Deactivated successfully. May 9 23:59:32.557591 systemd[1]: Started cri-containerd-abb7e63d494d7be10a73e31131c1143d70cb39a711da91d3efa3f04c857dd7c3.scope - libcontainer container abb7e63d494d7be10a73e31131c1143d70cb39a711da91d3efa3f04c857dd7c3. 
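At this point the containerd entries have shown CreateContainer requests against the same sandbox (cc647c6cfdcb…) for mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs and clean-cilium-state, with each short-lived container exiting (scope deactivated, shim disconnected) before the next is created; the cilium-agent container follows next. To pull that ordering out of a saved copy of this journal, a small regex over the CreateContainer messages is enough. The sketch below is a minimal example under the assumption that the log has been saved to a file called journal.log (a hypothetical name); it relies only on the &ContainerMetadata{Name:...,Attempt:...} format visible in the messages above.

import re

# Matches the container name and attempt inside containerd's CreateContainer
# messages, e.g. ... for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}
PATTERN = re.compile(r"for container &ContainerMetadata\{Name:([^,]+),Attempt:(\d+)")

def creation_order(path="journal.log"):
    """Yield (name, attempt) pairs in the order CreateContainer requests appear."""
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = PATTERN.search(line)
            if m:
                yield m.group(1), int(m.group(2))

if __name__ == "__main__":
    for name, attempt in creation_order():
        print(f"{name} (attempt {attempt})")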
May 9 23:59:32.588388 containerd[1432]: time="2025-05-09T23:59:32.588170844Z" level=info msg="StartContainer for \"abb7e63d494d7be10a73e31131c1143d70cb39a711da91d3efa3f04c857dd7c3\" returns successfully" May 9 23:59:32.707637 kubelet[1748]: I0509 23:59:32.707607 1748 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 9 23:59:33.333532 kubelet[1748]: E0509 23:59:33.333487 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:59:33.504665 kubelet[1748]: E0509 23:59:33.504634 1748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:59:33.807395 kernel: Initializing XFRM netlink socket May 9 23:59:34.334448 kubelet[1748]: E0509 23:59:34.334392 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:59:34.506565 kubelet[1748]: E0509 23:59:34.506478 1748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:59:35.335350 kubelet[1748]: E0509 23:59:35.335290 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:59:35.487493 systemd-networkd[1373]: cilium_host: Link UP May 9 23:59:35.487980 systemd-networkd[1373]: cilium_net: Link UP May 9 23:59:35.488948 systemd-networkd[1373]: cilium_net: Gained carrier May 9 23:59:35.489122 systemd-networkd[1373]: cilium_host: Gained carrier May 9 23:59:35.509992 kubelet[1748]: E0509 23:59:35.509672 1748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:59:35.662342 systemd-networkd[1373]: cilium_vxlan: Link UP May 9 23:59:35.662350 systemd-networkd[1373]: cilium_vxlan: Gained carrier May 9 23:59:35.674514 systemd-networkd[1373]: cilium_net: Gained IPv6LL May 9 23:59:36.336487 kubelet[1748]: E0509 23:59:36.336436 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:59:36.511582 systemd-networkd[1373]: cilium_host: Gained IPv6LL May 9 23:59:36.531250 kernel: NET: Registered PF_ALG protocol family May 9 23:59:36.561006 kubelet[1748]: E0509 23:59:36.560955 1748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:59:36.780298 kubelet[1748]: I0509 23:59:36.780236 1748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gl4kf" podStartSLOduration=11.771217602 podStartE2EDuration="16.780215996s" podCreationTimestamp="2025-05-09 23:59:20 +0000 UTC" firstStartedPulling="2025-05-09 23:59:23.490674893 +0000 UTC m=+3.977376846" lastFinishedPulling="2025-05-09 23:59:28.499673287 +0000 UTC m=+8.986375240" observedRunningTime="2025-05-09 23:59:33.523737165 +0000 UTC m=+14.010439198" watchObservedRunningTime="2025-05-09 23:59:36.780215996 +0000 UTC m=+17.266917949" May 9 23:59:36.780621 kubelet[1748]: I0509 23:59:36.780600 1748 topology_manager.go:215] "Topology Admit Handler" podUID="5547fd53-e7b6-4033-b48e-ee675fb11c39" podNamespace="default" podName="nginx-deployment-85f456d6dd-fv5cn" May 9 
23:59:36.786041 systemd[1]: Created slice kubepods-besteffort-pod5547fd53_e7b6_4033_b48e_ee675fb11c39.slice - libcontainer container kubepods-besteffort-pod5547fd53_e7b6_4033_b48e_ee675fb11c39.slice. May 9 23:59:36.826793 kubelet[1748]: I0509 23:59:36.826743 1748 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9tds\" (UniqueName: \"kubernetes.io/projected/5547fd53-e7b6-4033-b48e-ee675fb11c39-kube-api-access-c9tds\") pod \"nginx-deployment-85f456d6dd-fv5cn\" (UID: \"5547fd53-e7b6-4033-b48e-ee675fb11c39\") " pod="default/nginx-deployment-85f456d6dd-fv5cn" May 9 23:59:37.092701 containerd[1432]: time="2025-05-09T23:59:37.092383949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-fv5cn,Uid:5547fd53-e7b6-4033-b48e-ee675fb11c39,Namespace:default,Attempt:0,}" May 9 23:59:37.183725 systemd-networkd[1373]: lxc_health: Link UP May 9 23:59:37.195304 systemd-networkd[1373]: lxc_health: Gained carrier May 9 23:59:37.336773 kubelet[1748]: E0509 23:59:37.336716 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:59:37.407491 systemd-networkd[1373]: cilium_vxlan: Gained IPv6LL May 9 23:59:37.659982 systemd-networkd[1373]: lxc65aa03386a21: Link UP May 9 23:59:37.669524 kernel: eth0: renamed from tmpcff3e May 9 23:59:37.683549 systemd-networkd[1373]: lxc65aa03386a21: Gained carrier May 9 23:59:38.337137 kubelet[1748]: E0509 23:59:38.337087 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:59:38.665719 kubelet[1748]: E0509 23:59:38.665684 1748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:59:39.007535 systemd-networkd[1373]: lxc_health: Gained IPv6LL May 9 23:59:39.135541 systemd-networkd[1373]: lxc65aa03386a21: Gained IPv6LL May 9 23:59:39.337628 kubelet[1748]: E0509 23:59:39.337381 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:59:39.517466 kubelet[1748]: E0509 23:59:39.517387 1748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:59:40.321791 kubelet[1748]: E0509 23:59:40.321741 1748 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:59:40.338159 kubelet[1748]: E0509 23:59:40.338090 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:59:41.338835 kubelet[1748]: E0509 23:59:41.338783 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:59:41.473448 containerd[1432]: time="2025-05-09T23:59:41.473261289Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:59:41.473448 containerd[1432]: time="2025-05-09T23:59:41.473319495Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:59:41.473448 containerd[1432]: time="2025-05-09T23:59:41.473331008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:59:41.473448 containerd[1432]: time="2025-05-09T23:59:41.473424074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:59:41.496570 systemd[1]: Started cri-containerd-cff3ed4c27c0eb18bec0499ceed56f3be78ba866c5b4c2a1528bcdf2e2ddb86a.scope - libcontainer container cff3ed4c27c0eb18bec0499ceed56f3be78ba866c5b4c2a1528bcdf2e2ddb86a. May 9 23:59:41.507099 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 9 23:59:41.523111 containerd[1432]: time="2025-05-09T23:59:41.523066619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-fv5cn,Uid:5547fd53-e7b6-4033-b48e-ee675fb11c39,Namespace:default,Attempt:0,} returns sandbox id \"cff3ed4c27c0eb18bec0499ceed56f3be78ba866c5b4c2a1528bcdf2e2ddb86a\"" May 9 23:59:41.524768 containerd[1432]: time="2025-05-09T23:59:41.524739440Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 9 23:59:42.339116 kubelet[1748]: E0509 23:59:42.339072 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:59:43.183447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1224233951.mount: Deactivated successfully. May 9 23:59:43.339576 kubelet[1748]: E0509 23:59:43.339525 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:59:43.940672 containerd[1432]: time="2025-05-09T23:59:43.940622675Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:59:43.941671 containerd[1432]: time="2025-05-09T23:59:43.941590897Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=69948859" May 9 23:59:43.942327 containerd[1432]: time="2025-05-09T23:59:43.942293735Z" level=info msg="ImageCreate event name:\"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:59:43.947862 containerd[1432]: time="2025-05-09T23:59:43.946786105Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:59:43.947862 containerd[1432]: time="2025-05-09T23:59:43.947819253Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\", size \"69948737\" in 2.423042914s" May 9 23:59:43.948016 containerd[1432]: time="2025-05-09T23:59:43.947911446Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\"" May 9 23:59:43.950383 containerd[1432]: time="2025-05-09T23:59:43.950290022Z" level=info msg="CreateContainer within sandbox \"cff3ed4c27c0eb18bec0499ceed56f3be78ba866c5b4c2a1528bcdf2e2ddb86a\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" May 9 23:59:43.961340 containerd[1432]: time="2025-05-09T23:59:43.961289365Z" level=info msg="CreateContainer within sandbox 
\"cff3ed4c27c0eb18bec0499ceed56f3be78ba866c5b4c2a1528bcdf2e2ddb86a\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"1177cc59f4a546e563841524fb97324e63e1de28894bd19de4099697b4d41d43\"" May 9 23:59:43.961859 containerd[1432]: time="2025-05-09T23:59:43.961829927Z" level=info msg="StartContainer for \"1177cc59f4a546e563841524fb97324e63e1de28894bd19de4099697b4d41d43\"" May 9 23:59:44.001567 systemd[1]: Started cri-containerd-1177cc59f4a546e563841524fb97324e63e1de28894bd19de4099697b4d41d43.scope - libcontainer container 1177cc59f4a546e563841524fb97324e63e1de28894bd19de4099697b4d41d43. May 9 23:59:44.029869 containerd[1432]: time="2025-05-09T23:59:44.029770894Z" level=info msg="StartContainer for \"1177cc59f4a546e563841524fb97324e63e1de28894bd19de4099697b4d41d43\" returns successfully" May 9 23:59:44.339780 kubelet[1748]: E0509 23:59:44.339653 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:59:44.536493 kubelet[1748]: I0509 23:59:44.536422 1748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-fv5cn" podStartSLOduration=6.111805503 podStartE2EDuration="8.536405795s" podCreationTimestamp="2025-05-09 23:59:36 +0000 UTC" firstStartedPulling="2025-05-09 23:59:41.524469678 +0000 UTC m=+22.011171631" lastFinishedPulling="2025-05-09 23:59:43.94906997 +0000 UTC m=+24.435771923" observedRunningTime="2025-05-09 23:59:44.536060602 +0000 UTC m=+25.022762555" watchObservedRunningTime="2025-05-09 23:59:44.536405795 +0000 UTC m=+25.023107748" May 9 23:59:45.340045 kubelet[1748]: E0509 23:59:45.339996 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:59:46.341036 kubelet[1748]: E0509 23:59:46.340746 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:59:47.341530 kubelet[1748]: E0509 23:59:47.341062 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:59:48.342121 kubelet[1748]: E0509 23:59:48.342049 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:59:49.124346 kubelet[1748]: I0509 23:59:49.124294 1748 topology_manager.go:215] "Topology Admit Handler" podUID="04b3b7af-9977-437b-9f30-24222b236699" podNamespace="default" podName="nfs-server-provisioner-0" May 9 23:59:49.129901 systemd[1]: Created slice kubepods-besteffort-pod04b3b7af_9977_437b_9f30_24222b236699.slice - libcontainer container kubepods-besteffort-pod04b3b7af_9977_437b_9f30_24222b236699.slice. 
May 9 23:59:49.200291 kubelet[1748]: I0509 23:59:49.200218 1748 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/04b3b7af-9977-437b-9f30-24222b236699-data\") pod \"nfs-server-provisioner-0\" (UID: \"04b3b7af-9977-437b-9f30-24222b236699\") " pod="default/nfs-server-provisioner-0" May 9 23:59:49.200291 kubelet[1748]: I0509 23:59:49.200266 1748 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bckst\" (UniqueName: \"kubernetes.io/projected/04b3b7af-9977-437b-9f30-24222b236699-kube-api-access-bckst\") pod \"nfs-server-provisioner-0\" (UID: \"04b3b7af-9977-437b-9f30-24222b236699\") " pod="default/nfs-server-provisioner-0" May 9 23:59:49.342767 kubelet[1748]: E0509 23:59:49.342719 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:59:49.432911 containerd[1432]: time="2025-05-09T23:59:49.432853061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:04b3b7af-9977-437b-9f30-24222b236699,Namespace:default,Attempt:0,}" May 9 23:59:49.472535 systemd-networkd[1373]: lxc7b9070a1ca53: Link UP May 9 23:59:49.478396 kernel: eth0: renamed from tmpc97e1 May 9 23:59:49.489016 systemd-networkd[1373]: lxc7b9070a1ca53: Gained carrier May 9 23:59:49.652488 containerd[1432]: time="2025-05-09T23:59:49.652310235Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:59:49.652488 containerd[1432]: time="2025-05-09T23:59:49.652396165Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:59:49.652488 containerd[1432]: time="2025-05-09T23:59:49.652413679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:59:49.652804 containerd[1432]: time="2025-05-09T23:59:49.652507727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:59:49.677605 systemd[1]: Started cri-containerd-c97e1b0caa801f0d7c95d1cc8aa4f6bcdf9b1ba4364c2feccb3bfb6b13160495.scope - libcontainer container c97e1b0caa801f0d7c95d1cc8aa4f6bcdf9b1ba4364c2feccb3bfb6b13160495. 
May 9 23:59:49.690794 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 9 23:59:49.708652 containerd[1432]: time="2025-05-09T23:59:49.708602980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:04b3b7af-9977-437b-9f30-24222b236699,Namespace:default,Attempt:0,} returns sandbox id \"c97e1b0caa801f0d7c95d1cc8aa4f6bcdf9b1ba4364c2feccb3bfb6b13160495\"" May 9 23:59:49.710185 containerd[1432]: time="2025-05-09T23:59:49.710150400Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" May 9 23:59:50.342896 kubelet[1748]: E0509 23:59:50.342820 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:59:50.849192 systemd-networkd[1373]: lxc7b9070a1ca53: Gained IPv6LL May 9 23:59:51.343572 kubelet[1748]: E0509 23:59:51.343392 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:59:51.451646 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount95463322.mount: Deactivated successfully. May 9 23:59:52.343900 kubelet[1748]: E0509 23:59:52.343796 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:59:52.812588 containerd[1432]: time="2025-05-09T23:59:52.812530210Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:59:52.813598 containerd[1432]: time="2025-05-09T23:59:52.813338938Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373625" May 9 23:59:52.814338 containerd[1432]: time="2025-05-09T23:59:52.814303060Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:59:52.822483 containerd[1432]: time="2025-05-09T23:59:52.822428803Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:59:52.825385 containerd[1432]: time="2025-05-09T23:59:52.825207283Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 3.114879424s" May 9 23:59:52.825385 containerd[1432]: time="2025-05-09T23:59:52.825252830Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" May 9 23:59:52.827515 containerd[1432]: time="2025-05-09T23:59:52.827478350Z" level=info msg="CreateContainer within sandbox \"c97e1b0caa801f0d7c95d1cc8aa4f6bcdf9b1ba4364c2feccb3bfb6b13160495\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" May 9 23:59:52.841146 containerd[1432]: time="2025-05-09T23:59:52.841090474Z" level=info msg="CreateContainer within sandbox 
\"c97e1b0caa801f0d7c95d1cc8aa4f6bcdf9b1ba4364c2feccb3bfb6b13160495\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"4fdeb8a0db4d206dfb76f0472b701e8132c5c277476d1d1d640c38d662321ebc\"" May 9 23:59:52.841699 containerd[1432]: time="2025-05-09T23:59:52.841654792Z" level=info msg="StartContainer for \"4fdeb8a0db4d206dfb76f0472b701e8132c5c277476d1d1d640c38d662321ebc\"" May 9 23:59:52.924557 systemd[1]: Started cri-containerd-4fdeb8a0db4d206dfb76f0472b701e8132c5c277476d1d1d640c38d662321ebc.scope - libcontainer container 4fdeb8a0db4d206dfb76f0472b701e8132c5c277476d1d1d640c38d662321ebc. May 9 23:59:52.977563 containerd[1432]: time="2025-05-09T23:59:52.977482757Z" level=info msg="StartContainer for \"4fdeb8a0db4d206dfb76f0472b701e8132c5c277476d1d1d640c38d662321ebc\" returns successfully" May 9 23:59:53.344917 kubelet[1748]: E0509 23:59:53.344866 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:59:53.557868 kubelet[1748]: I0509 23:59:53.557794 1748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.441785904 podStartE2EDuration="4.557778665s" podCreationTimestamp="2025-05-09 23:59:49 +0000 UTC" firstStartedPulling="2025-05-09 23:59:49.709925638 +0000 UTC m=+30.196627591" lastFinishedPulling="2025-05-09 23:59:52.825918439 +0000 UTC m=+33.312620352" observedRunningTime="2025-05-09 23:59:53.557456472 +0000 UTC m=+34.044158425" watchObservedRunningTime="2025-05-09 23:59:53.557778665 +0000 UTC m=+34.044480618" May 9 23:59:54.345960 kubelet[1748]: E0509 23:59:54.345904 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:59:55.346197 kubelet[1748]: E0509 23:59:55.346151 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:59:56.036256 update_engine[1426]: I20250509 23:59:56.036180 1426 update_attempter.cc:509] Updating boot flags... 
May 9 23:59:56.095395 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3151) May 9 23:59:56.133278 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3151) May 9 23:59:56.346824 kubelet[1748]: E0509 23:59:56.346664 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:59:57.347005 kubelet[1748]: E0509 23:59:57.346962 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:59:58.347472 kubelet[1748]: E0509 23:59:58.347417 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:59:59.348064 kubelet[1748]: E0509 23:59:59.348013 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:00:00.321640 kubelet[1748]: E0510 00:00:00.321597 1748 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:00:00.348972 kubelet[1748]: E0510 00:00:00.348929 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:00:01.350075 kubelet[1748]: E0510 00:00:01.350030 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:00:02.351191 kubelet[1748]: E0510 00:00:02.351151 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:00:03.237471 kubelet[1748]: I0510 00:00:03.237427 1748 topology_manager.go:215] "Topology Admit Handler" podUID="b3c6e4c8-7a47-4d8d-b0fa-1a3302e0e66f" podNamespace="default" podName="test-pod-1" May 10 00:00:03.250681 systemd[1]: Started logrotate.service - Rotate and Compress System Logs. May 10 00:00:03.254785 systemd[1]: Created slice kubepods-besteffort-podb3c6e4c8_7a47_4d8d_b0fa_1a3302e0e66f.slice - libcontainer container kubepods-besteffort-podb3c6e4c8_7a47_4d8d_b0fa_1a3302e0e66f.slice. May 10 00:00:03.262616 systemd[1]: logrotate.service: Deactivated successfully. May 10 00:00:03.351749 kubelet[1748]: E0510 00:00:03.351694 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:00:03.371513 kubelet[1748]: I0510 00:00:03.371387 1748 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7aa34031-2ef3-4fbc-9eec-03dc580a4e6b\" (UniqueName: \"kubernetes.io/nfs/b3c6e4c8-7a47-4d8d-b0fa-1a3302e0e66f-pvc-7aa34031-2ef3-4fbc-9eec-03dc580a4e6b\") pod \"test-pod-1\" (UID: \"b3c6e4c8-7a47-4d8d-b0fa-1a3302e0e66f\") " pod="default/test-pod-1" May 10 00:00:03.371513 kubelet[1748]: I0510 00:00:03.371440 1748 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlcf2\" (UniqueName: \"kubernetes.io/projected/b3c6e4c8-7a47-4d8d-b0fa-1a3302e0e66f-kube-api-access-nlcf2\") pod \"test-pod-1\" (UID: \"b3c6e4c8-7a47-4d8d-b0fa-1a3302e0e66f\") " pod="default/test-pod-1" May 10 00:00:03.495384 kernel: FS-Cache: Loaded May 10 00:00:03.518475 kernel: RPC: Registered named UNIX socket transport module. May 10 00:00:03.518636 kernel: RPC: Registered udp transport module. May 10 00:00:03.518657 kernel: RPC: Registered tcp transport module. 
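Each time a pod sandbox gets wired up in this log, the kernel reports the container-side interface being renamed (eth0: renamed from tmpcff3e, tmpc97e1 and, just above, tmpf7885). In every case the temporary name appears to be "tmp" plus the first five hex characters of the sandbox id returned by RunPodSandbox; that is an observation about this particular log, not a documented guarantee. The check below simply verifies the pattern against the three pairs visible here.

# (sandbox id, temporary interface name) pairs taken from the log above.
pairs = [
    ("cff3ed4c27c0eb18bec0499ceed56f3be78ba866c5b4c2a1528bcdf2e2ddb86a", "tmpcff3e"),
    ("c97e1b0caa801f0d7c95d1cc8aa4f6bcdf9b1ba4364c2feccb3bfb6b13160495", "tmpc97e1"),
    ("f7885c5bfa704070e14a358f8b461c80d81b650d2a4a9b10822947bc4ac75250", "tmpf7885"),
]

for sandbox_id, tmp_name in pairs:
    # The temp name seen in the "eth0: renamed from ..." kernel messages
    # matches "tmp" + the first five characters of the sandbox id.
    assert tmp_name == "tmp" + sandbox_id[:5], (sandbox_id, tmp_name)
print("all temporary veth names match tmp + sandbox-id[:5]")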
May 10 00:00:03.519809 kernel: RPC: Registered tcp-with-tls transport module. May 10 00:00:03.519847 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. May 10 00:00:03.700708 kernel: NFS: Registering the id_resolver key type May 10 00:00:03.700860 kernel: Key type id_resolver registered May 10 00:00:03.700880 kernel: Key type id_legacy registered May 10 00:00:03.721756 nfsidmap[3179]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' May 10 00:00:03.725315 nfsidmap[3182]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' May 10 00:00:03.858379 containerd[1432]: time="2025-05-10T00:00:03.858250589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:b3c6e4c8-7a47-4d8d-b0fa-1a3302e0e66f,Namespace:default,Attempt:0,}" May 10 00:00:03.888669 systemd-networkd[1373]: lxcbd928d14f423: Link UP May 10 00:00:03.898388 kernel: eth0: renamed from tmpf7885 May 10 00:00:03.907335 systemd-networkd[1373]: lxcbd928d14f423: Gained carrier May 10 00:00:04.097877 containerd[1432]: time="2025-05-10T00:00:04.097768696Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:00:04.097877 containerd[1432]: time="2025-05-10T00:00:04.097845206Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:00:04.097877 containerd[1432]: time="2025-05-10T00:00:04.097860884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:00:04.098494 containerd[1432]: time="2025-05-10T00:00:04.098451206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:00:04.117559 systemd[1]: Started cri-containerd-f7885c5bfa704070e14a358f8b461c80d81b650d2a4a9b10822947bc4ac75250.scope - libcontainer container f7885c5bfa704070e14a358f8b461c80d81b650d2a4a9b10822947bc4ac75250. 
May 10 00:00:04.127599 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 10 00:00:04.163883 containerd[1432]: time="2025-05-10T00:00:04.163838536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:b3c6e4c8-7a47-4d8d-b0fa-1a3302e0e66f,Namespace:default,Attempt:0,} returns sandbox id \"f7885c5bfa704070e14a358f8b461c80d81b650d2a4a9b10822947bc4ac75250\"" May 10 00:00:04.165702 containerd[1432]: time="2025-05-10T00:00:04.165668654Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 10 00:00:04.352524 kubelet[1748]: E0510 00:00:04.352482 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:00:05.119570 systemd-networkd[1373]: lxcbd928d14f423: Gained IPv6LL May 10 00:00:05.146559 containerd[1432]: time="2025-05-10T00:00:05.146505443Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:05.147825 containerd[1432]: time="2025-05-10T00:00:05.147781125Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" May 10 00:00:05.151180 containerd[1432]: time="2025-05-10T00:00:05.151137387Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\", size \"69948737\" in 985.438338ms" May 10 00:00:05.151180 containerd[1432]: time="2025-05-10T00:00:05.151172663Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\"" May 10 00:00:05.155029 containerd[1432]: time="2025-05-10T00:00:05.154756418Z" level=info msg="CreateContainer within sandbox \"f7885c5bfa704070e14a358f8b461c80d81b650d2a4a9b10822947bc4ac75250\" for container &ContainerMetadata{Name:test,Attempt:0,}" May 10 00:00:05.166042 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1259195218.mount: Deactivated successfully. May 10 00:00:05.171986 containerd[1432]: time="2025-05-10T00:00:05.171880969Z" level=info msg="CreateContainer within sandbox \"f7885c5bfa704070e14a358f8b461c80d81b650d2a4a9b10822947bc4ac75250\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"2e7d4791c3cafce7d4be0f49732e74c88da00a9b042d83d47c2115ae1ad75731\"" May 10 00:00:05.176388 containerd[1432]: time="2025-05-10T00:00:05.172578922Z" level=info msg="StartContainer for \"2e7d4791c3cafce7d4be0f49732e74c88da00a9b042d83d47c2115ae1ad75731\"" May 10 00:00:05.214585 systemd[1]: Started cri-containerd-2e7d4791c3cafce7d4be0f49732e74c88da00a9b042d83d47c2115ae1ad75731.scope - libcontainer container 2e7d4791c3cafce7d4be0f49732e74c88da00a9b042d83d47c2115ae1ad75731. 
May 10 00:00:05.237687 containerd[1432]: time="2025-05-10T00:00:05.237619158Z" level=info msg="StartContainer for \"2e7d4791c3cafce7d4be0f49732e74c88da00a9b042d83d47c2115ae1ad75731\" returns successfully" May 10 00:00:05.353451 kubelet[1748]: E0510 00:00:05.353400 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:00:05.581541 kubelet[1748]: I0510 00:00:05.581476 1748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=15.594812675 podStartE2EDuration="16.581458299s" podCreationTimestamp="2025-05-09 23:59:49 +0000 UTC" firstStartedPulling="2025-05-10 00:00:04.165262068 +0000 UTC m=+44.651964021" lastFinishedPulling="2025-05-10 00:00:05.151907692 +0000 UTC m=+45.638609645" observedRunningTime="2025-05-10 00:00:05.581214489 +0000 UTC m=+46.067916442" watchObservedRunningTime="2025-05-10 00:00:05.581458299 +0000 UTC m=+46.068160212" May 10 00:00:06.353985 kubelet[1748]: E0510 00:00:06.353939 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:00:07.354946 kubelet[1748]: E0510 00:00:07.354901 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:00:08.355711 kubelet[1748]: E0510 00:00:08.355661 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:00:09.356047 kubelet[1748]: E0510 00:00:09.355997 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:00:10.356168 kubelet[1748]: E0510 00:00:10.356113 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:00:11.356921 kubelet[1748]: E0510 00:00:11.356869 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:00:12.357526 kubelet[1748]: E0510 00:00:12.357450 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:00:12.710658 containerd[1432]: time="2025-05-10T00:00:12.710591456Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 10 00:00:12.716127 containerd[1432]: time="2025-05-10T00:00:12.716076382Z" level=info msg="StopContainer for \"abb7e63d494d7be10a73e31131c1143d70cb39a711da91d3efa3f04c857dd7c3\" with timeout 2 (s)" May 10 00:00:12.716410 containerd[1432]: time="2025-05-10T00:00:12.716389877Z" level=info msg="Stop container \"abb7e63d494d7be10a73e31131c1143d70cb39a711da91d3efa3f04c857dd7c3\" with signal terminated" May 10 00:00:12.722046 systemd-networkd[1373]: lxc_health: Link DOWN May 10 00:00:12.722269 systemd-networkd[1373]: lxc_health: Lost carrier May 10 00:00:12.747876 systemd[1]: cri-containerd-abb7e63d494d7be10a73e31131c1143d70cb39a711da91d3efa3f04c857dd7c3.scope: Deactivated successfully. May 10 00:00:12.748844 systemd[1]: cri-containerd-abb7e63d494d7be10a73e31131c1143d70cb39a711da91d3efa3f04c857dd7c3.scope: Consumed 8.325s CPU time. 
May 10 00:00:12.763860 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-abb7e63d494d7be10a73e31131c1143d70cb39a711da91d3efa3f04c857dd7c3-rootfs.mount: Deactivated successfully. May 10 00:00:12.773715 containerd[1432]: time="2025-05-10T00:00:12.773652867Z" level=info msg="shim disconnected" id=abb7e63d494d7be10a73e31131c1143d70cb39a711da91d3efa3f04c857dd7c3 namespace=k8s.io May 10 00:00:12.774150 containerd[1432]: time="2025-05-10T00:00:12.773962962Z" level=warning msg="cleaning up after shim disconnected" id=abb7e63d494d7be10a73e31131c1143d70cb39a711da91d3efa3f04c857dd7c3 namespace=k8s.io May 10 00:00:12.774150 containerd[1432]: time="2025-05-10T00:00:12.773977481Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 10 00:00:12.786966 containerd[1432]: time="2025-05-10T00:00:12.786919617Z" level=info msg="StopContainer for \"abb7e63d494d7be10a73e31131c1143d70cb39a711da91d3efa3f04c857dd7c3\" returns successfully" May 10 00:00:12.787867 containerd[1432]: time="2025-05-10T00:00:12.787838704Z" level=info msg="StopPodSandbox for \"cc647c6cfdcb3a901ca3f9ab5f34638cde37c272935df568098f3cf136fc1611\"" May 10 00:00:12.790400 containerd[1432]: time="2025-05-10T00:00:12.790335347Z" level=info msg="Container to stop \"7b15a3c93546175a991d0af36f8ef4efccef21309a8fd05b7fc155016a1e1cfd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:00:12.790400 containerd[1432]: time="2025-05-10T00:00:12.790393542Z" level=info msg="Container to stop \"d3658a6033e146490a42552addf4ca3196266235133eeffa35f238933868f117\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:00:12.790486 containerd[1432]: time="2025-05-10T00:00:12.790406821Z" level=info msg="Container to stop \"abb7e63d494d7be10a73e31131c1143d70cb39a711da91d3efa3f04c857dd7c3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:00:12.790486 containerd[1432]: time="2025-05-10T00:00:12.790416420Z" level=info msg="Container to stop \"4e6ca90602e71b88be08f0a4b211751d22c2e0a973c0cef4e6069c8313ea664c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:00:12.790486 containerd[1432]: time="2025-05-10T00:00:12.790425500Z" level=info msg="Container to stop \"6aaa22401ef1fa8e7f676a4333ca18af9e0b9f3fe2e18e6bb7d9e8a1be79a1cf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:00:12.791942 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cc647c6cfdcb3a901ca3f9ab5f34638cde37c272935df568098f3cf136fc1611-shm.mount: Deactivated successfully. May 10 00:00:12.795672 systemd[1]: cri-containerd-cc647c6cfdcb3a901ca3f9ab5f34638cde37c272935df568098f3cf136fc1611.scope: Deactivated successfully. May 10 00:00:12.813217 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cc647c6cfdcb3a901ca3f9ab5f34638cde37c272935df568098f3cf136fc1611-rootfs.mount: Deactivated successfully. 
May 10 00:00:12.816474 containerd[1432]: time="2025-05-10T00:00:12.816199501Z" level=info msg="shim disconnected" id=cc647c6cfdcb3a901ca3f9ab5f34638cde37c272935df568098f3cf136fc1611 namespace=k8s.io May 10 00:00:12.816474 containerd[1432]: time="2025-05-10T00:00:12.816268375Z" level=warning msg="cleaning up after shim disconnected" id=cc647c6cfdcb3a901ca3f9ab5f34638cde37c272935df568098f3cf136fc1611 namespace=k8s.io May 10 00:00:12.816474 containerd[1432]: time="2025-05-10T00:00:12.816276934Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 10 00:00:12.827603 containerd[1432]: time="2025-05-10T00:00:12.827544403Z" level=info msg="TearDown network for sandbox \"cc647c6cfdcb3a901ca3f9ab5f34638cde37c272935df568098f3cf136fc1611\" successfully" May 10 00:00:12.827603 containerd[1432]: time="2025-05-10T00:00:12.827585520Z" level=info msg="StopPodSandbox for \"cc647c6cfdcb3a901ca3f9ab5f34638cde37c272935df568098f3cf136fc1611\" returns successfully" May 10 00:00:12.931164 kubelet[1748]: I0510 00:00:12.931090 1748 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6e47b212-ad2b-4337-883c-3969e347b3a3-cilium-cgroup\") pod \"6e47b212-ad2b-4337-883c-3969e347b3a3\" (UID: \"6e47b212-ad2b-4337-883c-3969e347b3a3\") " May 10 00:00:12.931164 kubelet[1748]: I0510 00:00:12.931135 1748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e47b212-ad2b-4337-883c-3969e347b3a3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6e47b212-ad2b-4337-883c-3969e347b3a3" (UID: "6e47b212-ad2b-4337-883c-3969e347b3a3"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:00:12.931164 kubelet[1748]: I0510 00:00:12.931165 1748 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6e47b212-ad2b-4337-883c-3969e347b3a3-etc-cni-netd\") pod \"6e47b212-ad2b-4337-883c-3969e347b3a3\" (UID: \"6e47b212-ad2b-4337-883c-3969e347b3a3\") " May 10 00:00:12.931392 kubelet[1748]: I0510 00:00:12.931186 1748 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e47b212-ad2b-4337-883c-3969e347b3a3-lib-modules\") pod \"6e47b212-ad2b-4337-883c-3969e347b3a3\" (UID: \"6e47b212-ad2b-4337-883c-3969e347b3a3\") " May 10 00:00:12.931392 kubelet[1748]: I0510 00:00:12.931203 1748 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e47b212-ad2b-4337-883c-3969e347b3a3-xtables-lock\") pod \"6e47b212-ad2b-4337-883c-3969e347b3a3\" (UID: \"6e47b212-ad2b-4337-883c-3969e347b3a3\") " May 10 00:00:12.931392 kubelet[1748]: I0510 00:00:12.931206 1748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e47b212-ad2b-4337-883c-3969e347b3a3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6e47b212-ad2b-4337-883c-3969e347b3a3" (UID: "6e47b212-ad2b-4337-883c-3969e347b3a3"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:00:12.931392 kubelet[1748]: I0510 00:00:12.931224 1748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e47b212-ad2b-4337-883c-3969e347b3a3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6e47b212-ad2b-4337-883c-3969e347b3a3" (UID: "6e47b212-ad2b-4337-883c-3969e347b3a3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:00:12.931392 kubelet[1748]: I0510 00:00:12.931242 1748 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6e47b212-ad2b-4337-883c-3969e347b3a3-cni-path\") pod \"6e47b212-ad2b-4337-883c-3969e347b3a3\" (UID: \"6e47b212-ad2b-4337-883c-3969e347b3a3\") " May 10 00:00:12.931516 kubelet[1748]: I0510 00:00:12.931252 1748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e47b212-ad2b-4337-883c-3969e347b3a3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6e47b212-ad2b-4337-883c-3969e347b3a3" (UID: "6e47b212-ad2b-4337-883c-3969e347b3a3"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:00:12.931516 kubelet[1748]: I0510 00:00:12.931260 1748 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6e47b212-ad2b-4337-883c-3969e347b3a3-bpf-maps\") pod \"6e47b212-ad2b-4337-883c-3969e347b3a3\" (UID: \"6e47b212-ad2b-4337-883c-3969e347b3a3\") " May 10 00:00:12.931516 kubelet[1748]: I0510 00:00:12.931266 1748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e47b212-ad2b-4337-883c-3969e347b3a3-cni-path" (OuterVolumeSpecName: "cni-path") pod "6e47b212-ad2b-4337-883c-3969e347b3a3" (UID: "6e47b212-ad2b-4337-883c-3969e347b3a3"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:00:12.931516 kubelet[1748]: I0510 00:00:12.931280 1748 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6e47b212-ad2b-4337-883c-3969e347b3a3-cilium-config-path\") pod \"6e47b212-ad2b-4337-883c-3969e347b3a3\" (UID: \"6e47b212-ad2b-4337-883c-3969e347b3a3\") " May 10 00:00:12.931516 kubelet[1748]: I0510 00:00:12.931309 1748 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6e47b212-ad2b-4337-883c-3969e347b3a3-host-proc-sys-kernel\") pod \"6e47b212-ad2b-4337-883c-3969e347b3a3\" (UID: \"6e47b212-ad2b-4337-883c-3969e347b3a3\") " May 10 00:00:12.931621 kubelet[1748]: I0510 00:00:12.931330 1748 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jb9vt\" (UniqueName: \"kubernetes.io/projected/6e47b212-ad2b-4337-883c-3969e347b3a3-kube-api-access-jb9vt\") pod \"6e47b212-ad2b-4337-883c-3969e347b3a3\" (UID: \"6e47b212-ad2b-4337-883c-3969e347b3a3\") " May 10 00:00:12.931621 kubelet[1748]: I0510 00:00:12.931285 1748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e47b212-ad2b-4337-883c-3969e347b3a3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6e47b212-ad2b-4337-883c-3969e347b3a3" (UID: "6e47b212-ad2b-4337-883c-3969e347b3a3"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:00:12.931621 kubelet[1748]: I0510 00:00:12.931345 1748 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6e47b212-ad2b-4337-883c-3969e347b3a3-host-proc-sys-net\") pod \"6e47b212-ad2b-4337-883c-3969e347b3a3\" (UID: \"6e47b212-ad2b-4337-883c-3969e347b3a3\") " May 10 00:00:12.931621 kubelet[1748]: I0510 00:00:12.931390 1748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e47b212-ad2b-4337-883c-3969e347b3a3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6e47b212-ad2b-4337-883c-3969e347b3a3" (UID: "6e47b212-ad2b-4337-883c-3969e347b3a3"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:00:12.931621 kubelet[1748]: I0510 00:00:12.931400 1748 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6e47b212-ad2b-4337-883c-3969e347b3a3-cilium-run\") pod \"6e47b212-ad2b-4337-883c-3969e347b3a3\" (UID: \"6e47b212-ad2b-4337-883c-3969e347b3a3\") " May 10 00:00:12.931721 kubelet[1748]: I0510 00:00:12.931414 1748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e47b212-ad2b-4337-883c-3969e347b3a3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6e47b212-ad2b-4337-883c-3969e347b3a3" (UID: "6e47b212-ad2b-4337-883c-3969e347b3a3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:00:12.931721 kubelet[1748]: I0510 00:00:12.931429 1748 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6e47b212-ad2b-4337-883c-3969e347b3a3-hubble-tls\") pod \"6e47b212-ad2b-4337-883c-3969e347b3a3\" (UID: \"6e47b212-ad2b-4337-883c-3969e347b3a3\") " May 10 00:00:12.931721 kubelet[1748]: I0510 00:00:12.931449 1748 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6e47b212-ad2b-4337-883c-3969e347b3a3-hostproc\") pod \"6e47b212-ad2b-4337-883c-3969e347b3a3\" (UID: \"6e47b212-ad2b-4337-883c-3969e347b3a3\") " May 10 00:00:12.931721 kubelet[1748]: I0510 00:00:12.931467 1748 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6e47b212-ad2b-4337-883c-3969e347b3a3-clustermesh-secrets\") pod \"6e47b212-ad2b-4337-883c-3969e347b3a3\" (UID: \"6e47b212-ad2b-4337-883c-3969e347b3a3\") " May 10 00:00:12.931721 kubelet[1748]: I0510 00:00:12.931494 1748 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6e47b212-ad2b-4337-883c-3969e347b3a3-bpf-maps\") on node \"10.0.0.100\" DevicePath \"\"" May 10 00:00:12.931721 kubelet[1748]: I0510 00:00:12.931512 1748 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6e47b212-ad2b-4337-883c-3969e347b3a3-host-proc-sys-kernel\") on node \"10.0.0.100\" DevicePath \"\"" May 10 00:00:12.931841 kubelet[1748]: I0510 00:00:12.931521 1748 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e47b212-ad2b-4337-883c-3969e347b3a3-xtables-lock\") on node \"10.0.0.100\" DevicePath \"\"" May 10 00:00:12.931841 kubelet[1748]: 
I0510 00:00:12.931529 1748 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6e47b212-ad2b-4337-883c-3969e347b3a3-cni-path\") on node \"10.0.0.100\" DevicePath \"\"" May 10 00:00:12.931841 kubelet[1748]: I0510 00:00:12.931537 1748 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6e47b212-ad2b-4337-883c-3969e347b3a3-host-proc-sys-net\") on node \"10.0.0.100\" DevicePath \"\"" May 10 00:00:12.931841 kubelet[1748]: I0510 00:00:12.931544 1748 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e47b212-ad2b-4337-883c-3969e347b3a3-lib-modules\") on node \"10.0.0.100\" DevicePath \"\"" May 10 00:00:12.931841 kubelet[1748]: I0510 00:00:12.931551 1748 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6e47b212-ad2b-4337-883c-3969e347b3a3-cilium-cgroup\") on node \"10.0.0.100\" DevicePath \"\"" May 10 00:00:12.931841 kubelet[1748]: I0510 00:00:12.931558 1748 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6e47b212-ad2b-4337-883c-3969e347b3a3-etc-cni-netd\") on node \"10.0.0.100\" DevicePath \"\"" May 10 00:00:12.931841 kubelet[1748]: I0510 00:00:12.931748 1748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e47b212-ad2b-4337-883c-3969e347b3a3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6e47b212-ad2b-4337-883c-3969e347b3a3" (UID: "6e47b212-ad2b-4337-883c-3969e347b3a3"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:00:12.933014 kubelet[1748]: I0510 00:00:12.932917 1748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e47b212-ad2b-4337-883c-3969e347b3a3-hostproc" (OuterVolumeSpecName: "hostproc") pod "6e47b212-ad2b-4337-883c-3969e347b3a3" (UID: "6e47b212-ad2b-4337-883c-3969e347b3a3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:00:12.934019 kubelet[1748]: I0510 00:00:12.933929 1748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e47b212-ad2b-4337-883c-3969e347b3a3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6e47b212-ad2b-4337-883c-3969e347b3a3" (UID: "6e47b212-ad2b-4337-883c-3969e347b3a3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 10 00:00:12.936393 kubelet[1748]: I0510 00:00:12.935878 1748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e47b212-ad2b-4337-883c-3969e347b3a3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6e47b212-ad2b-4337-883c-3969e347b3a3" (UID: "6e47b212-ad2b-4337-883c-3969e347b3a3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 10 00:00:12.936180 systemd[1]: var-lib-kubelet-pods-6e47b212\x2dad2b\x2d4337\x2d883c\x2d3969e347b3a3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
May 10 00:00:12.936791 kubelet[1748]: I0510 00:00:12.936529 1748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e47b212-ad2b-4337-883c-3969e347b3a3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6e47b212-ad2b-4337-883c-3969e347b3a3" (UID: "6e47b212-ad2b-4337-883c-3969e347b3a3"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 10 00:00:12.936906 kubelet[1748]: I0510 00:00:12.936879 1748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e47b212-ad2b-4337-883c-3969e347b3a3-kube-api-access-jb9vt" (OuterVolumeSpecName: "kube-api-access-jb9vt") pod "6e47b212-ad2b-4337-883c-3969e347b3a3" (UID: "6e47b212-ad2b-4337-883c-3969e347b3a3"). InnerVolumeSpecName "kube-api-access-jb9vt". PluginName "kubernetes.io/projected", VolumeGidValue "" May 10 00:00:13.031862 kubelet[1748]: I0510 00:00:13.031722 1748 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6e47b212-ad2b-4337-883c-3969e347b3a3-cilium-config-path\") on node \"10.0.0.100\" DevicePath \"\"" May 10 00:00:13.031862 kubelet[1748]: I0510 00:00:13.031757 1748 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-jb9vt\" (UniqueName: \"kubernetes.io/projected/6e47b212-ad2b-4337-883c-3969e347b3a3-kube-api-access-jb9vt\") on node \"10.0.0.100\" DevicePath \"\"" May 10 00:00:13.031862 kubelet[1748]: I0510 00:00:13.031767 1748 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6e47b212-ad2b-4337-883c-3969e347b3a3-hostproc\") on node \"10.0.0.100\" DevicePath \"\"" May 10 00:00:13.031862 kubelet[1748]: I0510 00:00:13.031779 1748 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6e47b212-ad2b-4337-883c-3969e347b3a3-clustermesh-secrets\") on node \"10.0.0.100\" DevicePath \"\"" May 10 00:00:13.031862 kubelet[1748]: I0510 00:00:13.031787 1748 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6e47b212-ad2b-4337-883c-3969e347b3a3-cilium-run\") on node \"10.0.0.100\" DevicePath \"\"" May 10 00:00:13.031862 kubelet[1748]: I0510 00:00:13.031794 1748 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6e47b212-ad2b-4337-883c-3969e347b3a3-hubble-tls\") on node \"10.0.0.100\" DevicePath \"\"" May 10 00:00:13.358692 kubelet[1748]: E0510 00:00:13.358567 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:00:13.610039 kubelet[1748]: I0510 00:00:13.609851 1748 scope.go:117] "RemoveContainer" containerID="abb7e63d494d7be10a73e31131c1143d70cb39a711da91d3efa3f04c857dd7c3" May 10 00:00:13.611567 containerd[1432]: time="2025-05-10T00:00:13.611489115Z" level=info msg="RemoveContainer for \"abb7e63d494d7be10a73e31131c1143d70cb39a711da91d3efa3f04c857dd7c3\"" May 10 00:00:13.613848 systemd[1]: Removed slice kubepods-burstable-pod6e47b212_ad2b_4337_883c_3969e347b3a3.slice - libcontainer container kubepods-burstable-pod6e47b212_ad2b_4337_883c_3969e347b3a3.slice. May 10 00:00:13.613928 systemd[1]: kubepods-burstable-pod6e47b212_ad2b_4337_883c_3969e347b3a3.slice: Consumed 8.492s CPU time. 
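The kubelet entries above walk the terminated Cilium pod (UID 6e47b212-ad2b-4337-883c-3969e347b3a3) through three stages per volume: the reconciler starts UnmountVolume, operation_generator confirms UnmountVolume.TearDown succeeded, and the volume is finally reported as detached with an empty DevicePath, after which systemd can remove the pod's cgroup slice (8.492s of CPU consumed over its lifetime). A minimal sketch that groups those stages per volume from a journal export; the file name and function name are illustrative, and it assumes entries are delimited by their "May 10 HH:MM:SS" prefixes as in this excerpt:

import re
from collections import defaultdict

# Entries run together on long lines here, so split on the journal timestamp
# prefixes rather than on newlines.
ENTRY_SPLIT = re.compile(r"(?=May \d+ \d{2}:\d{2}:\d{2}\.\d+ )")

STAGES = {
    "UnmountVolume started for volume": "unmount started",
    "UnmountVolume.TearDown succeeded for volume": "teardown succeeded",
    "Volume detached for volume": "detached",
}

def volume_teardown_stages(journal_text, pod_uid):
    """Map volume name -> teardown stages observed for one pod UID."""
    stages = defaultdict(set)
    # The volume name follows "<plugin>/<uid>-" inside the UniqueName field.
    name_re = re.compile(re.escape(pod_uid) + r'-([\w.-]+?)\\?"')
    for entry in ENTRY_SPLIT.split(journal_text):
        if pod_uid not in entry:
            continue
        for marker, stage in STAGES.items():
            if marker in entry:
                m = name_re.search(entry)
                if m:
                    stages[m.group(1)].add(stage)
    return stages

if __name__ == "__main__":
    text = open("kubelet.journal.txt").read()   # hypothetical export of the journal above
    for vol, seen in sorted(volume_teardown_stages(text, "6e47b212-ad2b-4337-883c-3969e347b3a3").items()):
        print(f"{vol:25s} {sorted(seen)}")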
May 10 00:00:13.617344 containerd[1432]: time="2025-05-10T00:00:13.617296965Z" level=info msg="RemoveContainer for \"abb7e63d494d7be10a73e31131c1143d70cb39a711da91d3efa3f04c857dd7c3\" returns successfully" May 10 00:00:13.617639 kubelet[1748]: I0510 00:00:13.617604 1748 scope.go:117] "RemoveContainer" containerID="4e6ca90602e71b88be08f0a4b211751d22c2e0a973c0cef4e6069c8313ea664c" May 10 00:00:13.618845 containerd[1432]: time="2025-05-10T00:00:13.618806173Z" level=info msg="RemoveContainer for \"4e6ca90602e71b88be08f0a4b211751d22c2e0a973c0cef4e6069c8313ea664c\"" May 10 00:00:13.621122 containerd[1432]: time="2025-05-10T00:00:13.621084604Z" level=info msg="RemoveContainer for \"4e6ca90602e71b88be08f0a4b211751d22c2e0a973c0cef4e6069c8313ea664c\" returns successfully" May 10 00:00:13.621371 kubelet[1748]: I0510 00:00:13.621320 1748 scope.go:117] "RemoveContainer" containerID="d3658a6033e146490a42552addf4ca3196266235133eeffa35f238933868f117" May 10 00:00:13.622424 containerd[1432]: time="2025-05-10T00:00:13.622388547Z" level=info msg="RemoveContainer for \"d3658a6033e146490a42552addf4ca3196266235133eeffa35f238933868f117\"" May 10 00:00:13.634272 containerd[1432]: time="2025-05-10T00:00:13.634198831Z" level=info msg="RemoveContainer for \"d3658a6033e146490a42552addf4ca3196266235133eeffa35f238933868f117\" returns successfully" May 10 00:00:13.634481 kubelet[1748]: I0510 00:00:13.634454 1748 scope.go:117] "RemoveContainer" containerID="7b15a3c93546175a991d0af36f8ef4efccef21309a8fd05b7fc155016a1e1cfd" May 10 00:00:13.635552 containerd[1432]: time="2025-05-10T00:00:13.635512014Z" level=info msg="RemoveContainer for \"7b15a3c93546175a991d0af36f8ef4efccef21309a8fd05b7fc155016a1e1cfd\"" May 10 00:00:13.638753 containerd[1432]: time="2025-05-10T00:00:13.638076344Z" level=info msg="RemoveContainer for \"7b15a3c93546175a991d0af36f8ef4efccef21309a8fd05b7fc155016a1e1cfd\" returns successfully" May 10 00:00:13.638824 kubelet[1748]: I0510 00:00:13.638250 1748 scope.go:117] "RemoveContainer" containerID="6aaa22401ef1fa8e7f676a4333ca18af9e0b9f3fe2e18e6bb7d9e8a1be79a1cf" May 10 00:00:13.639654 containerd[1432]: time="2025-05-10T00:00:13.639608510Z" level=info msg="RemoveContainer for \"6aaa22401ef1fa8e7f676a4333ca18af9e0b9f3fe2e18e6bb7d9e8a1be79a1cf\"" May 10 00:00:13.642004 containerd[1432]: time="2025-05-10T00:00:13.641903700Z" level=info msg="RemoveContainer for \"6aaa22401ef1fa8e7f676a4333ca18af9e0b9f3fe2e18e6bb7d9e8a1be79a1cf\" returns successfully" May 10 00:00:13.642779 containerd[1432]: time="2025-05-10T00:00:13.642387664Z" level=error msg="ContainerStatus for \"abb7e63d494d7be10a73e31131c1143d70cb39a711da91d3efa3f04c857dd7c3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"abb7e63d494d7be10a73e31131c1143d70cb39a711da91d3efa3f04c857dd7c3\": not found" May 10 00:00:13.642833 kubelet[1748]: I0510 00:00:13.642106 1748 scope.go:117] "RemoveContainer" containerID="abb7e63d494d7be10a73e31131c1143d70cb39a711da91d3efa3f04c857dd7c3" May 10 00:00:13.642833 kubelet[1748]: E0510 00:00:13.642559 1748 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"abb7e63d494d7be10a73e31131c1143d70cb39a711da91d3efa3f04c857dd7c3\": not found" containerID="abb7e63d494d7be10a73e31131c1143d70cb39a711da91d3efa3f04c857dd7c3" May 10 00:00:13.642833 kubelet[1748]: I0510 00:00:13.642590 1748 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"abb7e63d494d7be10a73e31131c1143d70cb39a711da91d3efa3f04c857dd7c3"} err="failed to get container status \"abb7e63d494d7be10a73e31131c1143d70cb39a711da91d3efa3f04c857dd7c3\": rpc error: code = NotFound desc = an error occurred when try to find container \"abb7e63d494d7be10a73e31131c1143d70cb39a711da91d3efa3f04c857dd7c3\": not found" May 10 00:00:13.642833 kubelet[1748]: I0510 00:00:13.642674 1748 scope.go:117] "RemoveContainer" containerID="4e6ca90602e71b88be08f0a4b211751d22c2e0a973c0cef4e6069c8313ea664c" May 10 00:00:13.642939 containerd[1432]: time="2025-05-10T00:00:13.642858789Z" level=error msg="ContainerStatus for \"4e6ca90602e71b88be08f0a4b211751d22c2e0a973c0cef4e6069c8313ea664c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4e6ca90602e71b88be08f0a4b211751d22c2e0a973c0cef4e6069c8313ea664c\": not found" May 10 00:00:13.643040 kubelet[1748]: E0510 00:00:13.642979 1748 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4e6ca90602e71b88be08f0a4b211751d22c2e0a973c0cef4e6069c8313ea664c\": not found" containerID="4e6ca90602e71b88be08f0a4b211751d22c2e0a973c0cef4e6069c8313ea664c" May 10 00:00:13.643040 kubelet[1748]: I0510 00:00:13.643029 1748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4e6ca90602e71b88be08f0a4b211751d22c2e0a973c0cef4e6069c8313ea664c"} err="failed to get container status \"4e6ca90602e71b88be08f0a4b211751d22c2e0a973c0cef4e6069c8313ea664c\": rpc error: code = NotFound desc = an error occurred when try to find container \"4e6ca90602e71b88be08f0a4b211751d22c2e0a973c0cef4e6069c8313ea664c\": not found" May 10 00:00:13.643092 kubelet[1748]: I0510 00:00:13.643044 1748 scope.go:117] "RemoveContainer" containerID="d3658a6033e146490a42552addf4ca3196266235133eeffa35f238933868f117" May 10 00:00:13.643259 containerd[1432]: time="2025-05-10T00:00:13.643202203Z" level=error msg="ContainerStatus for \"d3658a6033e146490a42552addf4ca3196266235133eeffa35f238933868f117\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d3658a6033e146490a42552addf4ca3196266235133eeffa35f238933868f117\": not found" May 10 00:00:13.643403 kubelet[1748]: E0510 00:00:13.643348 1748 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d3658a6033e146490a42552addf4ca3196266235133eeffa35f238933868f117\": not found" containerID="d3658a6033e146490a42552addf4ca3196266235133eeffa35f238933868f117" May 10 00:00:13.643451 kubelet[1748]: I0510 00:00:13.643407 1748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d3658a6033e146490a42552addf4ca3196266235133eeffa35f238933868f117"} err="failed to get container status \"d3658a6033e146490a42552addf4ca3196266235133eeffa35f238933868f117\": rpc error: code = NotFound desc = an error occurred when try to find container \"d3658a6033e146490a42552addf4ca3196266235133eeffa35f238933868f117\": not found" May 10 00:00:13.643451 kubelet[1748]: I0510 00:00:13.643423 1748 scope.go:117] "RemoveContainer" containerID="7b15a3c93546175a991d0af36f8ef4efccef21309a8fd05b7fc155016a1e1cfd" May 10 00:00:13.643725 containerd[1432]: time="2025-05-10T00:00:13.643682528Z" level=error msg="ContainerStatus for 
\"7b15a3c93546175a991d0af36f8ef4efccef21309a8fd05b7fc155016a1e1cfd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7b15a3c93546175a991d0af36f8ef4efccef21309a8fd05b7fc155016a1e1cfd\": not found" May 10 00:00:13.643857 kubelet[1748]: E0510 00:00:13.643816 1748 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7b15a3c93546175a991d0af36f8ef4efccef21309a8fd05b7fc155016a1e1cfd\": not found" containerID="7b15a3c93546175a991d0af36f8ef4efccef21309a8fd05b7fc155016a1e1cfd" May 10 00:00:13.643895 kubelet[1748]: I0510 00:00:13.643852 1748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7b15a3c93546175a991d0af36f8ef4efccef21309a8fd05b7fc155016a1e1cfd"} err="failed to get container status \"7b15a3c93546175a991d0af36f8ef4efccef21309a8fd05b7fc155016a1e1cfd\": rpc error: code = NotFound desc = an error occurred when try to find container \"7b15a3c93546175a991d0af36f8ef4efccef21309a8fd05b7fc155016a1e1cfd\": not found" May 10 00:00:13.643895 kubelet[1748]: I0510 00:00:13.643870 1748 scope.go:117] "RemoveContainer" containerID="6aaa22401ef1fa8e7f676a4333ca18af9e0b9f3fe2e18e6bb7d9e8a1be79a1cf" May 10 00:00:13.644304 containerd[1432]: time="2025-05-10T00:00:13.644078738Z" level=error msg="ContainerStatus for \"6aaa22401ef1fa8e7f676a4333ca18af9e0b9f3fe2e18e6bb7d9e8a1be79a1cf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6aaa22401ef1fa8e7f676a4333ca18af9e0b9f3fe2e18e6bb7d9e8a1be79a1cf\": not found" May 10 00:00:13.644386 kubelet[1748]: E0510 00:00:13.644223 1748 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6aaa22401ef1fa8e7f676a4333ca18af9e0b9f3fe2e18e6bb7d9e8a1be79a1cf\": not found" containerID="6aaa22401ef1fa8e7f676a4333ca18af9e0b9f3fe2e18e6bb7d9e8a1be79a1cf" May 10 00:00:13.644386 kubelet[1748]: I0510 00:00:13.644274 1748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6aaa22401ef1fa8e7f676a4333ca18af9e0b9f3fe2e18e6bb7d9e8a1be79a1cf"} err="failed to get container status \"6aaa22401ef1fa8e7f676a4333ca18af9e0b9f3fe2e18e6bb7d9e8a1be79a1cf\": rpc error: code = NotFound desc = an error occurred when try to find container \"6aaa22401ef1fa8e7f676a4333ca18af9e0b9f3fe2e18e6bb7d9e8a1be79a1cf\": not found" May 10 00:00:13.698070 systemd[1]: var-lib-kubelet-pods-6e47b212\x2dad2b\x2d4337\x2d883c\x2d3969e347b3a3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djb9vt.mount: Deactivated successfully. May 10 00:00:13.698183 systemd[1]: var-lib-kubelet-pods-6e47b212\x2dad2b\x2d4337\x2d883c\x2d3969e347b3a3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
May 10 00:00:14.358991 kubelet[1748]: E0510 00:00:14.358937 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:00:14.469564 kubelet[1748]: I0510 00:00:14.469498 1748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e47b212-ad2b-4337-883c-3969e347b3a3" path="/var/lib/kubelet/pods/6e47b212-ad2b-4337-883c-3969e347b3a3/volumes" May 10 00:00:15.360111 kubelet[1748]: E0510 00:00:15.360065 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:00:15.477404 kubelet[1748]: E0510 00:00:15.477346 1748 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 10 00:00:15.573026 kubelet[1748]: I0510 00:00:15.572981 1748 topology_manager.go:215] "Topology Admit Handler" podUID="a1c955f9-dfab-42c0-a33c-8bca3f45a15f" podNamespace="kube-system" podName="cilium-operator-599987898-qwv2q" May 10 00:00:15.573167 kubelet[1748]: E0510 00:00:15.573040 1748 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6e47b212-ad2b-4337-883c-3969e347b3a3" containerName="mount-bpf-fs" May 10 00:00:15.573167 kubelet[1748]: E0510 00:00:15.573051 1748 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6e47b212-ad2b-4337-883c-3969e347b3a3" containerName="apply-sysctl-overwrites" May 10 00:00:15.573167 kubelet[1748]: E0510 00:00:15.573059 1748 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6e47b212-ad2b-4337-883c-3969e347b3a3" containerName="mount-cgroup" May 10 00:00:15.573167 kubelet[1748]: E0510 00:00:15.573065 1748 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6e47b212-ad2b-4337-883c-3969e347b3a3" containerName="clean-cilium-state" May 10 00:00:15.573167 kubelet[1748]: E0510 00:00:15.573073 1748 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6e47b212-ad2b-4337-883c-3969e347b3a3" containerName="cilium-agent" May 10 00:00:15.573167 kubelet[1748]: I0510 00:00:15.573093 1748 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e47b212-ad2b-4337-883c-3969e347b3a3" containerName="cilium-agent" May 10 00:00:15.575533 kubelet[1748]: W0510 00:00:15.575507 1748 reflector.go:547] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:10.0.0.100" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.100' and this object May 10 00:00:15.575655 kubelet[1748]: E0510 00:00:15.575632 1748 reflector.go:150] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:10.0.0.100" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.100' and this object May 10 00:00:15.579649 systemd[1]: Created slice kubepods-besteffort-poda1c955f9_dfab_42c0_a33c_8bca3f45a15f.slice - libcontainer container kubepods-besteffort-poda1c955f9_dfab_42c0_a33c_8bca3f45a15f.slice. 
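After the cleanup pass, kubelet_volumes.go reports the orphaned pod volumes dir for /var/lib/kubelet/pods/6e47b212-ad2b-4337-883c-3969e347b3a3/volumes as cleaned up. An illustrative check that could be run on the node to confirm nothing is left behind under that directory; the path comes from the log line above and the script is only a sketch:

from pathlib import Path

POD_UID = "6e47b212-ad2b-4337-883c-3969e347b3a3"            # from the log above
VOLUMES_DIR = Path("/var/lib/kubelet/pods") / POD_UID / "volumes"

if not VOLUMES_DIR.exists():
    print(f"{VOLUMES_DIR}: removed (pod fully cleaned up)")
else:
    # Any leftovers here would keep the pod directory from being garbage collected.
    leftovers = sorted(p.relative_to(VOLUMES_DIR) for p in VOLUMES_DIR.rglob("*"))
    print(f"{VOLUMES_DIR}: {len(leftovers)} entries remain")
    for entry in leftovers:
        print("  ", entry)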
May 10 00:00:15.585927 kubelet[1748]: I0510 00:00:15.585890 1748 topology_manager.go:215] "Topology Admit Handler" podUID="1ad823bd-2bfa-47a0-9875-777316b01975" podNamespace="kube-system" podName="cilium-vp989" May 10 00:00:15.591703 systemd[1]: Created slice kubepods-burstable-pod1ad823bd_2bfa_47a0_9875_777316b01975.slice - libcontainer container kubepods-burstable-pod1ad823bd_2bfa_47a0_9875_777316b01975.slice. May 10 00:00:15.751300 kubelet[1748]: I0510 00:00:15.748903 1748 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1ad823bd-2bfa-47a0-9875-777316b01975-cilium-run\") pod \"cilium-vp989\" (UID: \"1ad823bd-2bfa-47a0-9875-777316b01975\") " pod="kube-system/cilium-vp989" May 10 00:00:15.751300 kubelet[1748]: I0510 00:00:15.748944 1748 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1ad823bd-2bfa-47a0-9875-777316b01975-cni-path\") pod \"cilium-vp989\" (UID: \"1ad823bd-2bfa-47a0-9875-777316b01975\") " pod="kube-system/cilium-vp989" May 10 00:00:15.751300 kubelet[1748]: I0510 00:00:15.748968 1748 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1ad823bd-2bfa-47a0-9875-777316b01975-xtables-lock\") pod \"cilium-vp989\" (UID: \"1ad823bd-2bfa-47a0-9875-777316b01975\") " pod="kube-system/cilium-vp989" May 10 00:00:15.751300 kubelet[1748]: I0510 00:00:15.748986 1748 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1ad823bd-2bfa-47a0-9875-777316b01975-host-proc-sys-net\") pod \"cilium-vp989\" (UID: \"1ad823bd-2bfa-47a0-9875-777316b01975\") " pod="kube-system/cilium-vp989" May 10 00:00:15.751300 kubelet[1748]: I0510 00:00:15.749005 1748 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1ad823bd-2bfa-47a0-9875-777316b01975-bpf-maps\") pod \"cilium-vp989\" (UID: \"1ad823bd-2bfa-47a0-9875-777316b01975\") " pod="kube-system/cilium-vp989" May 10 00:00:15.751300 kubelet[1748]: I0510 00:00:15.749052 1748 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1ad823bd-2bfa-47a0-9875-777316b01975-cilium-cgroup\") pod \"cilium-vp989\" (UID: \"1ad823bd-2bfa-47a0-9875-777316b01975\") " pod="kube-system/cilium-vp989" May 10 00:00:15.751638 kubelet[1748]: I0510 00:00:15.749111 1748 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1ad823bd-2bfa-47a0-9875-777316b01975-cilium-config-path\") pod \"cilium-vp989\" (UID: \"1ad823bd-2bfa-47a0-9875-777316b01975\") " pod="kube-system/cilium-vp989" May 10 00:00:15.751638 kubelet[1748]: I0510 00:00:15.749153 1748 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1ad823bd-2bfa-47a0-9875-777316b01975-cilium-ipsec-secrets\") pod \"cilium-vp989\" (UID: \"1ad823bd-2bfa-47a0-9875-777316b01975\") " pod="kube-system/cilium-vp989" May 10 00:00:15.751638 kubelet[1748]: I0510 00:00:15.749180 1748 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a1c955f9-dfab-42c0-a33c-8bca3f45a15f-cilium-config-path\") pod \"cilium-operator-599987898-qwv2q\" (UID: \"a1c955f9-dfab-42c0-a33c-8bca3f45a15f\") " pod="kube-system/cilium-operator-599987898-qwv2q" May 10 00:00:15.751638 kubelet[1748]: I0510 00:00:15.749202 1748 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1ad823bd-2bfa-47a0-9875-777316b01975-etc-cni-netd\") pod \"cilium-vp989\" (UID: \"1ad823bd-2bfa-47a0-9875-777316b01975\") " pod="kube-system/cilium-vp989" May 10 00:00:15.751638 kubelet[1748]: I0510 00:00:15.749227 1748 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1ad823bd-2bfa-47a0-9875-777316b01975-lib-modules\") pod \"cilium-vp989\" (UID: \"1ad823bd-2bfa-47a0-9875-777316b01975\") " pod="kube-system/cilium-vp989" May 10 00:00:15.751749 kubelet[1748]: I0510 00:00:15.749266 1748 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1ad823bd-2bfa-47a0-9875-777316b01975-clustermesh-secrets\") pod \"cilium-vp989\" (UID: \"1ad823bd-2bfa-47a0-9875-777316b01975\") " pod="kube-system/cilium-vp989" May 10 00:00:15.751749 kubelet[1748]: I0510 00:00:15.749285 1748 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwkcf\" (UniqueName: \"kubernetes.io/projected/a1c955f9-dfab-42c0-a33c-8bca3f45a15f-kube-api-access-qwkcf\") pod \"cilium-operator-599987898-qwv2q\" (UID: \"a1c955f9-dfab-42c0-a33c-8bca3f45a15f\") " pod="kube-system/cilium-operator-599987898-qwv2q" May 10 00:00:15.751749 kubelet[1748]: I0510 00:00:15.749311 1748 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1ad823bd-2bfa-47a0-9875-777316b01975-hostproc\") pod \"cilium-vp989\" (UID: \"1ad823bd-2bfa-47a0-9875-777316b01975\") " pod="kube-system/cilium-vp989" May 10 00:00:15.751749 kubelet[1748]: I0510 00:00:15.749329 1748 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1ad823bd-2bfa-47a0-9875-777316b01975-host-proc-sys-kernel\") pod \"cilium-vp989\" (UID: \"1ad823bd-2bfa-47a0-9875-777316b01975\") " pod="kube-system/cilium-vp989" May 10 00:00:15.751749 kubelet[1748]: I0510 00:00:15.749346 1748 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1ad823bd-2bfa-47a0-9875-777316b01975-hubble-tls\") pod \"cilium-vp989\" (UID: \"1ad823bd-2bfa-47a0-9875-777316b01975\") " pod="kube-system/cilium-vp989" May 10 00:00:15.751846 kubelet[1748]: I0510 00:00:15.749389 1748 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94w62\" (UniqueName: \"kubernetes.io/projected/1ad823bd-2bfa-47a0-9875-777316b01975-kube-api-access-94w62\") pod \"cilium-vp989\" (UID: \"1ad823bd-2bfa-47a0-9875-777316b01975\") " pod="kube-system/cilium-vp989" May 10 00:00:16.360637 kubelet[1748]: E0510 00:00:16.360581 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:00:16.852657 kubelet[1748]: E0510 
00:00:16.852581 1748 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition May 10 00:00:16.852780 kubelet[1748]: E0510 00:00:16.852690 1748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a1c955f9-dfab-42c0-a33c-8bca3f45a15f-cilium-config-path podName:a1c955f9-dfab-42c0-a33c-8bca3f45a15f nodeName:}" failed. No retries permitted until 2025-05-10 00:00:17.352667462 +0000 UTC m=+57.839369415 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/a1c955f9-dfab-42c0-a33c-8bca3f45a15f-cilium-config-path") pod "cilium-operator-599987898-qwv2q" (UID: "a1c955f9-dfab-42c0-a33c-8bca3f45a15f") : failed to sync configmap cache: timed out waiting for the condition May 10 00:00:16.852954 kubelet[1748]: E0510 00:00:16.852587 1748 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition May 10 00:00:16.852954 kubelet[1748]: E0510 00:00:16.852933 1748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1ad823bd-2bfa-47a0-9875-777316b01975-cilium-config-path podName:1ad823bd-2bfa-47a0-9875-777316b01975 nodeName:}" failed. No retries permitted until 2025-05-10 00:00:17.352916135 +0000 UTC m=+57.839618088 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/1ad823bd-2bfa-47a0-9875-777316b01975-cilium-config-path") pod "cilium-vp989" (UID: "1ad823bd-2bfa-47a0-9875-777316b01975") : failed to sync configmap cache: timed out waiting for the condition May 10 00:00:17.365134 kubelet[1748]: E0510 00:00:17.365081 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:00:17.383282 kubelet[1748]: E0510 00:00:17.383012 1748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:00:17.383627 containerd[1432]: time="2025-05-10T00:00:17.383532745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-qwv2q,Uid:a1c955f9-dfab-42c0-a33c-8bca3f45a15f,Namespace:kube-system,Attempt:0,}" May 10 00:00:17.401683 containerd[1432]: time="2025-05-10T00:00:17.401585512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:00:17.401683 containerd[1432]: time="2025-05-10T00:00:17.401643830Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:00:17.401683 containerd[1432]: time="2025-05-10T00:00:17.401656590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:00:17.401861 containerd[1432]: time="2025-05-10T00:00:17.401737188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:00:17.404978 kubelet[1748]: E0510 00:00:17.402735 1748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:00:17.405107 containerd[1432]: time="2025-05-10T00:00:17.403340346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vp989,Uid:1ad823bd-2bfa-47a0-9875-777316b01975,Namespace:kube-system,Attempt:0,}" May 10 00:00:17.423591 systemd[1]: Started cri-containerd-49573bb46e89a7b78a97bb6b8719dd95c14baa56a3fb6b7ac88e531c9bdd69ce.scope - libcontainer container 49573bb46e89a7b78a97bb6b8719dd95c14baa56a3fb6b7ac88e531c9bdd69ce. May 10 00:00:17.442631 containerd[1432]: time="2025-05-10T00:00:17.442485919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:00:17.442631 containerd[1432]: time="2025-05-10T00:00:17.442588716Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:00:17.443274 containerd[1432]: time="2025-05-10T00:00:17.442796351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:00:17.443537 containerd[1432]: time="2025-05-10T00:00:17.443472493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:00:17.465560 systemd[1]: Started cri-containerd-5e9386c349adcfc5703e015cba7c79c2147487a45dd73ace6fbd383eec71b125.scope - libcontainer container 5e9386c349adcfc5703e015cba7c79c2147487a45dd73ace6fbd383eec71b125. 
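The MountVolume.SetUp failures at 00:00:16.852 ("failed to sync configmap cache: timed out waiting for the condition") are consistent with the forbidden cilium-config watch logged at 00:00:15.575: the node authorizer had not yet associated node 10.0.0.100 with the new pods, so the ConfigMap could not be cached before the mount timed out. The nestedpendingoperations entry encodes when the next attempt is allowed; a small check of that arithmetic using only the logged values (500ms backoff, and the m=+57.839369415 monotonic offset as a rough way to place the kubelet's start time):

from datetime import datetime, timedelta

# Values taken from the nestedpendingoperations entry above.
retry_at = datetime.fromisoformat("2025-05-10 00:00:17.352667462"[:26])  # truncated to microseconds
backoff = timedelta(milliseconds=500)                                    # durationBeforeRetry 500ms

print("mount attempt failed at ", retry_at - backoff)   # ~00:00:16.852667, matching the E0510 timestamps
print("kubelet started around  ", retry_at - timedelta(seconds=57.839369415))  # ~23:59:19.5 on May 9, from the m=+ offset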
May 10 00:00:17.466598 containerd[1432]: time="2025-05-10T00:00:17.466239016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-qwv2q,Uid:a1c955f9-dfab-42c0-a33c-8bca3f45a15f,Namespace:kube-system,Attempt:0,} returns sandbox id \"49573bb46e89a7b78a97bb6b8719dd95c14baa56a3fb6b7ac88e531c9bdd69ce\"" May 10 00:00:17.467073 kubelet[1748]: E0510 00:00:17.467039 1748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:00:17.468351 containerd[1432]: time="2025-05-10T00:00:17.467960331Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 10 00:00:17.494661 containerd[1432]: time="2025-05-10T00:00:17.494617912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vp989,Uid:1ad823bd-2bfa-47a0-9875-777316b01975,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e9386c349adcfc5703e015cba7c79c2147487a45dd73ace6fbd383eec71b125\"" May 10 00:00:17.495632 kubelet[1748]: E0510 00:00:17.495571 1748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:00:17.502819 containerd[1432]: time="2025-05-10T00:00:17.502773978Z" level=info msg="CreateContainer within sandbox \"5e9386c349adcfc5703e015cba7c79c2147487a45dd73ace6fbd383eec71b125\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 10 00:00:17.595806 containerd[1432]: time="2025-05-10T00:00:17.595752299Z" level=info msg="CreateContainer within sandbox \"5e9386c349adcfc5703e015cba7c79c2147487a45dd73ace6fbd383eec71b125\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3541b876ed344b21a9dbb213c3196b4352f1eef759eb9911639971e5654acb8d\"" May 10 00:00:17.596519 containerd[1432]: time="2025-05-10T00:00:17.596486040Z" level=info msg="StartContainer for \"3541b876ed344b21a9dbb213c3196b4352f1eef759eb9911639971e5654acb8d\"" May 10 00:00:17.621547 systemd[1]: Started cri-containerd-3541b876ed344b21a9dbb213c3196b4352f1eef759eb9911639971e5654acb8d.scope - libcontainer container 3541b876ed344b21a9dbb213c3196b4352f1eef759eb9911639971e5654acb8d. May 10 00:00:17.642052 containerd[1432]: time="2025-05-10T00:00:17.642009886Z" level=info msg="StartContainer for \"3541b876ed344b21a9dbb213c3196b4352f1eef759eb9911639971e5654acb8d\" returns successfully" May 10 00:00:17.708852 systemd[1]: cri-containerd-3541b876ed344b21a9dbb213c3196b4352f1eef759eb9911639971e5654acb8d.scope: Deactivated successfully. 
May 10 00:00:17.739184 containerd[1432]: time="2025-05-10T00:00:17.739109419Z" level=info msg="shim disconnected" id=3541b876ed344b21a9dbb213c3196b4352f1eef759eb9911639971e5654acb8d namespace=k8s.io May 10 00:00:17.739184 containerd[1432]: time="2025-05-10T00:00:17.739165338Z" level=warning msg="cleaning up after shim disconnected" id=3541b876ed344b21a9dbb213c3196b4352f1eef759eb9911639971e5654acb8d namespace=k8s.io May 10 00:00:17.739184 containerd[1432]: time="2025-05-10T00:00:17.739174938Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 10 00:00:17.749241 containerd[1432]: time="2025-05-10T00:00:17.749188195Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:00:17Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 10 00:00:18.365808 kubelet[1748]: E0510 00:00:18.365764 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:00:18.621821 kubelet[1748]: E0510 00:00:18.621336 1748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:00:18.623777 containerd[1432]: time="2025-05-10T00:00:18.623715538Z" level=info msg="CreateContainer within sandbox \"5e9386c349adcfc5703e015cba7c79c2147487a45dd73ace6fbd383eec71b125\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 10 00:00:18.636018 containerd[1432]: time="2025-05-10T00:00:18.635877908Z" level=info msg="CreateContainer within sandbox \"5e9386c349adcfc5703e015cba7c79c2147487a45dd73ace6fbd383eec71b125\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2fe4fd7cdefedeaee497658286b4a153aafed68677fad169ef038f025c8ff4fc\"" May 10 00:00:18.636424 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount469129416.mount: Deactivated successfully. May 10 00:00:18.636720 containerd[1432]: time="2025-05-10T00:00:18.636529332Z" level=info msg="StartContainer for \"2fe4fd7cdefedeaee497658286b4a153aafed68677fad169ef038f025c8ff4fc\"" May 10 00:00:18.667587 systemd[1]: Started cri-containerd-2fe4fd7cdefedeaee497658286b4a153aafed68677fad169ef038f025c8ff4fc.scope - libcontainer container 2fe4fd7cdefedeaee497658286b4a153aafed68677fad169ef038f025c8ff4fc. May 10 00:00:18.692788 containerd[1432]: time="2025-05-10T00:00:18.692729418Z" level=info msg="StartContainer for \"2fe4fd7cdefedeaee497658286b4a153aafed68677fad169ef038f025c8ff4fc\" returns successfully" May 10 00:00:18.716045 systemd[1]: cri-containerd-2fe4fd7cdefedeaee497658286b4a153aafed68677fad169ef038f025c8ff4fc.scope: Deactivated successfully. 
May 10 00:00:18.738570 containerd[1432]: time="2025-05-10T00:00:18.738500411Z" level=info msg="shim disconnected" id=2fe4fd7cdefedeaee497658286b4a153aafed68677fad169ef038f025c8ff4fc namespace=k8s.io May 10 00:00:18.738570 containerd[1432]: time="2025-05-10T00:00:18.738564969Z" level=warning msg="cleaning up after shim disconnected" id=2fe4fd7cdefedeaee497658286b4a153aafed68677fad169ef038f025c8ff4fc namespace=k8s.io May 10 00:00:18.738570 containerd[1432]: time="2025-05-10T00:00:18.738574089Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 10 00:00:19.366230 kubelet[1748]: E0510 00:00:19.366159 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:00:19.391367 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2fe4fd7cdefedeaee497658286b4a153aafed68677fad169ef038f025c8ff4fc-rootfs.mount: Deactivated successfully. May 10 00:00:19.624917 kubelet[1748]: E0510 00:00:19.624702 1748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:00:19.626931 containerd[1432]: time="2025-05-10T00:00:19.626794424Z" level=info msg="CreateContainer within sandbox \"5e9386c349adcfc5703e015cba7c79c2147487a45dd73ace6fbd383eec71b125\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 10 00:00:19.645554 containerd[1432]: time="2025-05-10T00:00:19.645496120Z" level=info msg="CreateContainer within sandbox \"5e9386c349adcfc5703e015cba7c79c2147487a45dd73ace6fbd383eec71b125\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3a63bb6ad0cb2056f9f11813f7583e7dd731bca3435523592fc3f535017b1795\"" May 10 00:00:19.647731 containerd[1432]: time="2025-05-10T00:00:19.646227542Z" level=info msg="StartContainer for \"3a63bb6ad0cb2056f9f11813f7583e7dd731bca3435523592fc3f535017b1795\"" May 10 00:00:19.684571 systemd[1]: Started cri-containerd-3a63bb6ad0cb2056f9f11813f7583e7dd731bca3435523592fc3f535017b1795.scope - libcontainer container 3a63bb6ad0cb2056f9f11813f7583e7dd731bca3435523592fc3f535017b1795. May 10 00:00:19.756923 systemd[1]: cri-containerd-3a63bb6ad0cb2056f9f11813f7583e7dd731bca3435523592fc3f535017b1795.scope: Deactivated successfully. 
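Each Cilium init step follows the same pattern here: CreateContainer returns an id, StartContainer runs it, the cri-containerd scope is deactivated when the short-lived process exits, and containerd logs "shim disconnected" / "cleaning up dead shim" while reaping it (the "failed to remove runc container ... exit status 255" warning is part of that cleanup rather than a failure of the step). A sketch that pairs each StartContainer confirmation with a later deactivation of the matching scope to estimate how long a step ran; steps whose scope is reaped before the start confirmation is flushed, as happens for a couple of entries in this excerpt, are simply skipped. File and function names are illustrative:

import re
from datetime import datetime

ENTRY_SPLIT = re.compile(r"(?=May \d+ \d{2}:\d{2}:\d{2}\.\d+ )")
START_RE = re.compile(r'StartContainer for \\?"([0-9a-f]{64})\\?" returns successfully')
STOP_RE = re.compile(r"cri-containerd-([0-9a-f]{64})\.scope: Deactivated successfully")

def init_step_durations(journal_text, day="2025-05-10"):
    """Rough runtime per container id: journal timestamp of the StartContainer
    confirmation to the timestamp of the matching scope deactivation."""
    started, stopped = {}, {}
    for entry in ENTRY_SPLIT.split(journal_text):
        ts_match = re.match(r"May \d+ (\d{2}:\d{2}:\d{2}\.\d{6})", entry)
        if not ts_match:
            continue
        ts = datetime.fromisoformat(f"{day}T{ts_match.group(1)}")
        s, d = START_RE.search(entry), STOP_RE.search(entry)
        if s:
            started[s.group(1)] = ts
        if d and d.group(1) in started:
            stopped[d.group(1)] = ts
    return {cid: stopped[cid] - started[cid] for cid in stopped}

if __name__ == "__main__":
    for cid, dur in init_step_durations(open("kubelet.journal.txt").read()).items():
        print(cid[:12], dur)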
May 10 00:00:19.757095 containerd[1432]: time="2025-05-10T00:00:19.757057433Z" level=info msg="StartContainer for \"3a63bb6ad0cb2056f9f11813f7583e7dd731bca3435523592fc3f535017b1795\" returns successfully" May 10 00:00:19.830296 containerd[1432]: time="2025-05-10T00:00:19.830227898Z" level=info msg="shim disconnected" id=3a63bb6ad0cb2056f9f11813f7583e7dd731bca3435523592fc3f535017b1795 namespace=k8s.io May 10 00:00:19.830296 containerd[1432]: time="2025-05-10T00:00:19.830289737Z" level=warning msg="cleaning up after shim disconnected" id=3a63bb6ad0cb2056f9f11813f7583e7dd731bca3435523592fc3f535017b1795 namespace=k8s.io May 10 00:00:19.830296 containerd[1432]: time="2025-05-10T00:00:19.830298816Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 10 00:00:19.878838 containerd[1432]: time="2025-05-10T00:00:19.878713656Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:19.879139 containerd[1432]: time="2025-05-10T00:00:19.879088566Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" May 10 00:00:19.879983 containerd[1432]: time="2025-05-10T00:00:19.879935385Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:19.881446 containerd[1432]: time="2025-05-10T00:00:19.881406309Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.41338706s" May 10 00:00:19.881446 containerd[1432]: time="2025-05-10T00:00:19.881445388Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 10 00:00:19.883659 containerd[1432]: time="2025-05-10T00:00:19.883543296Z" level=info msg="CreateContainer within sandbox \"49573bb46e89a7b78a97bb6b8719dd95c14baa56a3fb6b7ac88e531c9bdd69ce\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 10 00:00:19.905159 containerd[1432]: time="2025-05-10T00:00:19.905090721Z" level=info msg="CreateContainer within sandbox \"49573bb46e89a7b78a97bb6b8719dd95c14baa56a3fb6b7ac88e531c9bdd69ce\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9d193913ca3714405020784d9b137a27a3e4ecad2a93f5035722ee9b9e69c5f7\"" May 10 00:00:19.905837 containerd[1432]: time="2025-05-10T00:00:19.905598109Z" level=info msg="StartContainer for \"9d193913ca3714405020784d9b137a27a3e4ecad2a93f5035722ee9b9e69c5f7\"" May 10 00:00:19.940620 systemd[1]: Started cri-containerd-9d193913ca3714405020784d9b137a27a3e4ecad2a93f5035722ee9b9e69c5f7.scope - libcontainer container 9d193913ca3714405020784d9b137a27a3e4ecad2a93f5035722ee9b9e69c5f7. 
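The ImageCreate / Pulled lines give enough to sanity-check the operator image pull: 17,135,306 bytes were read for quay.io/cilium/operator-generic@sha256:b296eb..., resolving to image id sha256:5935794... (size 17,128,551) in 2.41338706s, which is roughly the gap since the PullImage request at 00:00:17.467; the repo tag field is empty, as expected for a pull by digest. The resulting size and throughput, computed from the logged numbers:

bytes_read = 17_135_306        # "bytes read=17135306"
pull_seconds = 2.41338706      # "in 2.41338706s"

print(f"{bytes_read / 2**20:.1f} MiB pulled")                  # ~16.3 MiB
print(f"{bytes_read / pull_seconds / 1e6:.1f} MB/s average")   # ~7.1 MB/s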
May 10 00:00:19.967528 containerd[1432]: time="2025-05-10T00:00:19.967338377Z" level=info msg="StartContainer for \"9d193913ca3714405020784d9b137a27a3e4ecad2a93f5035722ee9b9e69c5f7\" returns successfully" May 10 00:00:20.322374 kubelet[1748]: E0510 00:00:20.322309 1748 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:00:20.335662 containerd[1432]: time="2025-05-10T00:00:20.335523192Z" level=info msg="StopPodSandbox for \"cc647c6cfdcb3a901ca3f9ab5f34638cde37c272935df568098f3cf136fc1611\"" May 10 00:00:20.335662 containerd[1432]: time="2025-05-10T00:00:20.335620590Z" level=info msg="TearDown network for sandbox \"cc647c6cfdcb3a901ca3f9ab5f34638cde37c272935df568098f3cf136fc1611\" successfully" May 10 00:00:20.335662 containerd[1432]: time="2025-05-10T00:00:20.335630389Z" level=info msg="StopPodSandbox for \"cc647c6cfdcb3a901ca3f9ab5f34638cde37c272935df568098f3cf136fc1611\" returns successfully" May 10 00:00:20.343457 containerd[1432]: time="2025-05-10T00:00:20.343400922Z" level=info msg="RemovePodSandbox for \"cc647c6cfdcb3a901ca3f9ab5f34638cde37c272935df568098f3cf136fc1611\"" May 10 00:00:20.343552 containerd[1432]: time="2025-05-10T00:00:20.343464120Z" level=info msg="Forcibly stopping sandbox \"cc647c6cfdcb3a901ca3f9ab5f34638cde37c272935df568098f3cf136fc1611\"" May 10 00:00:20.343552 containerd[1432]: time="2025-05-10T00:00:20.343528919Z" level=info msg="TearDown network for sandbox \"cc647c6cfdcb3a901ca3f9ab5f34638cde37c272935df568098f3cf136fc1611\" successfully" May 10 00:00:20.352571 containerd[1432]: time="2025-05-10T00:00:20.352523662Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cc647c6cfdcb3a901ca3f9ab5f34638cde37c272935df568098f3cf136fc1611\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 10 00:00:20.352666 containerd[1432]: time="2025-05-10T00:00:20.352586260Z" level=info msg="RemovePodSandbox \"cc647c6cfdcb3a901ca3f9ab5f34638cde37c272935df568098f3cf136fc1611\" returns successfully" May 10 00:00:20.367115 kubelet[1748]: E0510 00:00:20.367071 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:00:20.392407 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a63bb6ad0cb2056f9f11813f7583e7dd731bca3435523592fc3f535017b1795-rootfs.mount: Deactivated successfully. 
May 10 00:00:20.479055 kubelet[1748]: E0510 00:00:20.478930 1748 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 10 00:00:20.631299 kubelet[1748]: E0510 00:00:20.631174 1748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:00:20.635909 containerd[1432]: time="2025-05-10T00:00:20.635535274Z" level=info msg="CreateContainer within sandbox \"5e9386c349adcfc5703e015cba7c79c2147487a45dd73ace6fbd383eec71b125\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 10 00:00:20.636881 kubelet[1748]: E0510 00:00:20.636390 1748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:00:20.648347 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1486475011.mount: Deactivated successfully. May 10 00:00:20.649761 containerd[1432]: time="2025-05-10T00:00:20.649595015Z" level=info msg="CreateContainer within sandbox \"5e9386c349adcfc5703e015cba7c79c2147487a45dd73ace6fbd383eec71b125\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"15fb23070fe5e2b33f2994f61bdc2f00e714e94f0b38100e5f0d4a34529b8a43\"" May 10 00:00:20.650446 containerd[1432]: time="2025-05-10T00:00:20.650411435Z" level=info msg="StartContainer for \"15fb23070fe5e2b33f2994f61bdc2f00e714e94f0b38100e5f0d4a34529b8a43\"" May 10 00:00:20.658501 kubelet[1748]: I0510 00:00:20.658211 1748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-qwv2q" podStartSLOduration=3.243709095 podStartE2EDuration="5.658167928s" podCreationTimestamp="2025-05-10 00:00:15 +0000 UTC" firstStartedPulling="2025-05-10 00:00:17.467718657 +0000 UTC m=+57.954420610" lastFinishedPulling="2025-05-10 00:00:19.88217749 +0000 UTC m=+60.368879443" observedRunningTime="2025-05-10 00:00:20.656804321 +0000 UTC m=+61.143506234" watchObservedRunningTime="2025-05-10 00:00:20.658167928 +0000 UTC m=+61.144869881" May 10 00:00:20.681532 systemd[1]: Started cri-containerd-15fb23070fe5e2b33f2994f61bdc2f00e714e94f0b38100e5f0d4a34529b8a43.scope - libcontainer container 15fb23070fe5e2b33f2994f61bdc2f00e714e94f0b38100e5f0d4a34529b8a43. May 10 00:00:20.705910 systemd[1]: cri-containerd-15fb23070fe5e2b33f2994f61bdc2f00e714e94f0b38100e5f0d4a34529b8a43.scope: Deactivated successfully. 
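The pod_startup_latency_tracker entry for cilium-operator-599987898-qwv2q makes the accounting explicit: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). Reproducing the logged numbers:

# Offsets in seconds past 00:00:00 on 2025-05-10, straight from the tracker entry above.
created    = 15.0             # podCreationTimestamp  2025-05-10 00:00:15
first_pull = 17.467718657     # firstStartedPulling
last_pull  = 19.88217749      # lastFinishedPulling
running    = 20.658167928     # observedRunningTime

e2e = running - created                    # podStartE2EDuration
slo = e2e - (last_pull - first_pull)       # podStartSLOduration excludes the image-pull window
print(f"E2E {e2e:.9f}s  SLO {slo:.9f}s")   # ~5.658167928s and ~3.243709095s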
May 10 00:00:20.712740 containerd[1432]: time="2025-05-10T00:00:20.712699373Z" level=info msg="StartContainer for \"15fb23070fe5e2b33f2994f61bdc2f00e714e94f0b38100e5f0d4a34529b8a43\" returns successfully" May 10 00:00:20.719839 containerd[1432]: time="2025-05-10T00:00:20.710881776Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ad823bd_2bfa_47a0_9875_777316b01975.slice/cri-containerd-15fb23070fe5e2b33f2994f61bdc2f00e714e94f0b38100e5f0d4a34529b8a43.scope/memory.events\": no such file or directory" May 10 00:00:20.733810 containerd[1432]: time="2025-05-10T00:00:20.733742185Z" level=info msg="shim disconnected" id=15fb23070fe5e2b33f2994f61bdc2f00e714e94f0b38100e5f0d4a34529b8a43 namespace=k8s.io May 10 00:00:20.733810 containerd[1432]: time="2025-05-10T00:00:20.733795544Z" level=warning msg="cleaning up after shim disconnected" id=15fb23070fe5e2b33f2994f61bdc2f00e714e94f0b38100e5f0d4a34529b8a43 namespace=k8s.io May 10 00:00:20.733810 containerd[1432]: time="2025-05-10T00:00:20.733804423Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 10 00:00:21.369135 kubelet[1748]: E0510 00:00:21.367839 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:00:21.391539 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-15fb23070fe5e2b33f2994f61bdc2f00e714e94f0b38100e5f0d4a34529b8a43-rootfs.mount: Deactivated successfully. May 10 00:00:21.640497 kubelet[1748]: E0510 00:00:21.640396 1748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:00:21.640793 kubelet[1748]: E0510 00:00:21.640725 1748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:00:21.643374 containerd[1432]: time="2025-05-10T00:00:21.643326704Z" level=info msg="CreateContainer within sandbox \"5e9386c349adcfc5703e015cba7c79c2147487a45dd73ace6fbd383eec71b125\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 10 00:00:21.816045 kubelet[1748]: I0510 00:00:21.815946 1748 setters.go:580] "Node became not ready" node="10.0.0.100" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-10T00:00:21Z","lastTransitionTime":"2025-05-10T00:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 10 00:00:21.955264 containerd[1432]: time="2025-05-10T00:00:21.953628102Z" level=info msg="CreateContainer within sandbox \"5e9386c349adcfc5703e015cba7c79c2147487a45dd73ace6fbd383eec71b125\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bfb5c3a0693789371a67851b6cc291465236d028702765f7ada5e45e63fe2c7e\"" May 10 00:00:21.955264 containerd[1432]: time="2025-05-10T00:00:21.954792194Z" level=info msg="StartContainer for \"bfb5c3a0693789371a67851b6cc291465236d028702765f7ada5e45e63fe2c7e\"" May 10 00:00:21.998622 systemd[1]: Started cri-containerd-bfb5c3a0693789371a67851b6cc291465236d028702765f7ada5e45e63fe2c7e.scope - libcontainer container bfb5c3a0693789371a67851b6cc291465236d028702765f7ada5e45e63fe2c7e. 
May 10 00:00:22.050108 containerd[1432]: time="2025-05-10T00:00:22.050060910Z" level=info msg="StartContainer for \"bfb5c3a0693789371a67851b6cc291465236d028702765f7ada5e45e63fe2c7e\" returns successfully" May 10 00:00:22.331425 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) May 10 00:00:22.368806 kubelet[1748]: E0510 00:00:22.368768 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:00:22.644900 kubelet[1748]: E0510 00:00:22.644869 1748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:00:22.677392 kubelet[1748]: I0510 00:00:22.675802 1748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vp989" podStartSLOduration=7.675787464 podStartE2EDuration="7.675787464s" podCreationTimestamp="2025-05-10 00:00:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:00:22.675448952 +0000 UTC m=+63.162150905" watchObservedRunningTime="2025-05-10 00:00:22.675787464 +0000 UTC m=+63.162489417" May 10 00:00:23.369728 kubelet[1748]: E0510 00:00:23.369680 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:00:23.646633 kubelet[1748]: E0510 00:00:23.646553 1748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:00:24.370137 kubelet[1748]: E0510 00:00:24.370095 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:00:24.648136 kubelet[1748]: E0510 00:00:24.648099 1748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:00:25.197867 systemd-networkd[1373]: lxc_health: Link UP May 10 00:00:25.201924 systemd-networkd[1373]: lxc_health: Gained carrier May 10 00:00:25.370584 kubelet[1748]: E0510 00:00:25.370526 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:00:25.651280 kubelet[1748]: E0510 00:00:25.651022 1748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:00:26.371063 kubelet[1748]: E0510 00:00:26.371015 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:00:26.652159 kubelet[1748]: E0510 00:00:26.652126 1748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:00:27.263529 systemd-networkd[1373]: lxc_health: Gained IPv6LL May 10 00:00:27.371378 kubelet[1748]: E0510 00:00:27.371311 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:00:27.655150 kubelet[1748]: E0510 00:00:27.655115 1748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:00:28.371487 kubelet[1748]: E0510 00:00:28.371439 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:00:29.372340 kubelet[1748]: E0510 00:00:29.372270 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:00:30.373828 kubelet[1748]: E0510 00:00:30.373778 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:00:31.374968 kubelet[1748]: E0510 00:00:31.374898 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 00:00:32.375060 kubelet[1748]: E0510 00:00:32.374986 1748 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
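For cilium-vp989, reported once the cilium-agent container is up, both pulling timestamps are the zero value (no image pull was needed), so podStartSLOduration equals the end-to-end figure. The lxc_health interface that then comes up and gains carrier/IPv6LL is the health-check device the Cilium agent creates, a sign that the CNI datapath flagged as not ready earlier in the log is coming online. Checking the arithmetic:

# From the tracker entry for cilium-vp989: no image pull, so SLO and end-to-end startup coincide.
created, running = 15.0, 22.675787464     # seconds past 00:00:00, from podCreationTimestamp / observedRunningTime
print(f"cilium-vp989 startup: {running - created:.9f}s")   # ~7.675787464s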