May 9 23:52:29.016969 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] May 9 23:52:29.016991 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Fri May 9 22:24:49 -00 2025 May 9 23:52:29.017001 kernel: KASLR enabled May 9 23:52:29.017008 kernel: efi: EFI v2.7 by EDK II May 9 23:52:29.017014 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98 May 9 23:52:29.017020 kernel: random: crng init done May 9 23:52:29.017027 kernel: secureboot: Secure boot disabled May 9 23:52:29.017033 kernel: ACPI: Early table checksum verification disabled May 9 23:52:29.017039 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS ) May 9 23:52:29.017047 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013) May 9 23:52:29.017054 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) May 9 23:52:29.017060 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 9 23:52:29.017066 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) May 9 23:52:29.017072 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) May 9 23:52:29.017080 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 9 23:52:29.017088 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 9 23:52:29.017095 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 9 23:52:29.017102 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) May 9 23:52:29.017108 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 9 23:52:29.017115 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 May 9 23:52:29.017122 kernel: NUMA: Failed to initialise from firmware May 9 23:52:29.017128 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] May 9 23:52:29.017135 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff] May 9 23:52:29.017141 kernel: Zone ranges: May 9 23:52:29.017148 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] May 9 23:52:29.017156 kernel: DMA32 empty May 9 23:52:29.017163 kernel: Normal empty May 9 23:52:29.017169 kernel: Movable zone start for each node May 9 23:52:29.017176 kernel: Early memory node ranges May 9 23:52:29.017182 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] May 9 23:52:29.017189 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] May 9 23:52:29.017195 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] May 9 23:52:29.017202 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] May 9 23:52:29.017208 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] May 9 23:52:29.017215 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] May 9 23:52:29.017221 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] May 9 23:52:29.017228 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] May 9 23:52:29.017235 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges May 9 23:52:29.017242 kernel: psci: probing for conduit method from ACPI. May 9 23:52:29.017248 kernel: psci: PSCIv1.1 detected in firmware. 
May 9 23:52:29.017257 kernel: psci: Using standard PSCI v0.2 function IDs May 9 23:52:29.017264 kernel: psci: Trusted OS migration not required May 9 23:52:29.017271 kernel: psci: SMC Calling Convention v1.1 May 9 23:52:29.017279 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) May 9 23:52:29.017286 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 May 9 23:52:29.017293 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 May 9 23:52:29.017300 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 May 9 23:52:29.017306 kernel: Detected PIPT I-cache on CPU0 May 9 23:52:29.017314 kernel: CPU features: detected: GIC system register CPU interface May 9 23:52:29.017320 kernel: CPU features: detected: Hardware dirty bit management May 9 23:52:29.017327 kernel: CPU features: detected: Spectre-v4 May 9 23:52:29.017334 kernel: CPU features: detected: Spectre-BHB May 9 23:52:29.017341 kernel: CPU features: kernel page table isolation forced ON by KASLR May 9 23:52:29.017350 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 9 23:52:29.017356 kernel: CPU features: detected: ARM erratum 1418040 May 9 23:52:29.017363 kernel: CPU features: detected: SSBS not fully self-synchronizing May 9 23:52:29.017369 kernel: alternatives: applying boot alternatives May 9 23:52:29.017377 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9a99b6d651f8aeb5d7bfd4370bc36449b7e5138d2f42e40e0aede009df00f5a4 May 9 23:52:29.017384 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 9 23:52:29.017391 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 9 23:52:29.017404 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 9 23:52:29.017412 kernel: Fallback order for Node 0: 0 May 9 23:52:29.017419 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 May 9 23:52:29.017425 kernel: Policy zone: DMA May 9 23:52:29.017434 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 9 23:52:29.017441 kernel: software IO TLB: area num 4. May 9 23:52:29.017447 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) May 9 23:52:29.017455 kernel: Memory: 2386260K/2572288K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39744K init, 897K bss, 186028K reserved, 0K cma-reserved) May 9 23:52:29.017462 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 9 23:52:29.017469 kernel: rcu: Preemptible hierarchical RCU implementation. May 9 23:52:29.017476 kernel: rcu: RCU event tracing is enabled. May 9 23:52:29.017483 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 9 23:52:29.017490 kernel: Trampoline variant of Tasks RCU enabled. May 9 23:52:29.017497 kernel: Tracing variant of Tasks RCU enabled. May 9 23:52:29.017504 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 9 23:52:29.017511 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 9 23:52:29.017520 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 9 23:52:29.017527 kernel: GICv3: 256 SPIs implemented May 9 23:52:29.017534 kernel: GICv3: 0 Extended SPIs implemented May 9 23:52:29.017541 kernel: Root IRQ handler: gic_handle_irq May 9 23:52:29.017548 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI May 9 23:52:29.017555 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 May 9 23:52:29.017562 kernel: ITS [mem 0x08080000-0x0809ffff] May 9 23:52:29.017569 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) May 9 23:52:29.017576 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) May 9 23:52:29.017583 kernel: GICv3: using LPI property table @0x00000000400f0000 May 9 23:52:29.017590 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 May 9 23:52:29.017598 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 9 23:52:29.017605 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 9 23:52:29.017612 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 9 23:52:29.017619 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 9 23:52:29.017626 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 9 23:52:29.017634 kernel: arm-pv: using stolen time PV May 9 23:52:29.017641 kernel: Console: colour dummy device 80x25 May 9 23:52:29.017648 kernel: ACPI: Core revision 20230628 May 9 23:52:29.017655 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 9 23:52:29.017663 kernel: pid_max: default: 32768 minimum: 301 May 9 23:52:29.017671 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 9 23:52:29.017678 kernel: landlock: Up and running. May 9 23:52:29.017685 kernel: SELinux: Initializing. May 9 23:52:29.017693 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 9 23:52:29.017804 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 9 23:52:29.017812 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3) May 9 23:52:29.017820 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 9 23:52:29.017827 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 9 23:52:29.017834 kernel: rcu: Hierarchical SRCU implementation. May 9 23:52:29.017845 kernel: rcu: Max phase no-delay instances is 400. May 9 23:52:29.017853 kernel: Platform MSI: ITS@0x8080000 domain created May 9 23:52:29.017862 kernel: PCI/MSI: ITS@0x8080000 domain created May 9 23:52:29.017869 kernel: Remapping and enabling EFI services. May 9 23:52:29.017876 kernel: smp: Bringing up secondary CPUs ... 
May 9 23:52:29.017883 kernel: Detected PIPT I-cache on CPU1 May 9 23:52:29.017890 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 May 9 23:52:29.017897 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 May 9 23:52:29.017905 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 9 23:52:29.017912 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 9 23:52:29.017921 kernel: Detected PIPT I-cache on CPU2 May 9 23:52:29.017928 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 May 9 23:52:29.017941 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 May 9 23:52:29.017949 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 9 23:52:29.017957 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] May 9 23:52:29.017964 kernel: Detected PIPT I-cache on CPU3 May 9 23:52:29.017972 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 May 9 23:52:29.017979 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 May 9 23:52:29.017987 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 9 23:52:29.017995 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] May 9 23:52:29.018004 kernel: smp: Brought up 1 node, 4 CPUs May 9 23:52:29.018012 kernel: SMP: Total of 4 processors activated. May 9 23:52:29.018019 kernel: CPU features: detected: 32-bit EL0 Support May 9 23:52:29.018028 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 9 23:52:29.018036 kernel: CPU features: detected: Common not Private translations May 9 23:52:29.018043 kernel: CPU features: detected: CRC32 instructions May 9 23:52:29.018051 kernel: CPU features: detected: Enhanced Virtualization Traps May 9 23:52:29.018060 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 9 23:52:29.018067 kernel: CPU features: detected: LSE atomic instructions May 9 23:52:29.018075 kernel: CPU features: detected: Privileged Access Never May 9 23:52:29.018082 kernel: CPU features: detected: RAS Extension Support May 9 23:52:29.018090 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 9 23:52:29.018097 kernel: CPU: All CPU(s) started at EL1 May 9 23:52:29.018105 kernel: alternatives: applying system-wide alternatives May 9 23:52:29.018112 kernel: devtmpfs: initialized May 9 23:52:29.018120 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 9 23:52:29.018129 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 9 23:52:29.018137 kernel: pinctrl core: initialized pinctrl subsystem May 9 23:52:29.018144 kernel: SMBIOS 3.0.0 present. 
May 9 23:52:29.018151 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 May 9 23:52:29.018159 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 9 23:52:29.018166 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 9 23:52:29.018174 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 9 23:52:29.018181 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 9 23:52:29.018188 kernel: audit: initializing netlink subsys (disabled) May 9 23:52:29.018197 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1 May 9 23:52:29.018204 kernel: thermal_sys: Registered thermal governor 'step_wise' May 9 23:52:29.018212 kernel: cpuidle: using governor menu May 9 23:52:29.018219 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. May 9 23:52:29.018226 kernel: ASID allocator initialised with 32768 entries May 9 23:52:29.018234 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 9 23:52:29.018241 kernel: Serial: AMBA PL011 UART driver May 9 23:52:29.018249 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL May 9 23:52:29.018256 kernel: Modules: 0 pages in range for non-PLT usage May 9 23:52:29.018265 kernel: Modules: 508944 pages in range for PLT usage May 9 23:52:29.018272 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 9 23:52:29.018279 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page May 9 23:52:29.018287 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages May 9 23:52:29.018294 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page May 9 23:52:29.018301 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 9 23:52:29.018309 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page May 9 23:52:29.018316 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages May 9 23:52:29.018323 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page May 9 23:52:29.018333 kernel: ACPI: Added _OSI(Module Device) May 9 23:52:29.018340 kernel: ACPI: Added _OSI(Processor Device) May 9 23:52:29.018347 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 9 23:52:29.018355 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 9 23:52:29.018362 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 9 23:52:29.018370 kernel: ACPI: Interpreter enabled May 9 23:52:29.018378 kernel: ACPI: Using GIC for interrupt routing May 9 23:52:29.018385 kernel: ACPI: MCFG table detected, 1 entries May 9 23:52:29.018393 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA May 9 23:52:29.018409 kernel: printk: console [ttyAMA0] enabled May 9 23:52:29.018417 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 9 23:52:29.018591 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 9 23:52:29.018673 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] May 9 23:52:29.018770 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] May 9 23:52:29.018840 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 May 9 23:52:29.018909 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] May 9 23:52:29.018923 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] May 9 23:52:29.018931 kernel: PCI host bridge to bus 0000:00 May 9 
23:52:29.019004 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] May 9 23:52:29.019064 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] May 9 23:52:29.019132 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] May 9 23:52:29.019191 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 9 23:52:29.019278 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 May 9 23:52:29.019357 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 May 9 23:52:29.019435 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] May 9 23:52:29.019503 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] May 9 23:52:29.019573 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] May 9 23:52:29.019641 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] May 9 23:52:29.019718 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] May 9 23:52:29.019789 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] May 9 23:52:29.019855 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] May 9 23:52:29.019915 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] May 9 23:52:29.019978 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] May 9 23:52:29.019991 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 May 9 23:52:29.019998 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 May 9 23:52:29.020006 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 May 9 23:52:29.020014 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 May 9 23:52:29.020023 kernel: iommu: Default domain type: Translated May 9 23:52:29.020031 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 9 23:52:29.020038 kernel: efivars: Registered efivars operations May 9 23:52:29.020045 kernel: vgaarb: loaded May 9 23:52:29.020053 kernel: clocksource: Switched to clocksource arch_sys_counter May 9 23:52:29.020061 kernel: VFS: Disk quotas dquot_6.6.0 May 9 23:52:29.020068 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 9 23:52:29.020076 kernel: pnp: PnP ACPI init May 9 23:52:29.020149 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved May 9 23:52:29.020162 kernel: pnp: PnP ACPI: found 1 devices May 9 23:52:29.020170 kernel: NET: Registered PF_INET protocol family May 9 23:52:29.020177 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 9 23:52:29.020185 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 9 23:52:29.020192 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 9 23:52:29.020200 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 9 23:52:29.020208 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 9 23:52:29.020215 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 9 23:52:29.020223 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 9 23:52:29.020232 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 9 23:52:29.020240 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 9 23:52:29.020247 kernel: PCI: CLS 0 bytes, default 64 May 9 23:52:29.020255 kernel: kvm [1]: HYP mode not available May 9 23:52:29.020262 kernel: Initialise system trusted keyrings May 9 23:52:29.020269 
kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 9 23:52:29.020277 kernel: Key type asymmetric registered May 9 23:52:29.020284 kernel: Asymmetric key parser 'x509' registered May 9 23:52:29.020292 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 9 23:52:29.020301 kernel: io scheduler mq-deadline registered May 9 23:52:29.020308 kernel: io scheduler kyber registered May 9 23:52:29.020315 kernel: io scheduler bfq registered May 9 23:52:29.020323 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 9 23:52:29.020330 kernel: ACPI: button: Power Button [PWRB] May 9 23:52:29.020338 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 9 23:52:29.020413 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) May 9 23:52:29.020424 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 9 23:52:29.020431 kernel: thunder_xcv, ver 1.0 May 9 23:52:29.020441 kernel: thunder_bgx, ver 1.0 May 9 23:52:29.020448 kernel: nicpf, ver 1.0 May 9 23:52:29.020456 kernel: nicvf, ver 1.0 May 9 23:52:29.020533 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 9 23:52:29.020602 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-09T23:52:28 UTC (1746834748) May 9 23:52:29.020612 kernel: hid: raw HID events driver (C) Jiri Kosina May 9 23:52:29.020619 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 9 23:52:29.020627 kernel: watchdog: Delayed init of the lockup detector failed: -19 May 9 23:52:29.020637 kernel: watchdog: Hard watchdog permanently disabled May 9 23:52:29.020645 kernel: NET: Registered PF_INET6 protocol family May 9 23:52:29.020653 kernel: Segment Routing with IPv6 May 9 23:52:29.020663 kernel: In-situ OAM (IOAM) with IPv6 May 9 23:52:29.020672 kernel: NET: Registered PF_PACKET protocol family May 9 23:52:29.020682 kernel: Key type dns_resolver registered May 9 23:52:29.020691 kernel: registered taskstats version 1 May 9 23:52:29.020716 kernel: Loading compiled-in X.509 certificates May 9 23:52:29.020724 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: ce481d22c53070871912748985d4044dfd149966' May 9 23:52:29.020734 kernel: Key type .fscrypt registered May 9 23:52:29.020741 kernel: Key type fscrypt-provisioning registered May 9 23:52:29.020749 kernel: ima: No TPM chip found, activating TPM-bypass! May 9 23:52:29.020756 kernel: ima: Allocated hash algorithm: sha1 May 9 23:52:29.020764 kernel: ima: No architecture policies found May 9 23:52:29.020771 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 9 23:52:29.020778 kernel: clk: Disabling unused clocks May 9 23:52:29.020786 kernel: Freeing unused kernel memory: 39744K May 9 23:52:29.020794 kernel: Run /init as init process May 9 23:52:29.020805 kernel: with arguments: May 9 23:52:29.020813 kernel: /init May 9 23:52:29.020821 kernel: with environment: May 9 23:52:29.020828 kernel: HOME=/ May 9 23:52:29.020835 kernel: TERM=linux May 9 23:52:29.020842 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 9 23:52:29.020852 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 9 23:52:29.020861 systemd[1]: Detected virtualization kvm. 
May 9 23:52:29.020871 systemd[1]: Detected architecture arm64. May 9 23:52:29.020879 systemd[1]: Running in initrd. May 9 23:52:29.020887 systemd[1]: No hostname configured, using default hostname. May 9 23:52:29.020894 systemd[1]: Hostname set to . May 9 23:52:29.020902 systemd[1]: Initializing machine ID from VM UUID. May 9 23:52:29.020910 systemd[1]: Queued start job for default target initrd.target. May 9 23:52:29.020918 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 9 23:52:29.020927 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 9 23:52:29.020938 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 9 23:52:29.020946 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 9 23:52:29.020954 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 9 23:52:29.020962 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 9 23:52:29.020972 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 9 23:52:29.020981 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 9 23:52:29.020992 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 9 23:52:29.021002 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 9 23:52:29.021012 systemd[1]: Reached target paths.target - Path Units. May 9 23:52:29.021021 systemd[1]: Reached target slices.target - Slice Units. May 9 23:52:29.021031 systemd[1]: Reached target swap.target - Swaps. May 9 23:52:29.021039 systemd[1]: Reached target timers.target - Timer Units. May 9 23:52:29.021047 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 9 23:52:29.021055 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 9 23:52:29.021063 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 9 23:52:29.021073 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 9 23:52:29.021081 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 9 23:52:29.021089 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 9 23:52:29.021098 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 9 23:52:29.021106 systemd[1]: Reached target sockets.target - Socket Units. May 9 23:52:29.021114 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 9 23:52:29.021122 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 9 23:52:29.021130 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 9 23:52:29.021140 systemd[1]: Starting systemd-fsck-usr.service... May 9 23:52:29.021148 systemd[1]: Starting systemd-journald.service - Journal Service... May 9 23:52:29.021156 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 9 23:52:29.021164 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 9 23:52:29.021172 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 9 23:52:29.021180 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
May 9 23:52:29.021188 systemd[1]: Finished systemd-fsck-usr.service. May 9 23:52:29.021199 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 9 23:52:29.021207 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 9 23:52:29.021216 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 9 23:52:29.021224 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 9 23:52:29.021233 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 9 23:52:29.021264 systemd-journald[238]: Collecting audit messages is disabled. May 9 23:52:29.021288 systemd-journald[238]: Journal started May 9 23:52:29.021309 systemd-journald[238]: Runtime Journal (/run/log/journal/c3784340aeac49c7a2a083cbe791a6b8) is 5.9M, max 47.3M, 41.4M free. May 9 23:52:29.011345 systemd-modules-load[240]: Inserted module 'overlay' May 9 23:52:29.026027 systemd[1]: Started systemd-journald.service - Journal Service. May 9 23:52:29.029740 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 9 23:52:29.030583 systemd-modules-load[240]: Inserted module 'br_netfilter' May 9 23:52:29.031598 kernel: Bridge firewalling registered May 9 23:52:29.035883 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 9 23:52:29.037390 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 9 23:52:29.041049 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 9 23:52:29.044015 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 9 23:52:29.045627 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 9 23:52:29.053950 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 9 23:52:29.055707 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 9 23:52:29.063691 dracut-cmdline[273]: dracut-dracut-053 May 9 23:52:29.066510 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 9 23:52:29.068761 dracut-cmdline[273]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9a99b6d651f8aeb5d7bfd4370bc36449b7e5138d2f42e40e0aede009df00f5a4 May 9 23:52:29.069315 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 9 23:52:29.118250 systemd-resolved[292]: Positive Trust Anchors: May 9 23:52:29.118409 systemd-resolved[292]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 9 23:52:29.118441 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 9 23:52:29.125655 systemd-resolved[292]: Defaulting to hostname 'linux'. May 9 23:52:29.128144 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 9 23:52:29.131815 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 9 23:52:29.157732 kernel: SCSI subsystem initialized May 9 23:52:29.162713 kernel: Loading iSCSI transport class v2.0-870. May 9 23:52:29.169724 kernel: iscsi: registered transport (tcp) May 9 23:52:29.183747 kernel: iscsi: registered transport (qla4xxx) May 9 23:52:29.183801 kernel: QLogic iSCSI HBA Driver May 9 23:52:29.232740 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 9 23:52:29.241854 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 9 23:52:29.260742 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 9 23:52:29.260812 kernel: device-mapper: uevent: version 1.0.3 May 9 23:52:29.260824 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 9 23:52:29.307735 kernel: raid6: neonx8 gen() 15763 MB/s May 9 23:52:29.324731 kernel: raid6: neonx4 gen() 15632 MB/s May 9 23:52:29.341726 kernel: raid6: neonx2 gen() 13267 MB/s May 9 23:52:29.358731 kernel: raid6: neonx1 gen() 10453 MB/s May 9 23:52:29.375723 kernel: raid6: int64x8 gen() 6934 MB/s May 9 23:52:29.392723 kernel: raid6: int64x4 gen() 7337 MB/s May 9 23:52:29.409724 kernel: raid6: int64x2 gen() 6120 MB/s May 9 23:52:29.426983 kernel: raid6: int64x1 gen() 5044 MB/s May 9 23:52:29.426999 kernel: raid6: using algorithm neonx8 gen() 15763 MB/s May 9 23:52:29.444941 kernel: raid6: .... xor() 11906 MB/s, rmw enabled May 9 23:52:29.444963 kernel: raid6: using neon recovery algorithm May 9 23:52:29.451217 kernel: xor: measuring software checksum speed May 9 23:52:29.451241 kernel: 8regs : 19788 MB/sec May 9 23:52:29.451251 kernel: 32regs : 19622 MB/sec May 9 23:52:29.451900 kernel: arm64_neon : 26857 MB/sec May 9 23:52:29.451914 kernel: xor: using function: arm64_neon (26857 MB/sec) May 9 23:52:29.511725 kernel: Btrfs loaded, zoned=no, fsverity=no May 9 23:52:29.523379 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 9 23:52:29.531907 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 9 23:52:29.545109 systemd-udevd[462]: Using default interface naming scheme 'v255'. May 9 23:52:29.550450 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 9 23:52:29.561914 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 9 23:52:29.575554 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation May 9 23:52:29.607989 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
May 9 23:52:29.616910 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 9 23:52:29.663914 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 9 23:52:29.673954 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 9 23:52:29.686222 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 9 23:52:29.688364 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 9 23:52:29.690283 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 9 23:52:29.693201 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 9 23:52:29.700887 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 9 23:52:29.713544 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues May 9 23:52:29.714215 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 9 23:52:29.718666 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 9 23:52:29.726932 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 9 23:52:29.726972 kernel: GPT:9289727 != 19775487 May 9 23:52:29.726986 kernel: GPT:Alternate GPT header not at the end of the disk. May 9 23:52:29.728125 kernel: GPT:9289727 != 19775487 May 9 23:52:29.728149 kernel: GPT: Use GNU Parted to correct GPT errors. May 9 23:52:29.728160 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 9 23:52:29.728996 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 9 23:52:29.729149 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 9 23:52:29.733480 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 9 23:52:29.735360 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 9 23:52:29.735551 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 9 23:52:29.738877 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 9 23:52:29.750731 kernel: BTRFS: device fsid 278061fd-7ea0-499f-a3bc-343431c2d8fa devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (509) May 9 23:52:29.750777 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (514) May 9 23:52:29.751997 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 9 23:52:29.765912 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 9 23:52:29.771335 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 9 23:52:29.776163 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 9 23:52:29.780305 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 9 23:52:29.781712 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 9 23:52:29.787878 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 9 23:52:29.804887 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 9 23:52:29.806947 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 9 23:52:29.814189 disk-uuid[551]: Primary Header is updated. 
May 9 23:52:29.814189 disk-uuid[551]: Secondary Entries is updated. May 9 23:52:29.814189 disk-uuid[551]: Secondary Header is updated. May 9 23:52:29.818722 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 9 23:52:29.837462 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 9 23:52:30.838851 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 9 23:52:30.838967 disk-uuid[552]: The operation has completed successfully. May 9 23:52:30.869738 systemd[1]: disk-uuid.service: Deactivated successfully. May 9 23:52:30.869838 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 9 23:52:30.891933 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 9 23:52:30.908295 sh[570]: Success May 9 23:52:30.931829 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 9 23:52:31.002346 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 9 23:52:31.006723 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 9 23:52:31.009435 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 9 23:52:31.023498 kernel: BTRFS info (device dm-0): first mount of filesystem 278061fd-7ea0-499f-a3bc-343431c2d8fa May 9 23:52:31.023535 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 9 23:52:31.023546 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 9 23:52:31.025474 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 9 23:52:31.025501 kernel: BTRFS info (device dm-0): using free space tree May 9 23:52:31.030915 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 9 23:52:31.032086 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 9 23:52:31.044887 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 9 23:52:31.046571 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 9 23:52:31.055429 kernel: BTRFS info (device vda6): first mount of filesystem 8d2a58d1-82bb-4bb8-8ae0-4baddd3cc4e0 May 9 23:52:31.055468 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 9 23:52:31.055479 kernel: BTRFS info (device vda6): using free space tree May 9 23:52:31.058831 kernel: BTRFS info (device vda6): auto enabling async discard May 9 23:52:31.066238 systemd[1]: mnt-oem.mount: Deactivated successfully. May 9 23:52:31.068133 kernel: BTRFS info (device vda6): last unmount of filesystem 8d2a58d1-82bb-4bb8-8ae0-4baddd3cc4e0 May 9 23:52:31.078490 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 9 23:52:31.084876 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 9 23:52:31.147439 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 9 23:52:31.155919 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
May 9 23:52:31.183936 ignition[670]: Ignition 2.20.0 May 9 23:52:31.183946 ignition[670]: Stage: fetch-offline May 9 23:52:31.184328 systemd-networkd[762]: lo: Link UP May 9 23:52:31.183978 ignition[670]: no configs at "/usr/lib/ignition/base.d" May 9 23:52:31.184332 systemd-networkd[762]: lo: Gained carrier May 9 23:52:31.183987 ignition[670]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 23:52:31.185075 systemd-networkd[762]: Enumeration completed May 9 23:52:31.184152 ignition[670]: parsed url from cmdline: "" May 9 23:52:31.185186 systemd[1]: Started systemd-networkd.service - Network Configuration. May 9 23:52:31.184154 ignition[670]: no config URL provided May 9 23:52:31.185481 systemd-networkd[762]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 23:52:31.184161 ignition[670]: reading system config file "/usr/lib/ignition/user.ign" May 9 23:52:31.185485 systemd-networkd[762]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 9 23:52:31.184169 ignition[670]: no config at "/usr/lib/ignition/user.ign" May 9 23:52:31.186360 systemd-networkd[762]: eth0: Link UP May 9 23:52:31.184195 ignition[670]: op(1): [started] loading QEMU firmware config module May 9 23:52:31.186363 systemd-networkd[762]: eth0: Gained carrier May 9 23:52:31.184200 ignition[670]: op(1): executing: "modprobe" "qemu_fw_cfg" May 9 23:52:31.186369 systemd-networkd[762]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 23:52:31.195015 ignition[670]: op(1): [finished] loading QEMU firmware config module May 9 23:52:31.186865 systemd[1]: Reached target network.target - Network. May 9 23:52:31.208742 systemd-networkd[762]: eth0: DHCPv4 address 10.0.0.76/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 9 23:52:31.210830 ignition[670]: parsing config with SHA512: 07da6f2fe57164d01f4f62168feca660b2e8c84e345a93a0048eb41efd175609ae4e6011d0fa28865008e467394bb01fccb841dd834083ba6d96fa49f3c325ee May 9 23:52:31.214207 unknown[670]: fetched base config from "system" May 9 23:52:31.214217 unknown[670]: fetched user config from "qemu" May 9 23:52:31.214490 ignition[670]: fetch-offline: fetch-offline passed May 9 23:52:31.215997 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 9 23:52:31.214560 ignition[670]: Ignition finished successfully May 9 23:52:31.218062 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 9 23:52:31.230893 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 9 23:52:31.241479 ignition[769]: Ignition 2.20.0 May 9 23:52:31.241489 ignition[769]: Stage: kargs May 9 23:52:31.241642 ignition[769]: no configs at "/usr/lib/ignition/base.d" May 9 23:52:31.241652 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 23:52:31.244477 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 9 23:52:31.242345 ignition[769]: kargs: kargs passed May 9 23:52:31.242388 ignition[769]: Ignition finished successfully May 9 23:52:31.253898 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
May 9 23:52:31.263799 ignition[779]: Ignition 2.20.0 May 9 23:52:31.263809 ignition[779]: Stage: disks May 9 23:52:31.263967 ignition[779]: no configs at "/usr/lib/ignition/base.d" May 9 23:52:31.263976 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 23:52:31.265989 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 9 23:52:31.264633 ignition[779]: disks: disks passed May 9 23:52:31.267589 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 9 23:52:31.264676 ignition[779]: Ignition finished successfully May 9 23:52:31.269524 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 9 23:52:31.271668 systemd[1]: Reached target local-fs.target - Local File Systems. May 9 23:52:31.273264 systemd[1]: Reached target sysinit.target - System Initialization. May 9 23:52:31.275336 systemd[1]: Reached target basic.target - Basic System. May 9 23:52:31.286878 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 9 23:52:31.300739 systemd-fsck[789]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 9 23:52:31.305189 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 9 23:52:31.320857 systemd[1]: Mounting sysroot.mount - /sysroot... May 9 23:52:31.361715 kernel: EXT4-fs (vda9): mounted filesystem caef9e74-1f21-4595-8586-7560f5103527 r/w with ordered data mode. Quota mode: none. May 9 23:52:31.362031 systemd[1]: Mounted sysroot.mount - /sysroot. May 9 23:52:31.363304 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 9 23:52:31.386864 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 9 23:52:31.389503 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 9 23:52:31.390513 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 9 23:52:31.390554 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 9 23:52:31.390577 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 9 23:52:31.396850 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 9 23:52:31.399773 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 9 23:52:31.404738 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (797) May 9 23:52:31.404778 kernel: BTRFS info (device vda6): first mount of filesystem 8d2a58d1-82bb-4bb8-8ae0-4baddd3cc4e0 May 9 23:52:31.404789 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 9 23:52:31.405603 kernel: BTRFS info (device vda6): using free space tree May 9 23:52:31.409722 kernel: BTRFS info (device vda6): auto enabling async discard May 9 23:52:31.411276 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 9 23:52:31.448681 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory May 9 23:52:31.453497 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory May 9 23:52:31.457652 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory May 9 23:52:31.461846 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory May 9 23:52:31.543004 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
May 9 23:52:31.553817 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 9 23:52:31.556101 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 9 23:52:31.560709 kernel: BTRFS info (device vda6): last unmount of filesystem 8d2a58d1-82bb-4bb8-8ae0-4baddd3cc4e0 May 9 23:52:31.578296 ignition[911]: INFO : Ignition 2.20.0 May 9 23:52:31.578296 ignition[911]: INFO : Stage: mount May 9 23:52:31.579946 ignition[911]: INFO : no configs at "/usr/lib/ignition/base.d" May 9 23:52:31.579946 ignition[911]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 23:52:31.579946 ignition[911]: INFO : mount: mount passed May 9 23:52:31.579946 ignition[911]: INFO : Ignition finished successfully May 9 23:52:31.579361 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 9 23:52:31.581112 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 9 23:52:31.593816 systemd[1]: Starting ignition-files.service - Ignition (files)... May 9 23:52:32.022377 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 9 23:52:32.038907 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 9 23:52:32.044716 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (925) May 9 23:52:32.046949 kernel: BTRFS info (device vda6): first mount of filesystem 8d2a58d1-82bb-4bb8-8ae0-4baddd3cc4e0 May 9 23:52:32.046967 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 9 23:52:32.046977 kernel: BTRFS info (device vda6): using free space tree May 9 23:52:32.049720 kernel: BTRFS info (device vda6): auto enabling async discard May 9 23:52:32.051011 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 9 23:52:32.075934 ignition[942]: INFO : Ignition 2.20.0 May 9 23:52:32.075934 ignition[942]: INFO : Stage: files May 9 23:52:32.077667 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d" May 9 23:52:32.077667 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 23:52:32.077667 ignition[942]: DEBUG : files: compiled without relabeling support, skipping May 9 23:52:32.081149 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 9 23:52:32.081149 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 9 23:52:32.081149 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 9 23:52:32.081149 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 9 23:52:32.081149 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 9 23:52:32.081149 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" May 9 23:52:32.080165 unknown[942]: wrote ssh authorized keys file for user: core May 9 23:52:32.090648 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" May 9 23:52:32.090648 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" May 9 23:52:32.090648 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 9 23:52:32.090648 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 9 23:52:32.090648 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 9 23:52:32.090648 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 9 23:52:32.090648 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 May 9 23:52:32.384370 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK May 9 23:52:32.668261 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 9 23:52:32.668261 ignition[942]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" May 9 23:52:32.672171 ignition[942]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 9 23:52:32.672171 ignition[942]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 9 23:52:32.672171 ignition[942]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" May 9 23:52:32.672171 ignition[942]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" May 9 23:52:32.699133 ignition[942]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" May 9 23:52:32.703273 ignition[942]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 9 23:52:32.704825 ignition[942]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" May 9 23:52:32.704825 ignition[942]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" May 9 23:52:32.704825 ignition[942]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" May 9 23:52:32.704825 ignition[942]: INFO : files: files passed May 9 23:52:32.704825 ignition[942]: INFO : Ignition finished successfully May 9 23:52:32.706524 systemd[1]: Finished ignition-files.service - Ignition (files). May 9 23:52:32.718938 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 9 23:52:32.720928 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 9 23:52:32.723592 systemd[1]: ignition-quench.service: Deactivated successfully. May 9 23:52:32.723688 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
May 9 23:52:32.729612 initrd-setup-root-after-ignition[970]: grep: /sysroot/oem/oem-release: No such file or directory May 9 23:52:32.733202 initrd-setup-root-after-ignition[972]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 9 23:52:32.733202 initrd-setup-root-after-ignition[972]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 9 23:52:32.736444 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 9 23:52:32.737780 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 9 23:52:32.739715 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 9 23:52:32.747879 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 9 23:52:32.770953 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 9 23:52:32.772055 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 9 23:52:32.773576 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 9 23:52:32.775489 systemd[1]: Reached target initrd.target - Initrd Default Target. May 9 23:52:32.777435 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 9 23:52:32.778333 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 9 23:52:32.796766 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 9 23:52:32.812943 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 9 23:52:32.820943 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 9 23:52:32.822230 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 9 23:52:32.824326 systemd[1]: Stopped target timers.target - Timer Units. May 9 23:52:32.826149 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 9 23:52:32.826281 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 9 23:52:32.828754 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 9 23:52:32.830752 systemd[1]: Stopped target basic.target - Basic System. May 9 23:52:32.832409 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 9 23:52:32.834137 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 9 23:52:32.836089 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 9 23:52:32.838124 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 9 23:52:32.840022 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 9 23:52:32.842018 systemd[1]: Stopped target sysinit.target - System Initialization. May 9 23:52:32.844017 systemd[1]: Stopped target local-fs.target - Local File Systems. May 9 23:52:32.845763 systemd[1]: Stopped target swap.target - Swaps. May 9 23:52:32.847351 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 9 23:52:32.847494 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 9 23:52:32.849786 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 9 23:52:32.851873 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 9 23:52:32.853851 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
May 9 23:52:32.854773 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 9 23:52:32.856059 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 9 23:52:32.856177 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 9 23:52:32.858955 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 9 23:52:32.859064 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 9 23:52:32.861152 systemd[1]: Stopped target paths.target - Path Units. May 9 23:52:32.862661 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 9 23:52:32.862777 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 9 23:52:32.864825 systemd[1]: Stopped target slices.target - Slice Units. May 9 23:52:32.866659 systemd[1]: Stopped target sockets.target - Socket Units. May 9 23:52:32.868310 systemd[1]: iscsid.socket: Deactivated successfully. May 9 23:52:32.868418 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 9 23:52:32.870169 systemd[1]: iscsiuio.socket: Deactivated successfully. May 9 23:52:32.870255 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 9 23:52:32.872447 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 9 23:52:32.872566 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 9 23:52:32.874328 systemd[1]: ignition-files.service: Deactivated successfully. May 9 23:52:32.874442 systemd[1]: Stopped ignition-files.service - Ignition (files). May 9 23:52:32.882904 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 9 23:52:32.884635 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 9 23:52:32.884811 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 9 23:52:32.887527 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 9 23:52:32.888425 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 9 23:52:32.888552 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 9 23:52:32.890791 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 9 23:52:32.890895 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 9 23:52:32.896769 ignition[998]: INFO : Ignition 2.20.0 May 9 23:52:32.896769 ignition[998]: INFO : Stage: umount May 9 23:52:32.896769 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" May 9 23:52:32.896769 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 23:52:32.896507 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 9 23:52:32.909544 ignition[998]: INFO : umount: umount passed May 9 23:52:32.909544 ignition[998]: INFO : Ignition finished successfully May 9 23:52:32.896599 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 9 23:52:32.899147 systemd[1]: ignition-mount.service: Deactivated successfully. May 9 23:52:32.899239 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 9 23:52:32.901688 systemd[1]: Stopped target network.target - Network. May 9 23:52:32.903630 systemd[1]: ignition-disks.service: Deactivated successfully. May 9 23:52:32.903788 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 9 23:52:32.906612 systemd[1]: ignition-kargs.service: Deactivated successfully. 
May 9 23:52:32.906673 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 9 23:52:32.910617 systemd[1]: ignition-setup.service: Deactivated successfully. May 9 23:52:32.910663 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 9 23:52:32.912369 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 9 23:52:32.912430 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 9 23:52:32.914290 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 9 23:52:32.918812 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 9 23:52:32.920890 systemd-networkd[762]: eth0: DHCPv6 lease lost May 9 23:52:32.921830 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 9 23:52:32.922341 systemd[1]: systemd-networkd.service: Deactivated successfully. May 9 23:52:32.922458 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 9 23:52:32.924431 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 9 23:52:32.924487 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 9 23:52:32.934867 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 9 23:52:32.936377 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 9 23:52:32.936470 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 9 23:52:32.938614 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 9 23:52:32.941689 systemd[1]: systemd-resolved.service: Deactivated successfully. May 9 23:52:32.941863 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 9 23:52:32.946233 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 9 23:52:32.946300 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 9 23:52:32.947805 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 9 23:52:32.947856 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 9 23:52:32.950734 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 9 23:52:32.950787 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 9 23:52:32.955152 systemd[1]: network-cleanup.service: Deactivated successfully. May 9 23:52:32.955251 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 9 23:52:32.959285 systemd[1]: systemd-udevd.service: Deactivated successfully. May 9 23:52:32.959416 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 9 23:52:32.962065 systemd[1]: sysroot-boot.service: Deactivated successfully. May 9 23:52:32.962157 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 9 23:52:32.964845 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 9 23:52:32.964896 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 9 23:52:32.966929 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 9 23:52:32.966974 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 9 23:52:32.968771 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 9 23:52:32.968827 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 9 23:52:32.971480 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 9 23:52:32.971533 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
May 9 23:52:32.974469 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 9 23:52:32.974520 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 9 23:52:32.977314 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 9 23:52:32.977359 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 9 23:52:32.984861 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 9 23:52:32.986377 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 9 23:52:32.986452 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 9 23:52:32.988484 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 9 23:52:32.988530 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 9 23:52:32.991045 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 9 23:52:32.991130 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 9 23:52:32.993142 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 9 23:52:32.995566 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 9 23:52:33.007197 systemd[1]: Switching root. May 9 23:52:33.038315 systemd-journald[238]: Journal stopped May 9 23:52:33.751467 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). May 9 23:52:33.751528 kernel: SELinux: policy capability network_peer_controls=1 May 9 23:52:33.751542 kernel: SELinux: policy capability open_perms=1 May 9 23:52:33.751551 kernel: SELinux: policy capability extended_socket_class=1 May 9 23:52:33.751561 kernel: SELinux: policy capability always_check_network=0 May 9 23:52:33.751574 kernel: SELinux: policy capability cgroup_seclabel=1 May 9 23:52:33.751583 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 9 23:52:33.751595 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 9 23:52:33.751604 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 9 23:52:33.751614 kernel: audit: type=1403 audit(1746834753.158:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 9 23:52:33.751624 systemd[1]: Successfully loaded SELinux policy in 31.722ms. May 9 23:52:33.751645 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.458ms. May 9 23:52:33.751657 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 9 23:52:33.751668 systemd[1]: Detected virtualization kvm. May 9 23:52:33.751680 systemd[1]: Detected architecture arm64. May 9 23:52:33.751690 systemd[1]: Detected first boot. May 9 23:52:33.751713 systemd[1]: Initializing machine ID from VM UUID. May 9 23:52:33.751724 zram_generator::config[1043]: No configuration found. May 9 23:52:33.751735 systemd[1]: Populated /etc with preset unit settings. May 9 23:52:33.751746 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 9 23:52:33.751757 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 9 23:52:33.751768 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 9 23:52:33.751784 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. 
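Right after switch-root, systemd re-probes its environment (KVM virtualization, arm64, first boot) and loads the SELinux policy, as the entries above show. Below is a minimal Python sketch of querying the same facts from a running system; it assumes systemd-detect-virt is installed and selinuxfs is mounted at /sys/fs/selinux, and is illustrative only, not part of the boot flow recorded here.

    #!/usr/bin/env python3
    """Sketch: report virtualization, architecture and SELinux mode,
    mirroring the facts systemd logs just after switch-root."""
    import platform
    import subprocess

    def detect_virt() -> str:
        # systemd-detect-virt prints e.g. "kvm", or "none" (with a non-zero exit) on bare metal
        out = subprocess.run(["systemd-detect-virt"], capture_output=True, text=True)
        return out.stdout.strip() or "none"

    def selinux_enforcing() -> bool:
        try:
            with open("/sys/fs/selinux/enforce") as f:
                return f.read().strip() == "1"
        except FileNotFoundError:
            return False  # SELinux not enabled or selinuxfs not mounted

    if __name__ == "__main__":
        print("virtualization:", detect_virt())
        print("architecture:  ", platform.machine())
        print("selinux enforcing:", selinux_enforcing())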
May 9 23:52:33.751795 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 9 23:52:33.751805 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 9 23:52:33.751815 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 9 23:52:33.751825 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 9 23:52:33.751837 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 9 23:52:33.751850 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 9 23:52:33.751861 systemd[1]: Created slice user.slice - User and Session Slice. May 9 23:52:33.751871 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 9 23:52:33.751881 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 9 23:52:33.751892 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 9 23:52:33.751902 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 9 23:52:33.751913 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 9 23:52:33.751923 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 9 23:52:33.751935 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 9 23:52:33.751945 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 9 23:52:33.751955 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 9 23:52:33.751971 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 9 23:52:33.751981 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 9 23:52:33.751992 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 9 23:52:33.752002 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 9 23:52:33.752015 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 9 23:52:33.752026 systemd[1]: Reached target slices.target - Slice Units. May 9 23:52:33.752037 systemd[1]: Reached target swap.target - Swaps. May 9 23:52:33.752047 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 9 23:52:33.752058 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 9 23:52:33.752069 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 9 23:52:33.752080 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 9 23:52:33.752091 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 9 23:52:33.752102 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 9 23:52:33.752112 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 9 23:52:33.752125 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 9 23:52:33.752135 systemd[1]: Mounting media.mount - External Media Directory... May 9 23:52:33.752150 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 9 23:52:33.752160 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 9 23:52:33.752170 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
May 9 23:52:33.752181 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 9 23:52:33.752191 systemd[1]: Reached target machines.target - Containers. May 9 23:52:33.752201 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 9 23:52:33.752213 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 9 23:52:33.752224 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 9 23:52:33.752240 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 9 23:52:33.752250 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 9 23:52:33.752260 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 9 23:52:33.752270 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 9 23:52:33.752282 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 9 23:52:33.752293 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 9 23:52:33.752304 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 9 23:52:33.752317 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 9 23:52:33.752327 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 9 23:52:33.752338 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 9 23:52:33.752348 systemd[1]: Stopped systemd-fsck-usr.service. May 9 23:52:33.752358 kernel: loop: module loaded May 9 23:52:33.752372 kernel: fuse: init (API version 7.39) May 9 23:52:33.752388 kernel: ACPI: bus type drm_connector registered May 9 23:52:33.752399 systemd[1]: Starting systemd-journald.service - Journal Service... May 9 23:52:33.752410 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 9 23:52:33.752422 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 9 23:52:33.752433 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 9 23:52:33.752443 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 9 23:52:33.752454 systemd[1]: verity-setup.service: Deactivated successfully. May 9 23:52:33.752464 systemd[1]: Stopped verity-setup.service. May 9 23:52:33.752494 systemd-journald[1114]: Collecting audit messages is disabled. May 9 23:52:33.752515 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 9 23:52:33.752527 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 9 23:52:33.752538 systemd-journald[1114]: Journal started May 9 23:52:33.752559 systemd-journald[1114]: Runtime Journal (/run/log/journal/c3784340aeac49c7a2a083cbe791a6b8) is 5.9M, max 47.3M, 41.4M free. May 9 23:52:33.537278 systemd[1]: Queued start job for default target multi-user.target. May 9 23:52:33.549768 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 9 23:52:33.550149 systemd[1]: systemd-journald.service: Deactivated successfully. May 9 23:52:33.757033 systemd[1]: Started systemd-journald.service - Journal Service. May 9 23:52:33.757607 systemd[1]: Mounted media.mount - External Media Directory. 
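The journald startup line above reports the runtime journal under /run/log/journal together with its current size, cap, and free space. The following small sketch reads comparable figures after boot; it assumes journalctl is on PATH and that the runtime journal directory exists.

    #!/usr/bin/env python3
    """Sketch: journal disk footprint and free space on the backing filesystem."""
    import os
    import subprocess

    def journal_disk_usage() -> str:
        # journalctl --disk-usage prints the combined size of active and archived journals
        return subprocess.run(["journalctl", "--disk-usage"],
                              capture_output=True, text=True).stdout.strip()

    def free_mib(path: str = "/run/log/journal") -> float:
        st = os.statvfs(path)
        return st.f_bavail * st.f_frsize / (1024 * 1024)

    if __name__ == "__main__":
        print(journal_disk_usage())
        print(f"free space on backing filesystem: {free_mib():.1f} MiB")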
May 9 23:52:33.758793 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 9 23:52:33.760039 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 9 23:52:33.761323 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 9 23:52:33.762604 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 9 23:52:33.764087 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 9 23:52:33.766252 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 9 23:52:33.766549 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 9 23:52:33.768162 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 9 23:52:33.769748 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 9 23:52:33.771158 systemd[1]: modprobe@drm.service: Deactivated successfully. May 9 23:52:33.771285 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 9 23:52:33.773281 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 9 23:52:33.774738 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 9 23:52:33.776312 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 9 23:52:33.776605 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 9 23:52:33.778120 systemd[1]: modprobe@loop.service: Deactivated successfully. May 9 23:52:33.778363 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 9 23:52:33.780008 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 9 23:52:33.781500 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 9 23:52:33.783123 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 9 23:52:33.796275 systemd[1]: Reached target network-pre.target - Preparation for Network. May 9 23:52:33.806895 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 9 23:52:33.809187 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 9 23:52:33.810330 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 9 23:52:33.810370 systemd[1]: Reached target local-fs.target - Local File Systems. May 9 23:52:33.812425 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 9 23:52:33.814799 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 9 23:52:33.817107 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 9 23:52:33.818304 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 9 23:52:33.819822 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 9 23:52:33.822125 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 9 23:52:33.823314 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 9 23:52:33.826914 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 9 23:52:33.828254 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
May 9 23:52:33.829976 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 9 23:52:33.833945 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 9 23:52:33.836443 systemd-journald[1114]: Time spent on flushing to /var/log/journal/c3784340aeac49c7a2a083cbe791a6b8 is 17.337ms for 838 entries. May 9 23:52:33.836443 systemd-journald[1114]: System Journal (/var/log/journal/c3784340aeac49c7a2a083cbe791a6b8) is 8.0M, max 195.6M, 187.6M free. May 9 23:52:33.861507 systemd-journald[1114]: Received client request to flush runtime journal. May 9 23:52:33.836455 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 9 23:52:33.840895 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 9 23:52:33.843999 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 9 23:52:33.845492 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 9 23:52:33.847053 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 9 23:52:33.849176 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 9 23:52:33.855695 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 9 23:52:33.860986 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 9 23:52:33.864991 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 9 23:52:33.868803 kernel: loop0: detected capacity change from 0 to 116808 May 9 23:52:33.868186 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 9 23:52:33.880722 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 9 23:52:33.890248 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 9 23:52:33.893918 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 9 23:52:33.896611 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 9 23:52:33.902755 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 9 23:52:33.904234 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 9 23:52:33.908830 kernel: loop1: detected capacity change from 0 to 113536 May 9 23:52:33.911083 udevadm[1165]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 9 23:52:33.922140 systemd-tmpfiles[1171]: ACLs are not supported, ignoring. May 9 23:52:33.922162 systemd-tmpfiles[1171]: ACLs are not supported, ignoring. May 9 23:52:33.927135 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 9 23:52:33.935720 kernel: loop2: detected capacity change from 0 to 189592 May 9 23:52:33.995818 kernel: loop3: detected capacity change from 0 to 116808 May 9 23:52:34.001762 kernel: loop4: detected capacity change from 0 to 113536 May 9 23:52:34.007744 kernel: loop5: detected capacity change from 0 to 189592 May 9 23:52:34.014674 (sd-merge)[1178]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 9 23:52:34.015114 (sd-merge)[1178]: Merged extensions into '/usr'. May 9 23:52:34.025193 systemd[1]: Reloading requested from client PID 1154 ('systemd-sysext') (unit systemd-sysext.service)... May 9 23:52:34.025219 systemd[1]: Reloading... 
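The (sd-merge) entries above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr, which is what triggers the manager reload that follows. Here is a sketch of inspecting that state later on; it assumes the stock sysext search directories and the systemd-sysext status verb, so adjust the paths if your image stores extensions elsewhere.

    #!/usr/bin/env python3
    """Sketch: list system extension images and the hierarchies systemd-sysext has merged."""
    from pathlib import Path
    import subprocess

    # Commonly documented sysext search directories (assumption; trim to taste)
    EXTENSION_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def list_extension_images() -> list[str]:
        found: list[str] = []
        for d in EXTENSION_DIRS:
            p = Path(d)
            if p.is_dir():
                found += sorted(str(f) for f in p.iterdir())
        return found

    if __name__ == "__main__":
        for image in list_extension_images():
            print("extension image:", image)
        # "systemd-sysext status" prints the merged hierarchies and the extensions in use
        print(subprocess.run(["systemd-sysext", "status"],
                             capture_output=True, text=True).stdout)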
May 9 23:52:34.085192 zram_generator::config[1206]: No configuration found. May 9 23:52:34.135635 ldconfig[1149]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 9 23:52:34.173059 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 23:52:34.208245 systemd[1]: Reloading finished in 182 ms. May 9 23:52:34.240416 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 9 23:52:34.241987 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 9 23:52:34.263930 systemd[1]: Starting ensure-sysext.service... May 9 23:52:34.266127 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 9 23:52:34.278596 systemd[1]: Reloading requested from client PID 1240 ('systemctl') (unit ensure-sysext.service)... May 9 23:52:34.278617 systemd[1]: Reloading... May 9 23:52:34.287899 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 9 23:52:34.288170 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 9 23:52:34.288856 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 9 23:52:34.289072 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. May 9 23:52:34.289123 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. May 9 23:52:34.291480 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot. May 9 23:52:34.291494 systemd-tmpfiles[1241]: Skipping /boot May 9 23:52:34.299029 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot. May 9 23:52:34.299049 systemd-tmpfiles[1241]: Skipping /boot May 9 23:52:34.334733 zram_generator::config[1268]: No configuration found. May 9 23:52:34.423986 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 23:52:34.459568 systemd[1]: Reloading finished in 180 ms. May 9 23:52:34.474738 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 9 23:52:34.489158 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 9 23:52:34.497927 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 9 23:52:34.500749 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 9 23:52:34.503437 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 9 23:52:34.510175 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 9 23:52:34.515197 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 9 23:52:34.521126 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 9 23:52:34.524923 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 9 23:52:34.530640 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
May 9 23:52:34.539335 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 9 23:52:34.543820 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 9 23:52:34.545045 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 9 23:52:34.547886 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 9 23:52:34.550441 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 9 23:52:34.550652 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 9 23:52:34.553727 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 9 23:52:34.553889 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 9 23:52:34.556269 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 9 23:52:34.558907 systemd[1]: modprobe@loop.service: Deactivated successfully. May 9 23:52:34.559449 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 9 23:52:34.568291 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 9 23:52:34.572145 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 9 23:52:34.577252 systemd-udevd[1309]: Using default interface naming scheme 'v255'. May 9 23:52:34.578053 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 9 23:52:34.580556 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 9 23:52:34.587258 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 9 23:52:34.590007 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 9 23:52:34.591881 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 9 23:52:34.594047 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 9 23:52:34.594267 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 9 23:52:34.596323 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 9 23:52:34.597019 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 9 23:52:34.601326 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 9 23:52:34.603416 systemd[1]: modprobe@loop.service: Deactivated successfully. May 9 23:52:34.603585 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 9 23:52:34.605485 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 9 23:52:34.611222 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 9 23:52:34.620042 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 9 23:52:34.625711 augenrules[1356]: No rules May 9 23:52:34.626939 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 9 23:52:34.630265 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 9 23:52:34.631690 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
May 9 23:52:34.631782 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 9 23:52:34.632160 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 9 23:52:34.635984 systemd[1]: Finished ensure-sysext.service. May 9 23:52:34.637960 systemd[1]: audit-rules.service: Deactivated successfully. May 9 23:52:34.638146 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 9 23:52:34.639526 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 9 23:52:34.642289 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 9 23:52:34.642464 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 9 23:52:34.645036 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 9 23:52:34.645171 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 9 23:52:34.652441 systemd[1]: modprobe@drm.service: Deactivated successfully. May 9 23:52:34.652616 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 9 23:52:34.671083 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 9 23:52:34.678765 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1371) May 9 23:52:34.681128 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 9 23:52:34.683885 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 9 23:52:34.683966 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 9 23:52:34.688051 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 9 23:52:34.717293 systemd-resolved[1308]: Positive Trust Anchors: May 9 23:52:34.717694 systemd-resolved[1308]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 9 23:52:34.717794 systemd-resolved[1308]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 9 23:52:34.718993 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 9 23:52:34.729281 systemd-resolved[1308]: Defaulting to hostname 'linux'. May 9 23:52:34.731952 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 9 23:52:34.742928 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 9 23:52:34.744406 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 9 23:52:34.762748 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 9 23:52:34.764471 systemd[1]: Reached target time-set.target - System Time Set. 
May 9 23:52:34.766162 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 9 23:52:34.776299 systemd-networkd[1387]: lo: Link UP May 9 23:52:34.776308 systemd-networkd[1387]: lo: Gained carrier May 9 23:52:34.777228 systemd-networkd[1387]: Enumeration completed May 9 23:52:34.777387 systemd[1]: Started systemd-networkd.service - Network Configuration. May 9 23:52:34.777836 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 23:52:34.777845 systemd-networkd[1387]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 9 23:52:34.778537 systemd-networkd[1387]: eth0: Link UP May 9 23:52:34.778547 systemd-networkd[1387]: eth0: Gained carrier May 9 23:52:34.778561 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 23:52:34.779347 systemd[1]: Reached target network.target - Network. May 9 23:52:34.785974 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 9 23:52:34.791796 systemd-networkd[1387]: eth0: DHCPv4 address 10.0.0.76/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 9 23:52:34.795087 systemd-timesyncd[1388]: Network configuration changed, trying to establish connection. May 9 23:52:34.335993 systemd-timesyncd[1388]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 9 23:52:34.340720 systemd-journald[1114]: Time jumped backwards, rotating. May 9 23:52:34.336049 systemd-timesyncd[1388]: Initial clock synchronization to Fri 2025-05-09 23:52:34.335785 UTC. May 9 23:52:34.336088 systemd-resolved[1308]: Clock change detected. Flushing caches. May 9 23:52:34.359206 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 9 23:52:34.369229 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 9 23:52:34.375122 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 9 23:52:34.407600 lvm[1406]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 9 23:52:34.416263 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 9 23:52:34.448474 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 9 23:52:34.450137 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 9 23:52:34.451307 systemd[1]: Reached target sysinit.target - System Initialization. May 9 23:52:34.452539 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 9 23:52:34.453844 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 9 23:52:34.455376 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 9 23:52:34.456581 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 9 23:52:34.458098 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 9 23:52:34.459390 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 9 23:52:34.459431 systemd[1]: Reached target paths.target - Path Units. May 9 23:52:34.460341 systemd[1]: Reached target timers.target - Timer Units. 
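The "Time jumped backwards, rotating" message above is a consequence of systemd-timesyncd stepping the clock once it reached the NTP server at 10.0.0.1: journal timestamps briefly run backwards and systemd-resolved flushes its caches. The sketch below quantifies the step from the two timesyncd timestamps logged above, which works out to roughly half a second.

    #!/usr/bin/env python3
    """Sketch: size of the clock step behind the 'Time jumped backwards' journal rotation."""
    from datetime import datetime

    before_step = datetime.fromisoformat("2025-05-09 23:52:34.795087")  # last pre-sync entry
    after_step  = datetime.fromisoformat("2025-05-09 23:52:34.335993")  # first post-sync entry

    step = before_step - after_step
    print(f"clock stepped backwards by ~{step.total_seconds():.3f} s")  # ~0.459 s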
May 9 23:52:34.462153 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 9 23:52:34.464743 systemd[1]: Starting docker.socket - Docker Socket for the API... May 9 23:52:34.472979 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 9 23:52:34.475474 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 9 23:52:34.477210 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 9 23:52:34.478479 systemd[1]: Reached target sockets.target - Socket Units. May 9 23:52:34.479477 systemd[1]: Reached target basic.target - Basic System. May 9 23:52:34.480501 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 9 23:52:34.480546 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 9 23:52:34.481737 systemd[1]: Starting containerd.service - containerd container runtime... May 9 23:52:34.484119 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 9 23:52:34.485583 lvm[1413]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 9 23:52:34.487658 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 9 23:52:34.493616 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 9 23:52:34.494739 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 9 23:52:34.498431 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 9 23:52:34.500245 jq[1416]: false May 9 23:52:34.501095 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 9 23:52:34.506085 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 9 23:52:34.514089 systemd[1]: Starting systemd-logind.service - User Login Management... May 9 23:52:34.518867 extend-filesystems[1417]: Found loop3 May 9 23:52:34.518867 extend-filesystems[1417]: Found loop4 May 9 23:52:34.518867 extend-filesystems[1417]: Found loop5 May 9 23:52:34.518867 extend-filesystems[1417]: Found vda May 9 23:52:34.518867 extend-filesystems[1417]: Found vda1 May 9 23:52:34.518867 extend-filesystems[1417]: Found vda2 May 9 23:52:34.518867 extend-filesystems[1417]: Found vda3 May 9 23:52:34.518867 extend-filesystems[1417]: Found usr May 9 23:52:34.518867 extend-filesystems[1417]: Found vda4 May 9 23:52:34.518867 extend-filesystems[1417]: Found vda6 May 9 23:52:34.518867 extend-filesystems[1417]: Found vda7 May 9 23:52:34.518867 extend-filesystems[1417]: Found vda9 May 9 23:52:34.518867 extend-filesystems[1417]: Checking size of /dev/vda9 May 9 23:52:34.518533 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 9 23:52:34.544536 dbus-daemon[1415]: [system] SELinux support is enabled May 9 23:52:34.519064 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 9 23:52:34.522303 systemd[1]: Starting update-engine.service - Update Engine... May 9 23:52:34.526696 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 9 23:52:34.548532 jq[1433]: true May 9 23:52:34.530180 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
May 9 23:52:34.532771 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 9 23:52:34.533002 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 9 23:52:34.533798 systemd[1]: motdgen.service: Deactivated successfully. May 9 23:52:34.534005 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 9 23:52:34.535943 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 9 23:52:34.536098 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 9 23:52:34.544755 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 9 23:52:34.547683 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 9 23:52:34.547709 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 9 23:52:34.552581 extend-filesystems[1417]: Resized partition /dev/vda9 May 9 23:52:34.549420 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 9 23:52:34.549436 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 9 23:52:34.560893 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1363) May 9 23:52:34.570646 extend-filesystems[1447]: resize2fs 1.47.1 (20-May-2024) May 9 23:52:34.574853 (ntainerd)[1444]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 9 23:52:34.581859 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 9 23:52:34.584277 jq[1441]: true May 9 23:52:34.588995 update_engine[1430]: I20250509 23:52:34.585515 1430 main.cc:92] Flatcar Update Engine starting May 9 23:52:34.596080 update_engine[1430]: I20250509 23:52:34.594995 1430 update_check_scheduler.cc:74] Next update check in 6m31s May 9 23:52:34.596918 systemd[1]: Started update-engine.service - Update Engine. May 9 23:52:34.602857 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 9 23:52:34.613048 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 9 23:52:34.617210 systemd-logind[1424]: Watching system buttons on /dev/input/event0 (Power Button) May 9 23:52:34.617851 extend-filesystems[1447]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 9 23:52:34.617851 extend-filesystems[1447]: old_desc_blocks = 1, new_desc_blocks = 1 May 9 23:52:34.617851 extend-filesystems[1447]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 9 23:52:34.619342 systemd-logind[1424]: New seat seat0. May 9 23:52:34.629903 extend-filesystems[1417]: Resized filesystem in /dev/vda9 May 9 23:52:34.620318 systemd[1]: extend-filesystems.service: Deactivated successfully. May 9 23:52:34.620544 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 9 23:52:34.629613 systemd[1]: Started systemd-logind.service - User Login Management. May 9 23:52:34.642882 bash[1465]: Updated "/home/core/.ssh/authorized_keys" May 9 23:52:34.646771 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
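The extend-filesystems output above records resize2fs growing the root filesystem online from 553472 to 1864699 blocks at a 4 KiB block size. A short sketch of the arithmetic behind those numbers:

    #!/usr/bin/env python3
    """Sketch: convert the resize2fs block counts from the log into GiB."""
    BLOCK = 4096  # 4 KiB blocks, as reported by the kernel EXT4-fs messages
    old_blocks, new_blocks = 553_472, 1_864_699

    def gib(blocks: int) -> float:
        return blocks * BLOCK / 2**30

    print(f"before: {gib(old_blocks):.2f} GiB, after: {gib(new_blocks):.2f} GiB "
          f"(+{gib(new_blocks - old_blocks):.2f} GiB)")
    # before: 2.11 GiB, after: 7.11 GiB (+5.00 GiB)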
May 9 23:52:34.648628 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 9 23:52:34.672044 locksmithd[1455]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 9 23:52:34.690975 sshd_keygen[1434]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 9 23:52:34.710093 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 9 23:52:34.721177 systemd[1]: Starting issuegen.service - Generate /run/issue... May 9 23:52:34.727117 systemd[1]: issuegen.service: Deactivated successfully. May 9 23:52:34.728886 systemd[1]: Finished issuegen.service - Generate /run/issue. May 9 23:52:34.731970 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 9 23:52:34.745428 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 9 23:52:34.764212 systemd[1]: Started getty@tty1.service - Getty on tty1. May 9 23:52:34.766648 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 9 23:52:34.768006 systemd[1]: Reached target getty.target - Login Prompts. May 9 23:52:34.787542 containerd[1444]: time="2025-05-09T23:52:34.787438442Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 9 23:52:34.812159 containerd[1444]: time="2025-05-09T23:52:34.812100082Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 9 23:52:34.813603 containerd[1444]: time="2025-05-09T23:52:34.813540602Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 9 23:52:34.813603 containerd[1444]: time="2025-05-09T23:52:34.813574362Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 9 23:52:34.813603 containerd[1444]: time="2025-05-09T23:52:34.813590642Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 9 23:52:34.813778 containerd[1444]: time="2025-05-09T23:52:34.813756322Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 9 23:52:34.813802 containerd[1444]: time="2025-05-09T23:52:34.813781482Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 9 23:52:34.813876 containerd[1444]: time="2025-05-09T23:52:34.813858642Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 9 23:52:34.813899 containerd[1444]: time="2025-05-09T23:52:34.813877922Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 9 23:52:34.814061 containerd[1444]: time="2025-05-09T23:52:34.814040962Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 9 23:52:34.814080 containerd[1444]: time="2025-05-09T23:52:34.814062882Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 May 9 23:52:34.814097 containerd[1444]: time="2025-05-09T23:52:34.814076562Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 9 23:52:34.814097 containerd[1444]: time="2025-05-09T23:52:34.814086522Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 9 23:52:34.814178 containerd[1444]: time="2025-05-09T23:52:34.814163482Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 9 23:52:34.814391 containerd[1444]: time="2025-05-09T23:52:34.814357682Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 9 23:52:34.814468 containerd[1444]: time="2025-05-09T23:52:34.814453322Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 9 23:52:34.814492 containerd[1444]: time="2025-05-09T23:52:34.814471202Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 9 23:52:34.814575 containerd[1444]: time="2025-05-09T23:52:34.814559722Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 9 23:52:34.814620 containerd[1444]: time="2025-05-09T23:52:34.814608962Z" level=info msg="metadata content store policy set" policy=shared May 9 23:52:34.818903 containerd[1444]: time="2025-05-09T23:52:34.818870882Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 9 23:52:34.818986 containerd[1444]: time="2025-05-09T23:52:34.818930282Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 9 23:52:34.818986 containerd[1444]: time="2025-05-09T23:52:34.818946482Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 9 23:52:34.818986 containerd[1444]: time="2025-05-09T23:52:34.818962362Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 9 23:52:34.818986 containerd[1444]: time="2025-05-09T23:52:34.818975882Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 9 23:52:34.819129 containerd[1444]: time="2025-05-09T23:52:34.819109042Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 9 23:52:34.819887 containerd[1444]: time="2025-05-09T23:52:34.819374202Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 9 23:52:34.819887 containerd[1444]: time="2025-05-09T23:52:34.819534642Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 9 23:52:34.819887 containerd[1444]: time="2025-05-09T23:52:34.819552602Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 9 23:52:34.819887 containerd[1444]: time="2025-05-09T23:52:34.819569322Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 May 9 23:52:34.819887 containerd[1444]: time="2025-05-09T23:52:34.819584162Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 9 23:52:34.819887 containerd[1444]: time="2025-05-09T23:52:34.819599002Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 9 23:52:34.819887 containerd[1444]: time="2025-05-09T23:52:34.819611802Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 9 23:52:34.819887 containerd[1444]: time="2025-05-09T23:52:34.819625362Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 9 23:52:34.819887 containerd[1444]: time="2025-05-09T23:52:34.819639602Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 9 23:52:34.819887 containerd[1444]: time="2025-05-09T23:52:34.819652082Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 9 23:52:34.819887 containerd[1444]: time="2025-05-09T23:52:34.819664722Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 9 23:52:34.819887 containerd[1444]: time="2025-05-09T23:52:34.819675082Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 9 23:52:34.819887 containerd[1444]: time="2025-05-09T23:52:34.819695962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 9 23:52:34.819887 containerd[1444]: time="2025-05-09T23:52:34.819709562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 9 23:52:34.820305 containerd[1444]: time="2025-05-09T23:52:34.819722122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 9 23:52:34.820305 containerd[1444]: time="2025-05-09T23:52:34.819734682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 9 23:52:34.820305 containerd[1444]: time="2025-05-09T23:52:34.819749002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 9 23:52:34.820305 containerd[1444]: time="2025-05-09T23:52:34.819761722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 9 23:52:34.820305 containerd[1444]: time="2025-05-09T23:52:34.819773122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 9 23:52:34.820305 containerd[1444]: time="2025-05-09T23:52:34.819785722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 9 23:52:34.820305 containerd[1444]: time="2025-05-09T23:52:34.819798402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 9 23:52:34.820305 containerd[1444]: time="2025-05-09T23:52:34.819811842Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 9 23:52:34.820305 containerd[1444]: time="2025-05-09T23:52:34.819822682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 May 9 23:52:34.820305 containerd[1444]: time="2025-05-09T23:52:34.819855202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 9 23:52:34.820305 containerd[1444]: time="2025-05-09T23:52:34.819869282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 9 23:52:34.820305 containerd[1444]: time="2025-05-09T23:52:34.819884722Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 9 23:52:34.820305 containerd[1444]: time="2025-05-09T23:52:34.819905122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 9 23:52:34.820305 containerd[1444]: time="2025-05-09T23:52:34.819920802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 9 23:52:34.820305 containerd[1444]: time="2025-05-09T23:52:34.819932162Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 9 23:52:34.820557 containerd[1444]: time="2025-05-09T23:52:34.820118162Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 9 23:52:34.820557 containerd[1444]: time="2025-05-09T23:52:34.820137162Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 9 23:52:34.820557 containerd[1444]: time="2025-05-09T23:52:34.820148082Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 9 23:52:34.820557 containerd[1444]: time="2025-05-09T23:52:34.820161802Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 9 23:52:34.820557 containerd[1444]: time="2025-05-09T23:52:34.820171122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 9 23:52:34.820557 containerd[1444]: time="2025-05-09T23:52:34.820184402Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 9 23:52:34.820557 containerd[1444]: time="2025-05-09T23:52:34.820194962Z" level=info msg="NRI interface is disabled by configuration." May 9 23:52:34.820557 containerd[1444]: time="2025-05-09T23:52:34.820204322Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 9 23:52:34.820687 containerd[1444]: time="2025-05-09T23:52:34.820552842Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 9 23:52:34.820687 containerd[1444]: time="2025-05-09T23:52:34.820598882Z" level=info msg="Connect containerd service" May 9 23:52:34.820687 containerd[1444]: time="2025-05-09T23:52:34.820632842Z" level=info msg="using legacy CRI server" May 9 23:52:34.820687 containerd[1444]: time="2025-05-09T23:52:34.820639602Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 9 23:52:34.820920 containerd[1444]: time="2025-05-09T23:52:34.820881402Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 9 23:52:34.821557 containerd[1444]: time="2025-05-09T23:52:34.821519922Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 9 23:52:34.821808 
containerd[1444]: time="2025-05-09T23:52:34.821763602Z" level=info msg="Start subscribing containerd event" May 9 23:52:34.821847 containerd[1444]: time="2025-05-09T23:52:34.821813802Z" level=info msg="Start recovering state" May 9 23:52:34.822011 containerd[1444]: time="2025-05-09T23:52:34.821887362Z" level=info msg="Start event monitor" May 9 23:52:34.822011 containerd[1444]: time="2025-05-09T23:52:34.821907442Z" level=info msg="Start snapshots syncer" May 9 23:52:34.822011 containerd[1444]: time="2025-05-09T23:52:34.821916762Z" level=info msg="Start cni network conf syncer for default" May 9 23:52:34.822011 containerd[1444]: time="2025-05-09T23:52:34.821923642Z" level=info msg="Start streaming server" May 9 23:52:34.822140 containerd[1444]: time="2025-05-09T23:52:34.822097842Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 9 23:52:34.822168 containerd[1444]: time="2025-05-09T23:52:34.822152802Z" level=info msg=serving... address=/run/containerd/containerd.sock May 9 23:52:34.822218 containerd[1444]: time="2025-05-09T23:52:34.822204402Z" level=info msg="containerd successfully booted in 0.036287s" May 9 23:52:34.822359 systemd[1]: Started containerd.service - containerd container runtime. May 9 23:52:35.497061 systemd-networkd[1387]: eth0: Gained IPv6LL May 9 23:52:35.499665 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 9 23:52:35.501652 systemd[1]: Reached target network-online.target - Network is Online. May 9 23:52:35.514084 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 9 23:52:35.516548 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 23:52:35.518671 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 9 23:52:35.538953 systemd[1]: coreos-metadata.service: Deactivated successfully. May 9 23:52:35.539792 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 9 23:52:35.542969 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 9 23:52:35.545361 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 9 23:52:36.057296 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 23:52:36.059558 systemd[1]: Reached target multi-user.target - Multi-User System. May 9 23:52:36.061807 (kubelet)[1519]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 23:52:36.063891 systemd[1]: Startup finished in 666ms (kernel) + 4.402s (initrd) + 3.400s (userspace) = 8.469s. May 9 23:52:36.538494 kubelet[1519]: E0509 23:52:36.538386 1519 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 23:52:36.541153 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 23:52:36.541485 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 23:52:40.925575 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 9 23:52:40.927043 systemd[1]: Started sshd@0-10.0.0.76:22-10.0.0.1:34496.service - OpenSSH per-connection server daemon (10.0.0.1:34496). 
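Note on the first kubelet failure above: the unit starts before /var/lib/kubelet/config.yaml exists, so the process exits with status 1; the file only appears after the install steps run over SSH below, and the kubelet restart further down stays up. A minimal pre-flight sketch of the same existence check (the script and its messages are illustrative, not part of the kubelet):

import os
import sys

KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"  # path taken from the error message above

def main() -> int:
    # Mirror the failure mode in the log: exit non-zero when the --config file is absent.
    if not os.path.isfile(KUBELET_CONFIG):
        print(f"missing kubelet config: {KUBELET_CONFIG}", file=sys.stderr)
        return 1
    print(f"kubelet config present: {KUBELET_CONFIG}")
    return 0

if __name__ == "__main__":
    sys.exit(main())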
May 9 23:52:41.001032 sshd[1533]: Accepted publickey for core from 10.0.0.1 port 34496 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:52:41.003079 sshd-session[1533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:52:41.010989 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 9 23:52:41.020107 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 9 23:52:41.021923 systemd-logind[1424]: New session 1 of user core. May 9 23:52:41.030221 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 9 23:52:41.032495 systemd[1]: Starting user@500.service - User Manager for UID 500... May 9 23:52:41.039540 (systemd)[1537]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 9 23:52:41.133975 systemd[1537]: Queued start job for default target default.target. May 9 23:52:41.147909 systemd[1537]: Created slice app.slice - User Application Slice. May 9 23:52:41.147941 systemd[1537]: Reached target paths.target - Paths. May 9 23:52:41.147954 systemd[1537]: Reached target timers.target - Timers. May 9 23:52:41.149207 systemd[1537]: Starting dbus.socket - D-Bus User Message Bus Socket... May 9 23:52:41.160149 systemd[1537]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 9 23:52:41.160265 systemd[1537]: Reached target sockets.target - Sockets. May 9 23:52:41.160278 systemd[1537]: Reached target basic.target - Basic System. May 9 23:52:41.160314 systemd[1537]: Reached target default.target - Main User Target. May 9 23:52:41.160341 systemd[1537]: Startup finished in 115ms. May 9 23:52:41.160560 systemd[1]: Started user@500.service - User Manager for UID 500. May 9 23:52:41.161880 systemd[1]: Started session-1.scope - Session 1 of User core. May 9 23:52:41.225669 systemd[1]: Started sshd@1-10.0.0.76:22-10.0.0.1:34512.service - OpenSSH per-connection server daemon (10.0.0.1:34512). May 9 23:52:41.269301 sshd[1548]: Accepted publickey for core from 10.0.0.1 port 34512 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:52:41.270594 sshd-session[1548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:52:41.274933 systemd-logind[1424]: New session 2 of user core. May 9 23:52:41.281986 systemd[1]: Started session-2.scope - Session 2 of User core. May 9 23:52:41.334557 sshd[1550]: Connection closed by 10.0.0.1 port 34512 May 9 23:52:41.335046 sshd-session[1548]: pam_unix(sshd:session): session closed for user core May 9 23:52:41.345355 systemd[1]: sshd@1-10.0.0.76:22-10.0.0.1:34512.service: Deactivated successfully. May 9 23:52:41.346761 systemd[1]: session-2.scope: Deactivated successfully. May 9 23:52:41.349022 systemd-logind[1424]: Session 2 logged out. Waiting for processes to exit. May 9 23:52:41.350865 systemd[1]: Started sshd@2-10.0.0.76:22-10.0.0.1:34516.service - OpenSSH per-connection server daemon (10.0.0.1:34516). May 9 23:52:41.351572 systemd-logind[1424]: Removed session 2. May 9 23:52:41.400192 sshd[1555]: Accepted publickey for core from 10.0.0.1 port 34516 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:52:41.401500 sshd-session[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:52:41.408014 systemd-logind[1424]: New session 3 of user core. May 9 23:52:41.415988 systemd[1]: Started session-3.scope - Session 3 of User core. 
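The SHA256:Q9AE... strings in the sshd lines above are OpenSSH key fingerprints: the SHA-256 digest of the raw public-key blob, base64-encoded with trailing padding stripped. A minimal sketch of that computation; the key blob below is a stand-in for illustration, not the key behind the fingerprint in this log:

import base64
import hashlib

def ssh_sha256_fingerprint(pubkey_b64: str) -> str:
    # Decode the key blob (the second field of an authorized_keys line),
    # hash it with SHA-256, re-encode as base64, and drop the '=' padding.
    blob = base64.b64decode(pubkey_b64)
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# Hypothetical key material for illustration only.
fake_blob = base64.b64encode(b"not a real ssh public key blob").decode()
print(ssh_sha256_fingerprint(fake_blob))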
May 9 23:52:41.466489 sshd[1557]: Connection closed by 10.0.0.1 port 34516 May 9 23:52:41.466990 sshd-session[1555]: pam_unix(sshd:session): session closed for user core May 9 23:52:41.480453 systemd[1]: sshd@2-10.0.0.76:22-10.0.0.1:34516.service: Deactivated successfully. May 9 23:52:41.481795 systemd[1]: session-3.scope: Deactivated successfully. May 9 23:52:41.484276 systemd-logind[1424]: Session 3 logged out. Waiting for processes to exit. May 9 23:52:41.486581 systemd[1]: Started sshd@3-10.0.0.76:22-10.0.0.1:34528.service - OpenSSH per-connection server daemon (10.0.0.1:34528). May 9 23:52:41.487342 systemd-logind[1424]: Removed session 3. May 9 23:52:41.533226 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 34528 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:52:41.534629 sshd-session[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:52:41.538537 systemd-logind[1424]: New session 4 of user core. May 9 23:52:41.547013 systemd[1]: Started session-4.scope - Session 4 of User core. May 9 23:52:41.601714 sshd[1564]: Connection closed by 10.0.0.1 port 34528 May 9 23:52:41.602011 sshd-session[1562]: pam_unix(sshd:session): session closed for user core May 9 23:52:41.610229 systemd[1]: sshd@3-10.0.0.76:22-10.0.0.1:34528.service: Deactivated successfully. May 9 23:52:41.611572 systemd[1]: session-4.scope: Deactivated successfully. May 9 23:52:41.614480 systemd-logind[1424]: Session 4 logged out. Waiting for processes to exit. May 9 23:52:41.615678 systemd[1]: Started sshd@4-10.0.0.76:22-10.0.0.1:34530.service - OpenSSH per-connection server daemon (10.0.0.1:34530). May 9 23:52:41.616498 systemd-logind[1424]: Removed session 4. May 9 23:52:41.655506 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 34530 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:52:41.656814 sshd-session[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:52:41.661717 systemd-logind[1424]: New session 5 of user core. May 9 23:52:41.676047 systemd[1]: Started session-5.scope - Session 5 of User core. May 9 23:52:41.743066 sudo[1572]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 9 23:52:41.743339 sudo[1572]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 23:52:41.757701 sudo[1572]: pam_unix(sudo:session): session closed for user root May 9 23:52:41.762278 sshd[1571]: Connection closed by 10.0.0.1 port 34530 May 9 23:52:41.762170 sshd-session[1569]: pam_unix(sshd:session): session closed for user core May 9 23:52:41.773214 systemd[1]: sshd@4-10.0.0.76:22-10.0.0.1:34530.service: Deactivated successfully. May 9 23:52:41.774534 systemd[1]: session-5.scope: Deactivated successfully. May 9 23:52:41.782150 systemd-logind[1424]: Session 5 logged out. Waiting for processes to exit. May 9 23:52:41.782708 systemd[1]: Started sshd@5-10.0.0.76:22-10.0.0.1:34538.service - OpenSSH per-connection server daemon (10.0.0.1:34538). May 9 23:52:41.786718 systemd-logind[1424]: Removed session 5. May 9 23:52:41.824124 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 34538 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:52:41.825421 sshd-session[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:52:41.829203 systemd-logind[1424]: New session 6 of user core. May 9 23:52:41.840040 systemd[1]: Started session-6.scope - Session 6 of User core. 
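The sudo setenforce 1 above switches SELinux into enforcing mode on the running system. The current mode can be read back through selinuxfs; a minimal sketch, assuming selinuxfs is mounted at its conventional /sys/fs/selinux location:

from pathlib import Path

ENFORCE = Path("/sys/fs/selinux/enforce")  # node that setenforce writes

def selinux_mode() -> str:
    if not ENFORCE.exists():
        return "disabled (or selinuxfs not mounted)"
    return "enforcing" if ENFORCE.read_text().strip() == "1" else "permissive"

print(selinux_mode())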
May 9 23:52:41.891444 sudo[1581]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 9 23:52:41.891729 sudo[1581]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 23:52:41.895104 sudo[1581]: pam_unix(sudo:session): session closed for user root May 9 23:52:41.899755 sudo[1580]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 9 23:52:41.900032 sudo[1580]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 23:52:41.924196 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 9 23:52:41.947614 augenrules[1603]: No rules May 9 23:52:41.948750 systemd[1]: audit-rules.service: Deactivated successfully. May 9 23:52:41.949051 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 9 23:52:41.950407 sudo[1580]: pam_unix(sudo:session): session closed for user root May 9 23:52:41.952595 sshd[1579]: Connection closed by 10.0.0.1 port 34538 May 9 23:52:41.952947 sshd-session[1577]: pam_unix(sshd:session): session closed for user core May 9 23:52:41.963079 systemd[1]: sshd@5-10.0.0.76:22-10.0.0.1:34538.service: Deactivated successfully. May 9 23:52:41.965102 systemd[1]: session-6.scope: Deactivated successfully. May 9 23:52:41.967531 systemd-logind[1424]: Session 6 logged out. Waiting for processes to exit. May 9 23:52:41.984146 systemd[1]: Started sshd@6-10.0.0.76:22-10.0.0.1:34544.service - OpenSSH per-connection server daemon (10.0.0.1:34544). May 9 23:52:41.985890 systemd-logind[1424]: Removed session 6. May 9 23:52:42.022409 sshd[1611]: Accepted publickey for core from 10.0.0.1 port 34544 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:52:42.022797 sshd-session[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:52:42.026906 systemd-logind[1424]: New session 7 of user core. May 9 23:52:42.033011 systemd[1]: Started session-7.scope - Session 7 of User core. May 9 23:52:42.084010 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 9 23:52:42.084290 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 23:52:42.106262 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 9 23:52:42.123704 systemd[1]: coreos-metadata.service: Deactivated successfully. May 9 23:52:42.123938 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 9 23:52:42.620117 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 23:52:42.629187 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 23:52:42.665275 systemd[1]: Reloading requested from client PID 1655 ('systemctl') (unit session-7.scope)... May 9 23:52:42.665295 systemd[1]: Reloading... May 9 23:52:42.743869 zram_generator::config[1696]: No configuration found. May 9 23:52:42.940163 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 23:52:42.994269 systemd[1]: Reloading finished in 328 ms. May 9 23:52:43.032806 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 9 23:52:43.032934 systemd[1]: kubelet.service: Failed with result 'signal'. May 9 23:52:43.033945 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
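augenrules reports "No rules" above because the rule files sudo just deleted left /etc/audit/rules.d/ with no active rules; that directory is where augenrules assembles its rule set from *.rules fragments. A small sketch, assuming that layout, that reproduces the count:

from pathlib import Path

RULES_DIR = Path("/etc/audit/rules.d")  # directory emptied by the sudo rm above

def count_audit_rules(rules_dir: Path = RULES_DIR) -> int:
    # Count non-empty, non-comment lines across every *.rules fragment.
    total = 0
    for rules_file in sorted(rules_dir.glob("*.rules")):
        for line in rules_file.read_text().splitlines():
            line = line.strip()
            if line and not line.startswith("#"):
                total += 1
    return total

count = count_audit_rules()
print("No rules" if count == 0 else f"{count} rules")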
May 9 23:52:43.036224 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 23:52:43.148905 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 23:52:43.154373 (kubelet)[1739]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 9 23:52:43.194131 kubelet[1739]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 23:52:43.194131 kubelet[1739]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 9 23:52:43.194131 kubelet[1739]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 23:52:43.194131 kubelet[1739]: I0509 23:52:43.194077 1739 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 9 23:52:44.077452 kubelet[1739]: I0509 23:52:44.077386 1739 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 9 23:52:44.077452 kubelet[1739]: I0509 23:52:44.077431 1739 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 9 23:52:44.077722 kubelet[1739]: I0509 23:52:44.077693 1739 server.go:929] "Client rotation is on, will bootstrap in background" May 9 23:52:44.144630 kubelet[1739]: I0509 23:52:44.144426 1739 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 9 23:52:44.161945 kubelet[1739]: E0509 23:52:44.161886 1739 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 9 23:52:44.161945 kubelet[1739]: I0509 23:52:44.161937 1739 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 9 23:52:44.165595 kubelet[1739]: I0509 23:52:44.165562 1739 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 9 23:52:44.167073 kubelet[1739]: I0509 23:52:44.167042 1739 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 9 23:52:44.167265 kubelet[1739]: I0509 23:52:44.167218 1739 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 9 23:52:44.167447 kubelet[1739]: I0509 23:52:44.167261 1739 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.76","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 9 23:52:44.167882 kubelet[1739]: I0509 23:52:44.167866 1739 topology_manager.go:138] "Creating topology manager with none policy" May 9 23:52:44.167882 kubelet[1739]: I0509 23:52:44.167884 1739 container_manager_linux.go:300] "Creating device plugin manager" May 9 23:52:44.168206 kubelet[1739]: I0509 23:52:44.168182 1739 state_mem.go:36] "Initialized new in-memory state store" May 9 23:52:44.172635 kubelet[1739]: I0509 23:52:44.172597 1739 kubelet.go:408] "Attempting to sync node with API server" May 9 23:52:44.172667 kubelet[1739]: I0509 23:52:44.172644 1739 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 9 23:52:44.172720 kubelet[1739]: I0509 23:52:44.172679 1739 kubelet.go:314] "Adding apiserver pod source" May 9 23:52:44.172749 kubelet[1739]: I0509 23:52:44.172723 1739 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 9 23:52:44.172872 kubelet[1739]: E0509 23:52:44.172842 1739 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:52:44.172917 kubelet[1739]: E0509 23:52:44.172903 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:52:44.185534 kubelet[1739]: I0509 23:52:44.185505 1739 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 9 23:52:44.187569 kubelet[1739]: I0509 23:52:44.187540 1739 kubelet.go:837] "Not starting 
ClusterTrustBundle informer because we are in static kubelet mode" May 9 23:52:44.188582 kubelet[1739]: W0509 23:52:44.188561 1739 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 9 23:52:44.191856 kubelet[1739]: I0509 23:52:44.189453 1739 server.go:1269] "Started kubelet" May 9 23:52:44.191856 kubelet[1739]: I0509 23:52:44.190169 1739 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 9 23:52:44.191856 kubelet[1739]: I0509 23:52:44.190315 1739 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 9 23:52:44.191856 kubelet[1739]: I0509 23:52:44.190554 1739 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 9 23:52:44.191856 kubelet[1739]: I0509 23:52:44.191624 1739 server.go:460] "Adding debug handlers to kubelet server" May 9 23:52:44.192614 kubelet[1739]: I0509 23:52:44.192594 1739 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 9 23:52:44.193563 kubelet[1739]: W0509 23:52:44.193519 1739 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.76" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope May 9 23:52:44.193694 kubelet[1739]: I0509 23:52:44.193672 1739 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 9 23:52:44.194625 kubelet[1739]: E0509 23:52:44.194560 1739 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.76\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" May 9 23:52:44.194956 kubelet[1739]: W0509 23:52:44.194803 1739 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope May 9 23:52:44.194956 kubelet[1739]: E0509 23:52:44.194843 1739 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" May 9 23:52:44.196114 kubelet[1739]: I0509 23:52:44.196081 1739 volume_manager.go:289] "Starting Kubelet Volume Manager" May 9 23:52:44.196213 kubelet[1739]: I0509 23:52:44.196198 1739 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 9 23:52:44.196319 kubelet[1739]: I0509 23:52:44.196247 1739 reconciler.go:26] "Reconciler: start to sync state" May 9 23:52:44.196738 kubelet[1739]: E0509 23:52:44.196446 1739 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 9 23:52:44.196916 kubelet[1739]: E0509 23:52:44.196890 1739 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.76\" not found" May 9 23:52:44.199456 kubelet[1739]: I0509 23:52:44.199411 1739 factory.go:221] Registration of the containerd container factory successfully May 9 23:52:44.199456 kubelet[1739]: I0509 23:52:44.199438 1739 factory.go:221] Registration of the systemd container factory successfully May 9 23:52:44.199576 kubelet[1739]: I0509 23:52:44.199535 1739 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 9 23:52:44.202180 kubelet[1739]: E0509 23:52:44.202128 1739 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.76\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" May 9 23:52:44.208467 kubelet[1739]: W0509 23:52:44.207372 1739 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope May 9 23:52:44.208467 kubelet[1739]: E0509 23:52:44.207422 1739 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" May 9 23:52:44.208467 kubelet[1739]: E0509 23:52:44.202224 1739 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.76.183e00f680d051da default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.76,UID:10.0.0.76,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.76,},FirstTimestamp:2025-05-09 23:52:44.189422042 +0000 UTC m=+1.031861801,LastTimestamp:2025-05-09 23:52:44.189422042 +0000 UTC m=+1.031861801,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.76,}" May 9 23:52:44.209008 kubelet[1739]: E0509 23:52:44.208931 1739 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.76.183e00f6813b4b52 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.76,UID:10.0.0.76,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.76,},FirstTimestamp:2025-05-09 23:52:44.196432722 +0000 UTC m=+1.038872481,LastTimestamp:2025-05-09 23:52:44.196432722 +0000 UTC m=+1.038872481,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.76,}" May 9 23:52:44.211165 kubelet[1739]: I0509 23:52:44.211141 1739 cpu_manager.go:214] "Starting CPU manager" policy="none" May 9 23:52:44.211271 kubelet[1739]: I0509 23:52:44.211259 1739 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 9 23:52:44.211330 kubelet[1739]: I0509 23:52:44.211322 1739 state_mem.go:36] "Initialized new in-memory state store" May 9 23:52:44.276377 kubelet[1739]: I0509 23:52:44.276343 1739 policy_none.go:49] "None policy: Start" May 9 23:52:44.277286 kubelet[1739]: I0509 23:52:44.277263 1739 memory_manager.go:170] "Starting memorymanager" policy="None" May 9 23:52:44.277436 kubelet[1739]: I0509 23:52:44.277425 1739 state_mem.go:35] "Initializing new in-memory state store" May 9 23:52:44.290446 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 9 23:52:44.298207 kubelet[1739]: E0509 23:52:44.298170 1739 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.76\" not found" May 9 23:52:44.300702 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 9 23:52:44.305880 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 9 23:52:44.309159 kubelet[1739]: I0509 23:52:44.309105 1739 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 9 23:52:44.310288 kubelet[1739]: I0509 23:52:44.310261 1739 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 9 23:52:44.310288 kubelet[1739]: I0509 23:52:44.310290 1739 status_manager.go:217] "Starting to sync pod status with apiserver" May 9 23:52:44.310390 kubelet[1739]: I0509 23:52:44.310309 1739 kubelet.go:2321] "Starting kubelet main sync loop" May 9 23:52:44.310569 kubelet[1739]: E0509 23:52:44.310430 1739 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 9 23:52:44.317001 kubelet[1739]: I0509 23:52:44.316954 1739 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 9 23:52:44.317202 kubelet[1739]: I0509 23:52:44.317173 1739 eviction_manager.go:189] "Eviction manager: starting control loop" May 9 23:52:44.317230 kubelet[1739]: I0509 23:52:44.317191 1739 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 9 23:52:44.317849 kubelet[1739]: I0509 23:52:44.317814 1739 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 9 23:52:44.319314 kubelet[1739]: E0509 23:52:44.319288 1739 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.76\" not found" May 9 23:52:44.406588 kubelet[1739]: E0509 23:52:44.406489 1739 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.76\" not found" node="10.0.0.76" May 9 23:52:44.418569 kubelet[1739]: I0509 23:52:44.418528 1739 kubelet_node_status.go:72] "Attempting to register node" node="10.0.0.76" May 9 23:52:44.425284 kubelet[1739]: I0509 23:52:44.425236 1739 kubelet_node_status.go:75] "Successfully registered node" node="10.0.0.76" May 9 23:52:44.425284 kubelet[1739]: E0509 23:52:44.425284 1739 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"10.0.0.76\": node 
\"10.0.0.76\" not found" May 9 23:52:44.435484 kubelet[1739]: E0509 23:52:44.435434 1739 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.76\" not found" May 9 23:52:44.535816 kubelet[1739]: E0509 23:52:44.535782 1739 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.76\" not found" May 9 23:52:44.636382 kubelet[1739]: E0509 23:52:44.636335 1739 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.76\" not found" May 9 23:52:44.682581 sudo[1614]: pam_unix(sudo:session): session closed for user root May 9 23:52:44.683859 sshd[1613]: Connection closed by 10.0.0.1 port 34544 May 9 23:52:44.684254 sshd-session[1611]: pam_unix(sshd:session): session closed for user core May 9 23:52:44.687630 systemd[1]: sshd@6-10.0.0.76:22-10.0.0.1:34544.service: Deactivated successfully. May 9 23:52:44.689336 systemd[1]: session-7.scope: Deactivated successfully. May 9 23:52:44.690003 systemd-logind[1424]: Session 7 logged out. Waiting for processes to exit. May 9 23:52:44.690847 systemd-logind[1424]: Removed session 7. May 9 23:52:44.737093 kubelet[1739]: E0509 23:52:44.737042 1739 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.76\" not found" May 9 23:52:44.837695 kubelet[1739]: E0509 23:52:44.837648 1739 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.76\" not found" May 9 23:52:44.938447 kubelet[1739]: E0509 23:52:44.938337 1739 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.76\" not found" May 9 23:52:45.039064 kubelet[1739]: E0509 23:52:45.039016 1739 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.76\" not found" May 9 23:52:45.080634 kubelet[1739]: I0509 23:52:45.080541 1739 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" May 9 23:52:45.080810 kubelet[1739]: W0509 23:52:45.080771 1739 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 9 23:52:45.139805 kubelet[1739]: E0509 23:52:45.139760 1739 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.76\" not found" May 9 23:52:45.173986 kubelet[1739]: E0509 23:52:45.173955 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:52:45.240760 kubelet[1739]: E0509 23:52:45.240718 1739 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.76\" not found" May 9 23:52:45.341681 kubelet[1739]: E0509 23:52:45.341639 1739 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.76\" not found" May 9 23:52:45.442405 kubelet[1739]: E0509 23:52:45.442352 1739 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.76\" not found" May 9 23:52:45.543117 kubelet[1739]: E0509 23:52:45.543023 1739 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.76\" not found" May 9 23:52:45.643843 kubelet[1739]: E0509 23:52:45.643794 1739 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.76\" not found" May 9 23:52:45.745353 
kubelet[1739]: I0509 23:52:45.745318 1739 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" May 9 23:52:45.745826 containerd[1444]: time="2025-05-09T23:52:45.745710442Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 9 23:52:45.746168 kubelet[1739]: I0509 23:52:45.745914 1739 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" May 9 23:52:46.174573 kubelet[1739]: E0509 23:52:46.174527 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:52:46.174726 kubelet[1739]: I0509 23:52:46.174692 1739 apiserver.go:52] "Watching apiserver" May 9 23:52:46.185940 systemd[1]: Created slice kubepods-besteffort-pod76747474_05d2_4b45_866d_18344b832ab9.slice - libcontainer container kubepods-besteffort-pod76747474_05d2_4b45_866d_18344b832ab9.slice. May 9 23:52:46.196841 kubelet[1739]: I0509 23:52:46.196797 1739 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 9 23:52:46.206169 systemd[1]: Created slice kubepods-burstable-pod32cfeb9b_3503_4f52_8e79_20b0e13b6daa.slice - libcontainer container kubepods-burstable-pod32cfeb9b_3503_4f52_8e79_20b0e13b6daa.slice. May 9 23:52:46.207072 kubelet[1739]: I0509 23:52:46.206979 1739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-bpf-maps\") pod \"cilium-rvnmt\" (UID: \"32cfeb9b-3503-4f52-8e79-20b0e13b6daa\") " pod="kube-system/cilium-rvnmt" May 9 23:52:46.207072 kubelet[1739]: I0509 23:52:46.207017 1739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-xtables-lock\") pod \"cilium-rvnmt\" (UID: \"32cfeb9b-3503-4f52-8e79-20b0e13b6daa\") " pod="kube-system/cilium-rvnmt" May 9 23:52:46.207072 kubelet[1739]: I0509 23:52:46.207048 1739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-hubble-tls\") pod \"cilium-rvnmt\" (UID: \"32cfeb9b-3503-4f52-8e79-20b0e13b6daa\") " pod="kube-system/cilium-rvnmt" May 9 23:52:46.207072 kubelet[1739]: I0509 23:52:46.207065 1739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw6dx\" (UniqueName: \"kubernetes.io/projected/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-kube-api-access-kw6dx\") pod \"cilium-rvnmt\" (UID: \"32cfeb9b-3503-4f52-8e79-20b0e13b6daa\") " pod="kube-system/cilium-rvnmt" May 9 23:52:46.207226 kubelet[1739]: I0509 23:52:46.207086 1739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-cni-path\") pod \"cilium-rvnmt\" (UID: \"32cfeb9b-3503-4f52-8e79-20b0e13b6daa\") " pod="kube-system/cilium-rvnmt" May 9 23:52:46.207226 kubelet[1739]: I0509 23:52:46.207122 1739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-clustermesh-secrets\") pod \"cilium-rvnmt\" (UID: \"32cfeb9b-3503-4f52-8e79-20b0e13b6daa\") " 
pod="kube-system/cilium-rvnmt" May 9 23:52:46.207226 kubelet[1739]: I0509 23:52:46.207139 1739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-cilium-config-path\") pod \"cilium-rvnmt\" (UID: \"32cfeb9b-3503-4f52-8e79-20b0e13b6daa\") " pod="kube-system/cilium-rvnmt" May 9 23:52:46.207226 kubelet[1739]: I0509 23:52:46.207173 1739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-cilium-run\") pod \"cilium-rvnmt\" (UID: \"32cfeb9b-3503-4f52-8e79-20b0e13b6daa\") " pod="kube-system/cilium-rvnmt" May 9 23:52:46.207226 kubelet[1739]: I0509 23:52:46.207188 1739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-host-proc-sys-kernel\") pod \"cilium-rvnmt\" (UID: \"32cfeb9b-3503-4f52-8e79-20b0e13b6daa\") " pod="kube-system/cilium-rvnmt" May 9 23:52:46.207226 kubelet[1739]: I0509 23:52:46.207208 1739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/76747474-05d2-4b45-866d-18344b832ab9-kube-proxy\") pod \"kube-proxy-q5sw6\" (UID: \"76747474-05d2-4b45-866d-18344b832ab9\") " pod="kube-system/kube-proxy-q5sw6" May 9 23:52:46.207349 kubelet[1739]: I0509 23:52:46.207300 1739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/76747474-05d2-4b45-866d-18344b832ab9-xtables-lock\") pod \"kube-proxy-q5sw6\" (UID: \"76747474-05d2-4b45-866d-18344b832ab9\") " pod="kube-system/kube-proxy-q5sw6" May 9 23:52:46.207349 kubelet[1739]: I0509 23:52:46.207321 1739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjghq\" (UniqueName: \"kubernetes.io/projected/76747474-05d2-4b45-866d-18344b832ab9-kube-api-access-pjghq\") pod \"kube-proxy-q5sw6\" (UID: \"76747474-05d2-4b45-866d-18344b832ab9\") " pod="kube-system/kube-proxy-q5sw6" May 9 23:52:46.207349 kubelet[1739]: I0509 23:52:46.207337 1739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-hostproc\") pod \"cilium-rvnmt\" (UID: \"32cfeb9b-3503-4f52-8e79-20b0e13b6daa\") " pod="kube-system/cilium-rvnmt" May 9 23:52:46.207404 kubelet[1739]: I0509 23:52:46.207354 1739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-cilium-cgroup\") pod \"cilium-rvnmt\" (UID: \"32cfeb9b-3503-4f52-8e79-20b0e13b6daa\") " pod="kube-system/cilium-rvnmt" May 9 23:52:46.207404 kubelet[1739]: I0509 23:52:46.207389 1739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-etc-cni-netd\") pod \"cilium-rvnmt\" (UID: \"32cfeb9b-3503-4f52-8e79-20b0e13b6daa\") " pod="kube-system/cilium-rvnmt" May 9 23:52:46.207447 kubelet[1739]: I0509 23:52:46.207433 1739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-lib-modules\") pod \"cilium-rvnmt\" (UID: \"32cfeb9b-3503-4f52-8e79-20b0e13b6daa\") " pod="kube-system/cilium-rvnmt" May 9 23:52:46.207481 kubelet[1739]: I0509 23:52:46.207453 1739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-host-proc-sys-net\") pod \"cilium-rvnmt\" (UID: \"32cfeb9b-3503-4f52-8e79-20b0e13b6daa\") " pod="kube-system/cilium-rvnmt" May 9 23:52:46.207504 kubelet[1739]: I0509 23:52:46.207482 1739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/76747474-05d2-4b45-866d-18344b832ab9-lib-modules\") pod \"kube-proxy-q5sw6\" (UID: \"76747474-05d2-4b45-866d-18344b832ab9\") " pod="kube-system/kube-proxy-q5sw6" May 9 23:52:46.504691 kubelet[1739]: E0509 23:52:46.504244 1739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:52:46.505194 containerd[1444]: time="2025-05-09T23:52:46.505123122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q5sw6,Uid:76747474-05d2-4b45-866d-18344b832ab9,Namespace:kube-system,Attempt:0,}" May 9 23:52:46.519072 kubelet[1739]: E0509 23:52:46.519026 1739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:52:46.519592 containerd[1444]: time="2025-05-09T23:52:46.519548642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rvnmt,Uid:32cfeb9b-3503-4f52-8e79-20b0e13b6daa,Namespace:kube-system,Attempt:0,}" May 9 23:52:47.119646 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount269741345.mount: Deactivated successfully. 
May 9 23:52:47.127567 containerd[1444]: time="2025-05-09T23:52:47.126874602Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 23:52:47.128222 containerd[1444]: time="2025-05-09T23:52:47.128182162Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 23:52:47.130122 containerd[1444]: time="2025-05-09T23:52:47.130066282Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" May 9 23:52:47.131790 containerd[1444]: time="2025-05-09T23:52:47.131736122Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 9 23:52:47.132910 containerd[1444]: time="2025-05-09T23:52:47.132870202Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 23:52:47.137610 containerd[1444]: time="2025-05-09T23:52:47.137534002Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 23:52:47.138734 containerd[1444]: time="2025-05-09T23:52:47.138691842Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 619.05556ms" May 9 23:52:47.140443 containerd[1444]: time="2025-05-09T23:52:47.140173282Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 634.91592ms" May 9 23:52:47.175318 kubelet[1739]: E0509 23:52:47.175285 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:52:47.292016 containerd[1444]: time="2025-05-09T23:52:47.291704922Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:52:47.292016 containerd[1444]: time="2025-05-09T23:52:47.291805042Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:52:47.292016 containerd[1444]: time="2025-05-09T23:52:47.291818002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:52:47.292016 containerd[1444]: time="2025-05-09T23:52:47.291923602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:52:47.296760 containerd[1444]: time="2025-05-09T23:52:47.296392322Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:52:47.296760 containerd[1444]: time="2025-05-09T23:52:47.296448882Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:52:47.296760 containerd[1444]: time="2025-05-09T23:52:47.296472202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:52:47.296760 containerd[1444]: time="2025-05-09T23:52:47.296556602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:52:47.393110 systemd[1]: Started cri-containerd-457584492b89294632676c727ef16b1a11d245e9fb4b3aaa817672d9516bf634.scope - libcontainer container 457584492b89294632676c727ef16b1a11d245e9fb4b3aaa817672d9516bf634. May 9 23:52:47.395417 systemd[1]: Started cri-containerd-9406b3024e84624446dc6fd729b8ec82d475e8bd0fd3519dfaa6e5e9d5b3b8ec.scope - libcontainer container 9406b3024e84624446dc6fd729b8ec82d475e8bd0fd3519dfaa6e5e9d5b3b8ec. May 9 23:52:47.419622 containerd[1444]: time="2025-05-09T23:52:47.419558082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rvnmt,Uid:32cfeb9b-3503-4f52-8e79-20b0e13b6daa,Namespace:kube-system,Attempt:0,} returns sandbox id \"457584492b89294632676c727ef16b1a11d245e9fb4b3aaa817672d9516bf634\"" May 9 23:52:47.421169 kubelet[1739]: E0509 23:52:47.421140 1739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:52:47.423392 containerd[1444]: time="2025-05-09T23:52:47.423039722Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 9 23:52:47.429482 containerd[1444]: time="2025-05-09T23:52:47.429417362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q5sw6,Uid:76747474-05d2-4b45-866d-18344b832ab9,Namespace:kube-system,Attempt:0,} returns sandbox id \"9406b3024e84624446dc6fd729b8ec82d475e8bd0fd3519dfaa6e5e9d5b3b8ec\"" May 9 23:52:47.430508 kubelet[1739]: E0509 23:52:47.430480 1739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:52:48.175948 kubelet[1739]: E0509 23:52:48.175905 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:52:49.178786 kubelet[1739]: E0509 23:52:49.177329 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:52:50.177480 kubelet[1739]: E0509 23:52:50.177423 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:52:50.622568 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4117753091.mount: Deactivated successfully. 
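The "Nameserver limits exceeded" warnings above (they repeat for the rest of this log) mean the node's resolv.conf lists more nameservers than the resolver supports: glibc honours at most three, so the kubelet keeps only the first three entries, which is the applied line quoted in the message. A minimal sketch of that trimming; the resolv.conf content below is hypothetical apart from the three addresses taken from the warning:

MAX_NAMESERVERS = 3  # glibc MAXNS, the limit the kubelet warns about

# Hypothetical resolv.conf; only the first three addresses appear in the log.
RESOLV_CONF = """\
nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 8.8.4.4
"""

nameservers = [line.split()[1] for line in RESOLV_CONF.splitlines()
               if line.startswith("nameserver")]

print("applied nameserver line is:", " ".join(nameservers[:MAX_NAMESERVERS]))
print("omitted:", " ".join(nameservers[MAX_NAMESERVERS:]) or "none")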
May 9 23:52:51.177925 kubelet[1739]: E0509 23:52:51.177872 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:52:51.999201 containerd[1444]: time="2025-05-09T23:52:51.999145602Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:52:52.000337 containerd[1444]: time="2025-05-09T23:52:52.000094442Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" May 9 23:52:52.001612 containerd[1444]: time="2025-05-09T23:52:52.001281282Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:52:52.003655 containerd[1444]: time="2025-05-09T23:52:52.003617042Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 4.58053484s" May 9 23:52:52.003697 containerd[1444]: time="2025-05-09T23:52:52.003655602Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 9 23:52:52.004783 containerd[1444]: time="2025-05-09T23:52:52.004754842Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 9 23:52:52.006279 containerd[1444]: time="2025-05-09T23:52:52.006042562Z" level=info msg="CreateContainer within sandbox \"457584492b89294632676c727ef16b1a11d245e9fb4b3aaa817672d9516bf634\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 9 23:52:52.027773 containerd[1444]: time="2025-05-09T23:52:52.027591322Z" level=info msg="CreateContainer within sandbox \"457584492b89294632676c727ef16b1a11d245e9fb4b3aaa817672d9516bf634\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"937d63b9aa90e7c6bbca6c35e9ecbc8caad259485d2b4fa7a168455711f1e5ae\"" May 9 23:52:52.028400 containerd[1444]: time="2025-05-09T23:52:52.028372882Z" level=info msg="StartContainer for \"937d63b9aa90e7c6bbca6c35e9ecbc8caad259485d2b4fa7a168455711f1e5ae\"" May 9 23:52:52.060014 systemd[1]: Started cri-containerd-937d63b9aa90e7c6bbca6c35e9ecbc8caad259485d2b4fa7a168455711f1e5ae.scope - libcontainer container 937d63b9aa90e7c6bbca6c35e9ecbc8caad259485d2b4fa7a168455711f1e5ae. May 9 23:52:52.085606 containerd[1444]: time="2025-05-09T23:52:52.085472042Z" level=info msg="StartContainer for \"937d63b9aa90e7c6bbca6c35e9ecbc8caad259485d2b4fa7a168455711f1e5ae\" returns successfully" May 9 23:52:52.115421 systemd[1]: cri-containerd-937d63b9aa90e7c6bbca6c35e9ecbc8caad259485d2b4fa7a168455711f1e5ae.scope: Deactivated successfully. 
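The mount-cgroup container above (id 937d63b9...) is the first of Cilium's short-lived setup containers: its systemd scope is started at 23:52:52.060014, containerd reports StartContainer returned at 23:52:52.085606, and the scope is deactivated at 23:52:52.115421 once the process exits. A small sketch that turns those journal timestamps into durations; it parses only the time of day, so it assumes all three entries fall on the same day:

from datetime import datetime

FMT = "%H:%M:%S.%f"

# Journal timestamps for the mount-cgroup container above.
scope_started  = datetime.strptime("23:52:52.060014", FMT)
start_returned = datetime.strptime("23:52:52.085606", FMT)
scope_stopped  = datetime.strptime("23:52:52.115421", FMT)

print("scope started -> StartContainer returned:",
      (start_returned - scope_started).total_seconds(), "s")   # ~0.025592 s
print("scope started -> scope deactivated:",
      (scope_stopped - scope_started).total_seconds(), "s")    # ~0.055407 s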
May 9 23:52:52.178983 kubelet[1739]: E0509 23:52:52.178928 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:52:52.236507 containerd[1444]: time="2025-05-09T23:52:52.236451082Z" level=info msg="shim disconnected" id=937d63b9aa90e7c6bbca6c35e9ecbc8caad259485d2b4fa7a168455711f1e5ae namespace=k8s.io May 9 23:52:52.236507 containerd[1444]: time="2025-05-09T23:52:52.236505962Z" level=warning msg="cleaning up after shim disconnected" id=937d63b9aa90e7c6bbca6c35e9ecbc8caad259485d2b4fa7a168455711f1e5ae namespace=k8s.io May 9 23:52:52.236507 containerd[1444]: time="2025-05-09T23:52:52.236517282Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 23:52:52.333958 kubelet[1739]: E0509 23:52:52.333798 1739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:52:52.339139 containerd[1444]: time="2025-05-09T23:52:52.339045002Z" level=info msg="CreateContainer within sandbox \"457584492b89294632676c727ef16b1a11d245e9fb4b3aaa817672d9516bf634\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 9 23:52:52.348958 containerd[1444]: time="2025-05-09T23:52:52.348910042Z" level=info msg="CreateContainer within sandbox \"457584492b89294632676c727ef16b1a11d245e9fb4b3aaa817672d9516bf634\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"40db122b193860429be1c7767f4d453265dbb2e8d936dc637527796dda4b8ae8\"" May 9 23:52:52.349865 containerd[1444]: time="2025-05-09T23:52:52.349471962Z" level=info msg="StartContainer for \"40db122b193860429be1c7767f4d453265dbb2e8d936dc637527796dda4b8ae8\"" May 9 23:52:52.378045 systemd[1]: Started cri-containerd-40db122b193860429be1c7767f4d453265dbb2e8d936dc637527796dda4b8ae8.scope - libcontainer container 40db122b193860429be1c7767f4d453265dbb2e8d936dc637527796dda4b8ae8. May 9 23:52:52.399498 containerd[1444]: time="2025-05-09T23:52:52.399259682Z" level=info msg="StartContainer for \"40db122b193860429be1c7767f4d453265dbb2e8d936dc637527796dda4b8ae8\" returns successfully" May 9 23:52:52.417092 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 9 23:52:52.417303 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 9 23:52:52.417365 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 9 23:52:52.426225 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 9 23:52:52.426464 systemd[1]: cri-containerd-40db122b193860429be1c7767f4d453265dbb2e8d936dc637527796dda4b8ae8.scope: Deactivated successfully. May 9 23:52:52.437802 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 9 23:52:52.453391 containerd[1444]: time="2025-05-09T23:52:52.453324562Z" level=info msg="shim disconnected" id=40db122b193860429be1c7767f4d453265dbb2e8d936dc637527796dda4b8ae8 namespace=k8s.io May 9 23:52:52.453391 containerd[1444]: time="2025-05-09T23:52:52.453380682Z" level=warning msg="cleaning up after shim disconnected" id=40db122b193860429be1c7767f4d453265dbb2e8d936dc637527796dda4b8ae8 namespace=k8s.io May 9 23:52:52.453391 containerd[1444]: time="2025-05-09T23:52:52.453389482Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 23:52:53.020735 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-937d63b9aa90e7c6bbca6c35e9ecbc8caad259485d2b4fa7a168455711f1e5ae-rootfs.mount: Deactivated successfully. 
May 9 23:52:53.179690 kubelet[1739]: E0509 23:52:53.179650 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:52:53.232584 containerd[1444]: time="2025-05-09T23:52:53.231703242Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:52:53.232584 containerd[1444]: time="2025-05-09T23:52:53.232370762Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=26871919" May 9 23:52:53.233232 containerd[1444]: time="2025-05-09T23:52:53.233202002Z" level=info msg="ImageCreate event name:\"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:52:53.235527 containerd[1444]: time="2025-05-09T23:52:53.235491082Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:52:53.236492 containerd[1444]: time="2025-05-09T23:52:53.236460442Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"26870936\" in 1.23167284s" May 9 23:52:53.236598 containerd[1444]: time="2025-05-09T23:52:53.236580522Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\"" May 9 23:52:53.238551 containerd[1444]: time="2025-05-09T23:52:53.238514042Z" level=info msg="CreateContainer within sandbox \"9406b3024e84624446dc6fd729b8ec82d475e8bd0fd3519dfaa6e5e9d5b3b8ec\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 9 23:52:53.252992 containerd[1444]: time="2025-05-09T23:52:53.252945242Z" level=info msg="CreateContainer within sandbox \"9406b3024e84624446dc6fd729b8ec82d475e8bd0fd3519dfaa6e5e9d5b3b8ec\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b3a72f301c88a41ac04f436d8722485623449adf0947b866ee1899ba85105f11\"" May 9 23:52:53.254721 containerd[1444]: time="2025-05-09T23:52:53.253629402Z" level=info msg="StartContainer for \"b3a72f301c88a41ac04f436d8722485623449adf0947b866ee1899ba85105f11\"" May 9 23:52:53.285056 systemd[1]: Started cri-containerd-b3a72f301c88a41ac04f436d8722485623449adf0947b866ee1899ba85105f11.scope - libcontainer container b3a72f301c88a41ac04f436d8722485623449adf0947b866ee1899ba85105f11. 
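The kube-proxy:v1.31.8 pull above reports "in 1.23167284s", and that is consistent with the gap between the PullImage request logged at 2025-05-09T23:52:52.004754842Z (further up) and the Pulled record at 2025-05-09T23:52:53.236460442Z; the remaining ~30 microseconds are expected, since the log timestamps are emission times while the quoted duration is containerd's own measurement. A small sketch comparing the two timestamps (truncated to microseconds, which is all datetime carries):

from datetime import datetime

def parse_rfc3339_ns(ts: str) -> datetime:
    # containerd logs RFC 3339 with nanoseconds; keep only microsecond precision.
    return datetime.strptime(ts[:26], "%Y-%m-%dT%H:%M:%S.%f")

pull_requested = parse_rfc3339_ns("2025-05-09T23:52:52.004754842Z")
pull_finished  = parse_rfc3339_ns("2025-05-09T23:52:53.236460442Z")

gap = (pull_finished - pull_requested).total_seconds()
print(f"PullImage -> Pulled gap: {gap:.6f}s")  # ~1.231706s vs. 1.23167284s reported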
May 9 23:52:53.311859 containerd[1444]: time="2025-05-09T23:52:53.311801282Z" level=info msg="StartContainer for \"b3a72f301c88a41ac04f436d8722485623449adf0947b866ee1899ba85105f11\" returns successfully" May 9 23:52:53.339978 kubelet[1739]: E0509 23:52:53.339722 1739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:52:53.344173 kubelet[1739]: E0509 23:52:53.343975 1739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:52:53.345950 containerd[1444]: time="2025-05-09T23:52:53.345907122Z" level=info msg="CreateContainer within sandbox \"457584492b89294632676c727ef16b1a11d245e9fb4b3aaa817672d9516bf634\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 9 23:52:53.351282 kubelet[1739]: I0509 23:52:53.351225 1739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-q5sw6" podStartSLOduration=3.545977282 podStartE2EDuration="9.351206882s" podCreationTimestamp="2025-05-09 23:52:44 +0000 UTC" firstStartedPulling="2025-05-09 23:52:47.432054442 +0000 UTC m=+4.274494161" lastFinishedPulling="2025-05-09 23:52:53.237284002 +0000 UTC m=+10.079723761" observedRunningTime="2025-05-09 23:52:53.350525362 +0000 UTC m=+10.192965121" watchObservedRunningTime="2025-05-09 23:52:53.351206882 +0000 UTC m=+10.193646641" May 9 23:52:53.361180 containerd[1444]: time="2025-05-09T23:52:53.361125482Z" level=info msg="CreateContainer within sandbox \"457584492b89294632676c727ef16b1a11d245e9fb4b3aaa817672d9516bf634\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"64efc23cf04f8b1d0bb12ab80bf8176492433f4bdb6176e187b5e40cefbe1195\"" May 9 23:52:53.361876 containerd[1444]: time="2025-05-09T23:52:53.361792242Z" level=info msg="StartContainer for \"64efc23cf04f8b1d0bb12ab80bf8176492433f4bdb6176e187b5e40cefbe1195\"" May 9 23:52:53.396139 systemd[1]: Started cri-containerd-64efc23cf04f8b1d0bb12ab80bf8176492433f4bdb6176e187b5e40cefbe1195.scope - libcontainer container 64efc23cf04f8b1d0bb12ab80bf8176492433f4bdb6176e187b5e40cefbe1195. May 9 23:52:53.429404 containerd[1444]: time="2025-05-09T23:52:53.429320882Z" level=info msg="StartContainer for \"64efc23cf04f8b1d0bb12ab80bf8176492433f4bdb6176e187b5e40cefbe1195\" returns successfully" May 9 23:52:53.466879 systemd[1]: cri-containerd-64efc23cf04f8b1d0bb12ab80bf8176492433f4bdb6176e187b5e40cefbe1195.scope: Deactivated successfully. 
May 9 23:52:53.658407 containerd[1444]: time="2025-05-09T23:52:53.658277082Z" level=info msg="shim disconnected" id=64efc23cf04f8b1d0bb12ab80bf8176492433f4bdb6176e187b5e40cefbe1195 namespace=k8s.io May 9 23:52:53.658407 containerd[1444]: time="2025-05-09T23:52:53.658331482Z" level=warning msg="cleaning up after shim disconnected" id=64efc23cf04f8b1d0bb12ab80bf8176492433f4bdb6176e187b5e40cefbe1195 namespace=k8s.io May 9 23:52:53.658407 containerd[1444]: time="2025-05-09T23:52:53.658339682Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 23:52:54.180859 kubelet[1739]: E0509 23:52:54.180813 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:52:54.347886 kubelet[1739]: E0509 23:52:54.347854 1739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:52:54.348237 kubelet[1739]: E0509 23:52:54.347902 1739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:52:54.349932 containerd[1444]: time="2025-05-09T23:52:54.349894242Z" level=info msg="CreateContainer within sandbox \"457584492b89294632676c727ef16b1a11d245e9fb4b3aaa817672d9516bf634\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 9 23:52:54.368851 containerd[1444]: time="2025-05-09T23:52:54.366102522Z" level=info msg="CreateContainer within sandbox \"457584492b89294632676c727ef16b1a11d245e9fb4b3aaa817672d9516bf634\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"46bea6e91e54c10edbfc4fc4e6ddc64450a06b08f4de54b56e7333774a41115e\"" May 9 23:52:54.368851 containerd[1444]: time="2025-05-09T23:52:54.367298322Z" level=info msg="StartContainer for \"46bea6e91e54c10edbfc4fc4e6ddc64450a06b08f4de54b56e7333774a41115e\"" May 9 23:52:54.393004 systemd[1]: Started cri-containerd-46bea6e91e54c10edbfc4fc4e6ddc64450a06b08f4de54b56e7333774a41115e.scope - libcontainer container 46bea6e91e54c10edbfc4fc4e6ddc64450a06b08f4de54b56e7333774a41115e. May 9 23:52:54.412672 systemd[1]: cri-containerd-46bea6e91e54c10edbfc4fc4e6ddc64450a06b08f4de54b56e7333774a41115e.scope: Deactivated successfully. May 9 23:52:54.413986 containerd[1444]: time="2025-05-09T23:52:54.413696882Z" level=info msg="StartContainer for \"46bea6e91e54c10edbfc4fc4e6ddc64450a06b08f4de54b56e7333774a41115e\" returns successfully" May 9 23:52:54.432190 containerd[1444]: time="2025-05-09T23:52:54.432056202Z" level=info msg="shim disconnected" id=46bea6e91e54c10edbfc4fc4e6ddc64450a06b08f4de54b56e7333774a41115e namespace=k8s.io May 9 23:52:54.432666 containerd[1444]: time="2025-05-09T23:52:54.432496402Z" level=warning msg="cleaning up after shim disconnected" id=46bea6e91e54c10edbfc4fc4e6ddc64450a06b08f4de54b56e7333774a41115e namespace=k8s.io May 9 23:52:54.432666 containerd[1444]: time="2025-05-09T23:52:54.432520642Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 23:52:55.020857 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46bea6e91e54c10edbfc4fc4e6ddc64450a06b08f4de54b56e7333774a41115e-rootfs.mount: Deactivated successfully. 
May 9 23:52:55.181542 kubelet[1739]: E0509 23:52:55.181492 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:52:55.351795 kubelet[1739]: E0509 23:52:55.351698 1739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:52:55.353383 containerd[1444]: time="2025-05-09T23:52:55.353340562Z" level=info msg="CreateContainer within sandbox \"457584492b89294632676c727ef16b1a11d245e9fb4b3aaa817672d9516bf634\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 9 23:52:55.366168 containerd[1444]: time="2025-05-09T23:52:55.366117962Z" level=info msg="CreateContainer within sandbox \"457584492b89294632676c727ef16b1a11d245e9fb4b3aaa817672d9516bf634\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ccdaf6dfab271c7ac86059c6b54b5b0030ad276ab620058ee7b7858ec37ef921\"" May 9 23:52:55.366945 containerd[1444]: time="2025-05-09T23:52:55.366914362Z" level=info msg="StartContainer for \"ccdaf6dfab271c7ac86059c6b54b5b0030ad276ab620058ee7b7858ec37ef921\"" May 9 23:52:55.399021 systemd[1]: Started cri-containerd-ccdaf6dfab271c7ac86059c6b54b5b0030ad276ab620058ee7b7858ec37ef921.scope - libcontainer container ccdaf6dfab271c7ac86059c6b54b5b0030ad276ab620058ee7b7858ec37ef921. May 9 23:52:55.427468 containerd[1444]: time="2025-05-09T23:52:55.427414762Z" level=info msg="StartContainer for \"ccdaf6dfab271c7ac86059c6b54b5b0030ad276ab620058ee7b7858ec37ef921\" returns successfully" May 9 23:52:55.545871 kubelet[1739]: I0509 23:52:55.545823 1739 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 9 23:52:55.967978 kernel: Initializing XFRM netlink socket May 9 23:52:56.182542 kubelet[1739]: E0509 23:52:56.182477 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:52:56.355870 kubelet[1739]: E0509 23:52:56.355708 1739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:52:56.371828 kubelet[1739]: I0509 23:52:56.371744 1739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rvnmt" podStartSLOduration=7.789687762 podStartE2EDuration="12.371725602s" podCreationTimestamp="2025-05-09 23:52:44 +0000 UTC" firstStartedPulling="2025-05-09 23:52:47.422479122 +0000 UTC m=+4.264918841" lastFinishedPulling="2025-05-09 23:52:52.004516922 +0000 UTC m=+8.846956681" observedRunningTime="2025-05-09 23:52:56.371251802 +0000 UTC m=+13.213691601" watchObservedRunningTime="2025-05-09 23:52:56.371725602 +0000 UTC m=+13.214165361" May 9 23:52:57.183470 kubelet[1739]: E0509 23:52:57.183417 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:52:57.357271 kubelet[1739]: E0509 23:52:57.356966 1739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:52:57.611952 systemd-networkd[1387]: cilium_host: Link UP May 9 23:52:57.612626 systemd-networkd[1387]: cilium_net: Link UP May 9 23:52:57.612912 systemd-networkd[1387]: cilium_net: Gained carrier May 9 23:52:57.613059 systemd-networkd[1387]: cilium_host: Gained carrier May 9 
23:52:57.617029 systemd-networkd[1387]: cilium_host: Gained IPv6LL May 9 23:52:57.695689 systemd-networkd[1387]: cilium_vxlan: Link UP May 9 23:52:57.695695 systemd-networkd[1387]: cilium_vxlan: Gained carrier May 9 23:52:58.006868 kernel: NET: Registered PF_ALG protocol family May 9 23:52:58.183573 kubelet[1739]: E0509 23:52:58.183517 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:52:58.346536 systemd-networkd[1387]: cilium_net: Gained IPv6LL May 9 23:52:58.358705 kubelet[1739]: E0509 23:52:58.358663 1739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:52:58.615338 systemd-networkd[1387]: lxc_health: Link UP May 9 23:52:58.629560 systemd-networkd[1387]: lxc_health: Gained carrier May 9 23:52:59.183748 kubelet[1739]: E0509 23:52:59.183701 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:52:59.561016 systemd-networkd[1387]: cilium_vxlan: Gained IPv6LL May 9 23:53:00.184329 kubelet[1739]: E0509 23:53:00.184274 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:00.520888 kubelet[1739]: E0509 23:53:00.520856 1739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:53:00.584983 systemd-networkd[1387]: lxc_health: Gained IPv6LL May 9 23:53:00.906230 systemd[1]: Created slice kubepods-besteffort-pode590f3e6_22eb_446f_ab40_a9d7949ff3eb.slice - libcontainer container kubepods-besteffort-pode590f3e6_22eb_446f_ab40_a9d7949ff3eb.slice. May 9 23:53:01.003558 kubelet[1739]: I0509 23:53:01.003513 1739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmrrp\" (UniqueName: \"kubernetes.io/projected/e590f3e6-22eb-446f-ab40-a9d7949ff3eb-kube-api-access-zmrrp\") pod \"nginx-deployment-8587fbcb89-6rssm\" (UID: \"e590f3e6-22eb-446f-ab40-a9d7949ff3eb\") " pod="default/nginx-deployment-8587fbcb89-6rssm" May 9 23:53:01.185229 kubelet[1739]: E0509 23:53:01.185100 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:01.210393 containerd[1444]: time="2025-05-09T23:53:01.210330762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-6rssm,Uid:e590f3e6-22eb-446f-ab40-a9d7949ff3eb,Namespace:default,Attempt:0,}" May 9 23:53:01.298891 systemd-networkd[1387]: lxc21f2fbbce05d: Link UP May 9 23:53:01.308941 kernel: eth0: renamed from tmp09240 May 9 23:53:01.319622 systemd-networkd[1387]: lxc21f2fbbce05d: Gained carrier May 9 23:53:02.185742 kubelet[1739]: E0509 23:53:02.185693 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:02.953008 systemd-networkd[1387]: lxc21f2fbbce05d: Gained IPv6LL May 9 23:53:03.186001 kubelet[1739]: E0509 23:53:03.185947 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:03.317924 containerd[1444]: time="2025-05-09T23:53:03.317718522Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:53:03.317924 containerd[1444]: time="2025-05-09T23:53:03.317786002Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:53:03.317924 containerd[1444]: time="2025-05-09T23:53:03.317797322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:53:03.318259 containerd[1444]: time="2025-05-09T23:53:03.317901922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:53:03.342074 systemd[1]: Started cri-containerd-09240a622dad34ae94c37b6e7bcd2d73ecf4f9300119bae19fa913bb2858ce1b.scope - libcontainer container 09240a622dad34ae94c37b6e7bcd2d73ecf4f9300119bae19fa913bb2858ce1b. May 9 23:53:03.352702 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 9 23:53:03.374312 containerd[1444]: time="2025-05-09T23:53:03.374245602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-6rssm,Uid:e590f3e6-22eb-446f-ab40-a9d7949ff3eb,Namespace:default,Attempt:0,} returns sandbox id \"09240a622dad34ae94c37b6e7bcd2d73ecf4f9300119bae19fa913bb2858ce1b\"" May 9 23:53:03.377746 containerd[1444]: time="2025-05-09T23:53:03.377704202Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 9 23:53:04.005989 kubelet[1739]: I0509 23:53:04.005943 1739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 9 23:53:04.007179 kubelet[1739]: E0509 23:53:04.007140 1739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:53:04.173608 kubelet[1739]: E0509 23:53:04.173558 1739 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:04.186181 kubelet[1739]: E0509 23:53:04.186131 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:04.372523 kubelet[1739]: E0509 23:53:04.372414 1739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:53:05.187216 kubelet[1739]: E0509 23:53:05.187168 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:05.198735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount192752900.mount: Deactivated successfully. 
May 9 23:53:06.121121 containerd[1444]: time="2025-05-09T23:53:06.121028242Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:53:06.124793 containerd[1444]: time="2025-05-09T23:53:06.124683282Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=69948859" May 9 23:53:06.126157 containerd[1444]: time="2025-05-09T23:53:06.126003482Z" level=info msg="ImageCreate event name:\"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:53:06.129980 containerd[1444]: time="2025-05-09T23:53:06.129900602Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:53:06.131852 containerd[1444]: time="2025-05-09T23:53:06.131463082Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\", size \"69948737\" in 2.75371956s" May 9 23:53:06.131852 containerd[1444]: time="2025-05-09T23:53:06.131506722Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\"" May 9 23:53:06.133782 containerd[1444]: time="2025-05-09T23:53:06.133751722Z" level=info msg="CreateContainer within sandbox \"09240a622dad34ae94c37b6e7bcd2d73ecf4f9300119bae19fa913bb2858ce1b\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" May 9 23:53:06.147019 containerd[1444]: time="2025-05-09T23:53:06.146919762Z" level=info msg="CreateContainer within sandbox \"09240a622dad34ae94c37b6e7bcd2d73ecf4f9300119bae19fa913bb2858ce1b\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"168cd2019654489502ab4da9940eddc4920ec2ca1e4e3486ac154aa23cfc82ea\"" May 9 23:53:06.147555 containerd[1444]: time="2025-05-09T23:53:06.147463802Z" level=info msg="StartContainer for \"168cd2019654489502ab4da9940eddc4920ec2ca1e4e3486ac154aa23cfc82ea\"" May 9 23:53:06.175060 systemd[1]: Started cri-containerd-168cd2019654489502ab4da9940eddc4920ec2ca1e4e3486ac154aa23cfc82ea.scope - libcontainer container 168cd2019654489502ab4da9940eddc4920ec2ca1e4e3486ac154aa23cfc82ea. 
May 9 23:53:06.187927 kubelet[1739]: E0509 23:53:06.187877 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:06.205249 containerd[1444]: time="2025-05-09T23:53:06.205195602Z" level=info msg="StartContainer for \"168cd2019654489502ab4da9940eddc4920ec2ca1e4e3486ac154aa23cfc82ea\" returns successfully" May 9 23:53:06.389998 kubelet[1739]: I0509 23:53:06.389792 1739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-6rssm" podStartSLOduration=3.634550842 podStartE2EDuration="6.389771122s" podCreationTimestamp="2025-05-09 23:53:00 +0000 UTC" firstStartedPulling="2025-05-09 23:53:03.377422842 +0000 UTC m=+20.219862601" lastFinishedPulling="2025-05-09 23:53:06.132643122 +0000 UTC m=+22.975082881" observedRunningTime="2025-05-09 23:53:06.389516722 +0000 UTC m=+23.231956481" watchObservedRunningTime="2025-05-09 23:53:06.389771122 +0000 UTC m=+23.232210881" May 9 23:53:07.188517 kubelet[1739]: E0509 23:53:07.188455 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:08.189357 kubelet[1739]: E0509 23:53:08.189310 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:09.190279 kubelet[1739]: E0509 23:53:09.190220 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:10.190383 kubelet[1739]: E0509 23:53:10.190337 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:11.191274 kubelet[1739]: E0509 23:53:11.191229 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:12.194287 kubelet[1739]: E0509 23:53:12.194234 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:13.166824 systemd[1]: Created slice kubepods-besteffort-pod7620cc33_cdc7_4382_8399_ba56a5fb5e6f.slice - libcontainer container kubepods-besteffort-pod7620cc33_cdc7_4382_8399_ba56a5fb5e6f.slice. 
May 9 23:53:13.195388 kubelet[1739]: E0509 23:53:13.195342 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:13.275944 kubelet[1739]: I0509 23:53:13.275826 1739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqgwx\" (UniqueName: \"kubernetes.io/projected/7620cc33-cdc7-4382-8399-ba56a5fb5e6f-kube-api-access-wqgwx\") pod \"nfs-server-provisioner-0\" (UID: \"7620cc33-cdc7-4382-8399-ba56a5fb5e6f\") " pod="default/nfs-server-provisioner-0" May 9 23:53:13.275944 kubelet[1739]: I0509 23:53:13.275888 1739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/7620cc33-cdc7-4382-8399-ba56a5fb5e6f-data\") pod \"nfs-server-provisioner-0\" (UID: \"7620cc33-cdc7-4382-8399-ba56a5fb5e6f\") " pod="default/nfs-server-provisioner-0" May 9 23:53:13.470528 containerd[1444]: time="2025-05-09T23:53:13.470483029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:7620cc33-cdc7-4382-8399-ba56a5fb5e6f,Namespace:default,Attempt:0,}" May 9 23:53:13.495000 systemd-networkd[1387]: lxcb9a30a2d2208: Link UP May 9 23:53:13.502875 kernel: eth0: renamed from tmp60792 May 9 23:53:13.510112 systemd-networkd[1387]: lxcb9a30a2d2208: Gained carrier May 9 23:53:13.713636 containerd[1444]: time="2025-05-09T23:53:13.713524278Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:53:13.714121 containerd[1444]: time="2025-05-09T23:53:13.713948539Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:53:13.714121 containerd[1444]: time="2025-05-09T23:53:13.713968860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:53:13.714121 containerd[1444]: time="2025-05-09T23:53:13.714060745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:53:13.743045 systemd[1]: Started cri-containerd-607923110349a83f2d715668a8907965d9f92bed7a1740dfa1408706c11c536e.scope - libcontainer container 607923110349a83f2d715668a8907965d9f92bed7a1740dfa1408706c11c536e. May 9 23:53:13.753568 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 9 23:53:13.770923 containerd[1444]: time="2025-05-09T23:53:13.770861899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:7620cc33-cdc7-4382-8399-ba56a5fb5e6f,Namespace:default,Attempt:0,} returns sandbox id \"607923110349a83f2d715668a8907965d9f92bed7a1740dfa1408706c11c536e\"" May 9 23:53:13.772557 containerd[1444]: time="2025-05-09T23:53:13.772528783Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" May 9 23:53:14.196448 kubelet[1739]: E0509 23:53:14.196164 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:14.389060 systemd[1]: run-containerd-runc-k8s.io-607923110349a83f2d715668a8907965d9f92bed7a1740dfa1408706c11c536e-runc.25FpoV.mount: Deactivated successfully. 
May 9 23:53:15.196855 kubelet[1739]: E0509 23:53:15.196765 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:15.304951 systemd-networkd[1387]: lxcb9a30a2d2208: Gained IPv6LL May 9 23:53:15.318680 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2672091281.mount: Deactivated successfully. May 9 23:53:16.197766 kubelet[1739]: E0509 23:53:16.197665 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:16.692336 containerd[1444]: time="2025-05-09T23:53:16.692283315Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:53:16.692971 containerd[1444]: time="2025-05-09T23:53:16.692916941Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373625" May 9 23:53:16.693971 containerd[1444]: time="2025-05-09T23:53:16.693919503Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:53:16.698418 containerd[1444]: time="2025-05-09T23:53:16.697890226Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:53:16.700420 containerd[1444]: time="2025-05-09T23:53:16.700373288Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 2.927690978s" May 9 23:53:16.700420 containerd[1444]: time="2025-05-09T23:53:16.700413930Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" May 9 23:53:16.702373 containerd[1444]: time="2025-05-09T23:53:16.702337609Z" level=info msg="CreateContainer within sandbox \"607923110349a83f2d715668a8907965d9f92bed7a1740dfa1408706c11c536e\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" May 9 23:53:16.712873 containerd[1444]: time="2025-05-09T23:53:16.712740517Z" level=info msg="CreateContainer within sandbox \"607923110349a83f2d715668a8907965d9f92bed7a1740dfa1408706c11c536e\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"3b7f9c598f2a5714141665c3921c591e2eb9d78f838f7e8178179f30541780c3\"" May 9 23:53:16.714483 containerd[1444]: time="2025-05-09T23:53:16.713339381Z" level=info msg="StartContainer for \"3b7f9c598f2a5714141665c3921c591e2eb9d78f838f7e8178179f30541780c3\"" May 9 23:53:16.811047 systemd[1]: Started cri-containerd-3b7f9c598f2a5714141665c3921c591e2eb9d78f838f7e8178179f30541780c3.scope - libcontainer container 3b7f9c598f2a5714141665c3921c591e2eb9d78f838f7e8178179f30541780c3. 
May 9 23:53:16.834756 containerd[1444]: time="2025-05-09T23:53:16.834714732Z" level=info msg="StartContainer for \"3b7f9c598f2a5714141665c3921c591e2eb9d78f838f7e8178179f30541780c3\" returns successfully" May 9 23:53:17.198194 kubelet[1739]: E0509 23:53:17.198140 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:17.412003 kubelet[1739]: I0509 23:53:17.411940 1739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.4829967370000001 podStartE2EDuration="4.41191457s" podCreationTimestamp="2025-05-09 23:53:13 +0000 UTC" firstStartedPulling="2025-05-09 23:53:13.772110722 +0000 UTC m=+30.614550441" lastFinishedPulling="2025-05-09 23:53:16.701028515 +0000 UTC m=+33.543468274" observedRunningTime="2025-05-09 23:53:17.411433672 +0000 UTC m=+34.253873431" watchObservedRunningTime="2025-05-09 23:53:17.41191457 +0000 UTC m=+34.254354329" May 9 23:53:18.198523 kubelet[1739]: E0509 23:53:18.198475 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:19.199601 kubelet[1739]: E0509 23:53:19.199555 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:19.597519 update_engine[1430]: I20250509 23:53:19.597434 1430 update_attempter.cc:509] Updating boot flags... May 9 23:53:19.634871 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3134) May 9 23:53:19.664924 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3132) May 9 23:53:20.200208 kubelet[1739]: E0509 23:53:20.200156 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:21.200478 kubelet[1739]: E0509 23:53:21.200425 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:22.200811 kubelet[1739]: E0509 23:53:22.200736 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:23.201437 kubelet[1739]: E0509 23:53:23.201381 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:24.173379 kubelet[1739]: E0509 23:53:24.173320 1739 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:24.201975 kubelet[1739]: E0509 23:53:24.201931 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:25.202615 kubelet[1739]: E0509 23:53:25.202560 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:26.202717 kubelet[1739]: E0509 23:53:26.202672 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:27.203282 kubelet[1739]: E0509 23:53:27.203212 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:27.290954 systemd[1]: Created slice kubepods-besteffort-poda387ce0c_9023_46d9_925f_3605c0f35ed9.slice - libcontainer container kubepods-besteffort-poda387ce0c_9023_46d9_925f_3605c0f35ed9.slice. 
May 9 23:53:27.367228 kubelet[1739]: I0509 23:53:27.367187 1739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4afb3c3c-b493-4e83-a384-749d0c43cdf8\" (UniqueName: \"kubernetes.io/nfs/a387ce0c-9023-46d9-925f-3605c0f35ed9-pvc-4afb3c3c-b493-4e83-a384-749d0c43cdf8\") pod \"test-pod-1\" (UID: \"a387ce0c-9023-46d9-925f-3605c0f35ed9\") " pod="default/test-pod-1" May 9 23:53:27.367228 kubelet[1739]: I0509 23:53:27.367234 1739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l47xr\" (UniqueName: \"kubernetes.io/projected/a387ce0c-9023-46d9-925f-3605c0f35ed9-kube-api-access-l47xr\") pod \"test-pod-1\" (UID: \"a387ce0c-9023-46d9-925f-3605c0f35ed9\") " pod="default/test-pod-1" May 9 23:53:27.497033 kernel: FS-Cache: Loaded May 9 23:53:27.525909 kernel: RPC: Registered named UNIX socket transport module. May 9 23:53:27.526041 kernel: RPC: Registered udp transport module. May 9 23:53:27.526060 kernel: RPC: Registered tcp transport module. May 9 23:53:27.527294 kernel: RPC: Registered tcp-with-tls transport module. May 9 23:53:27.527355 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. May 9 23:53:27.707214 kernel: NFS: Registering the id_resolver key type May 9 23:53:27.707329 kernel: Key type id_resolver registered May 9 23:53:27.707346 kernel: Key type id_legacy registered May 9 23:53:27.737697 nfsidmap[3161]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' May 9 23:53:27.741742 nfsidmap[3164]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' May 9 23:53:27.895974 containerd[1444]: time="2025-05-09T23:53:27.895814196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:a387ce0c-9023-46d9-925f-3605c0f35ed9,Namespace:default,Attempt:0,}" May 9 23:53:27.926466 systemd-networkd[1387]: lxc02f02ea8cd11: Link UP May 9 23:53:27.946937 kernel: eth0: renamed from tmp49e52 May 9 23:53:27.957528 systemd-networkd[1387]: lxc02f02ea8cd11: Gained carrier May 9 23:53:28.164767 containerd[1444]: time="2025-05-09T23:53:28.164561263Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:53:28.164767 containerd[1444]: time="2025-05-09T23:53:28.164641304Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:53:28.164767 containerd[1444]: time="2025-05-09T23:53:28.164652904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:53:28.165020 containerd[1444]: time="2025-05-09T23:53:28.164763386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:53:28.192076 systemd[1]: Started cri-containerd-49e52929b7d82a7a7afeb0f5b99eb29b3d7b980a08d49b22cf23e0f3ee173db6.scope - libcontainer container 49e52929b7d82a7a7afeb0f5b99eb29b3d7b980a08d49b22cf23e0f3ee173db6. 
May 9 23:53:28.205501 kubelet[1739]: E0509 23:53:28.205460 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:28.205939 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 9 23:53:28.223926 containerd[1444]: time="2025-05-09T23:53:28.223885187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:a387ce0c-9023-46d9-925f-3605c0f35ed9,Namespace:default,Attempt:0,} returns sandbox id \"49e52929b7d82a7a7afeb0f5b99eb29b3d7b980a08d49b22cf23e0f3ee173db6\"" May 9 23:53:28.239067 containerd[1444]: time="2025-05-09T23:53:28.238894152Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 9 23:53:28.486676 containerd[1444]: time="2025-05-09T23:53:28.486629607Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:53:28.487417 containerd[1444]: time="2025-05-09T23:53:28.487307540Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" May 9 23:53:28.490920 containerd[1444]: time="2025-05-09T23:53:28.490865888Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\", size \"69948737\" in 251.928015ms" May 9 23:53:28.490920 containerd[1444]: time="2025-05-09T23:53:28.490912728Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\"" May 9 23:53:28.493349 containerd[1444]: time="2025-05-09T23:53:28.493225452Z" level=info msg="CreateContainer within sandbox \"49e52929b7d82a7a7afeb0f5b99eb29b3d7b980a08d49b22cf23e0f3ee173db6\" for container &ContainerMetadata{Name:test,Attempt:0,}" May 9 23:53:28.521114 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1025468805.mount: Deactivated successfully. May 9 23:53:28.522348 containerd[1444]: time="2025-05-09T23:53:28.522246842Z" level=info msg="CreateContainer within sandbox \"49e52929b7d82a7a7afeb0f5b99eb29b3d7b980a08d49b22cf23e0f3ee173db6\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"45c29714264cc3912514b619da49bc22678b53fe0781345d9d497be6b96867b3\"" May 9 23:53:28.522912 containerd[1444]: time="2025-05-09T23:53:28.522868334Z" level=info msg="StartContainer for \"45c29714264cc3912514b619da49bc22678b53fe0781345d9d497be6b96867b3\"" May 9 23:53:28.553084 systemd[1]: Started cri-containerd-45c29714264cc3912514b619da49bc22678b53fe0781345d9d497be6b96867b3.scope - libcontainer container 45c29714264cc3912514b619da49bc22678b53fe0781345d9d497be6b96867b3. 
May 9 23:53:28.583675 containerd[1444]: time="2025-05-09T23:53:28.583611806Z" level=info msg="StartContainer for \"45c29714264cc3912514b619da49bc22678b53fe0781345d9d497be6b96867b3\" returns successfully" May 9 23:53:29.206269 kubelet[1739]: E0509 23:53:29.206213 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:29.257064 systemd-networkd[1387]: lxc02f02ea8cd11: Gained IPv6LL May 9 23:53:29.439445 kubelet[1739]: I0509 23:53:29.439316 1739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=16.185808515 podStartE2EDuration="16.43929928s" podCreationTimestamp="2025-05-09 23:53:13 +0000 UTC" firstStartedPulling="2025-05-09 23:53:28.238203658 +0000 UTC m=+45.080643417" lastFinishedPulling="2025-05-09 23:53:28.491694463 +0000 UTC m=+45.334134182" observedRunningTime="2025-05-09 23:53:29.439094956 +0000 UTC m=+46.281534715" watchObservedRunningTime="2025-05-09 23:53:29.43929928 +0000 UTC m=+46.281739039" May 9 23:53:30.207020 kubelet[1739]: E0509 23:53:30.206976 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:31.207541 kubelet[1739]: E0509 23:53:31.207488 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:32.207896 kubelet[1739]: E0509 23:53:32.207847 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:33.209175 kubelet[1739]: E0509 23:53:33.209125 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:34.209478 kubelet[1739]: E0509 23:53:34.209403 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:35.210106 kubelet[1739]: E0509 23:53:35.210054 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:36.166710 containerd[1444]: time="2025-05-09T23:53:36.166660459Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 9 23:53:36.176441 containerd[1444]: time="2025-05-09T23:53:36.176403809Z" level=info msg="StopContainer for \"ccdaf6dfab271c7ac86059c6b54b5b0030ad276ab620058ee7b7858ec37ef921\" with timeout 2 (s)" May 9 23:53:36.176710 containerd[1444]: time="2025-05-09T23:53:36.176684293Z" level=info msg="Stop container \"ccdaf6dfab271c7ac86059c6b54b5b0030ad276ab620058ee7b7858ec37ef921\" with signal terminated" May 9 23:53:36.181928 systemd-networkd[1387]: lxc_health: Link DOWN May 9 23:53:36.181935 systemd-networkd[1387]: lxc_health: Lost carrier May 9 23:53:36.209349 systemd[1]: cri-containerd-ccdaf6dfab271c7ac86059c6b54b5b0030ad276ab620058ee7b7858ec37ef921.scope: Deactivated successfully. May 9 23:53:36.209638 systemd[1]: cri-containerd-ccdaf6dfab271c7ac86059c6b54b5b0030ad276ab620058ee7b7858ec37ef921.scope: Consumed 6.856s CPU time. 
May 9 23:53:36.211270 kubelet[1739]: E0509 23:53:36.211228 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:36.229261 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ccdaf6dfab271c7ac86059c6b54b5b0030ad276ab620058ee7b7858ec37ef921-rootfs.mount: Deactivated successfully. May 9 23:53:36.236875 containerd[1444]: time="2025-05-09T23:53:36.236803973Z" level=info msg="shim disconnected" id=ccdaf6dfab271c7ac86059c6b54b5b0030ad276ab620058ee7b7858ec37ef921 namespace=k8s.io May 9 23:53:36.237313 containerd[1444]: time="2025-05-09T23:53:36.237114696Z" level=warning msg="cleaning up after shim disconnected" id=ccdaf6dfab271c7ac86059c6b54b5b0030ad276ab620058ee7b7858ec37ef921 namespace=k8s.io May 9 23:53:36.237313 containerd[1444]: time="2025-05-09T23:53:36.237135296Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 23:53:36.248525 containerd[1444]: time="2025-05-09T23:53:36.248465225Z" level=info msg="StopContainer for \"ccdaf6dfab271c7ac86059c6b54b5b0030ad276ab620058ee7b7858ec37ef921\" returns successfully" May 9 23:53:36.249084 containerd[1444]: time="2025-05-09T23:53:36.249040471Z" level=info msg="StopPodSandbox for \"457584492b89294632676c727ef16b1a11d245e9fb4b3aaa817672d9516bf634\"" May 9 23:53:36.253393 containerd[1444]: time="2025-05-09T23:53:36.253349400Z" level=info msg="Container to stop \"937d63b9aa90e7c6bbca6c35e9ecbc8caad259485d2b4fa7a168455711f1e5ae\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 9 23:53:36.253393 containerd[1444]: time="2025-05-09T23:53:36.253385720Z" level=info msg="Container to stop \"64efc23cf04f8b1d0bb12ab80bf8176492433f4bdb6176e187b5e40cefbe1195\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 9 23:53:36.253520 containerd[1444]: time="2025-05-09T23:53:36.253396800Z" level=info msg="Container to stop \"ccdaf6dfab271c7ac86059c6b54b5b0030ad276ab620058ee7b7858ec37ef921\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 9 23:53:36.253520 containerd[1444]: time="2025-05-09T23:53:36.253409800Z" level=info msg="Container to stop \"40db122b193860429be1c7767f4d453265dbb2e8d936dc637527796dda4b8ae8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 9 23:53:36.253520 containerd[1444]: time="2025-05-09T23:53:36.253419201Z" level=info msg="Container to stop \"46bea6e91e54c10edbfc4fc4e6ddc64450a06b08f4de54b56e7333774a41115e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 9 23:53:36.255998 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-457584492b89294632676c727ef16b1a11d245e9fb4b3aaa817672d9516bf634-shm.mount: Deactivated successfully. May 9 23:53:36.259102 systemd[1]: cri-containerd-457584492b89294632676c727ef16b1a11d245e9fb4b3aaa817672d9516bf634.scope: Deactivated successfully. May 9 23:53:36.274024 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-457584492b89294632676c727ef16b1a11d245e9fb4b3aaa817672d9516bf634-rootfs.mount: Deactivated successfully. 
May 9 23:53:36.277174 containerd[1444]: time="2025-05-09T23:53:36.276920466Z" level=info msg="shim disconnected" id=457584492b89294632676c727ef16b1a11d245e9fb4b3aaa817672d9516bf634 namespace=k8s.io May 9 23:53:36.277174 containerd[1444]: time="2025-05-09T23:53:36.276977427Z" level=warning msg="cleaning up after shim disconnected" id=457584492b89294632676c727ef16b1a11d245e9fb4b3aaa817672d9516bf634 namespace=k8s.io May 9 23:53:36.277174 containerd[1444]: time="2025-05-09T23:53:36.276985147Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 23:53:36.288373 containerd[1444]: time="2025-05-09T23:53:36.288221874Z" level=info msg="TearDown network for sandbox \"457584492b89294632676c727ef16b1a11d245e9fb4b3aaa817672d9516bf634\" successfully" May 9 23:53:36.288373 containerd[1444]: time="2025-05-09T23:53:36.288254515Z" level=info msg="StopPodSandbox for \"457584492b89294632676c727ef16b1a11d245e9fb4b3aaa817672d9516bf634\" returns successfully" May 9 23:53:36.326135 kubelet[1739]: I0509 23:53:36.325521 1739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-etc-cni-netd\") pod \"32cfeb9b-3503-4f52-8e79-20b0e13b6daa\" (UID: \"32cfeb9b-3503-4f52-8e79-20b0e13b6daa\") " May 9 23:53:36.326135 kubelet[1739]: I0509 23:53:36.325560 1739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-bpf-maps\") pod \"32cfeb9b-3503-4f52-8e79-20b0e13b6daa\" (UID: \"32cfeb9b-3503-4f52-8e79-20b0e13b6daa\") " May 9 23:53:36.326135 kubelet[1739]: I0509 23:53:36.325590 1739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-hubble-tls\") pod \"32cfeb9b-3503-4f52-8e79-20b0e13b6daa\" (UID: \"32cfeb9b-3503-4f52-8e79-20b0e13b6daa\") " May 9 23:53:36.326135 kubelet[1739]: I0509 23:53:36.325606 1739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-xtables-lock\") pod \"32cfeb9b-3503-4f52-8e79-20b0e13b6daa\" (UID: \"32cfeb9b-3503-4f52-8e79-20b0e13b6daa\") " May 9 23:53:36.326135 kubelet[1739]: I0509 23:53:36.325620 1739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-cni-path\") pod \"32cfeb9b-3503-4f52-8e79-20b0e13b6daa\" (UID: \"32cfeb9b-3503-4f52-8e79-20b0e13b6daa\") " May 9 23:53:36.326135 kubelet[1739]: I0509 23:53:36.325619 1739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "32cfeb9b-3503-4f52-8e79-20b0e13b6daa" (UID: "32cfeb9b-3503-4f52-8e79-20b0e13b6daa"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 9 23:53:36.326392 kubelet[1739]: I0509 23:53:36.325643 1739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kw6dx\" (UniqueName: \"kubernetes.io/projected/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-kube-api-access-kw6dx\") pod \"32cfeb9b-3503-4f52-8e79-20b0e13b6daa\" (UID: \"32cfeb9b-3503-4f52-8e79-20b0e13b6daa\") " May 9 23:53:36.326392 kubelet[1739]: I0509 23:53:36.325662 1739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-host-proc-sys-kernel\") pod \"32cfeb9b-3503-4f52-8e79-20b0e13b6daa\" (UID: \"32cfeb9b-3503-4f52-8e79-20b0e13b6daa\") " May 9 23:53:36.326392 kubelet[1739]: I0509 23:53:36.325667 1739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "32cfeb9b-3503-4f52-8e79-20b0e13b6daa" (UID: "32cfeb9b-3503-4f52-8e79-20b0e13b6daa"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 9 23:53:36.326392 kubelet[1739]: I0509 23:53:36.325677 1739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-host-proc-sys-net\") pod \"32cfeb9b-3503-4f52-8e79-20b0e13b6daa\" (UID: \"32cfeb9b-3503-4f52-8e79-20b0e13b6daa\") " May 9 23:53:36.326392 kubelet[1739]: I0509 23:53:36.325685 1739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "32cfeb9b-3503-4f52-8e79-20b0e13b6daa" (UID: "32cfeb9b-3503-4f52-8e79-20b0e13b6daa"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 9 23:53:36.326500 kubelet[1739]: I0509 23:53:36.325696 1739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-clustermesh-secrets\") pod \"32cfeb9b-3503-4f52-8e79-20b0e13b6daa\" (UID: \"32cfeb9b-3503-4f52-8e79-20b0e13b6daa\") " May 9 23:53:36.326500 kubelet[1739]: I0509 23:53:36.325712 1739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-hostproc\") pod \"32cfeb9b-3503-4f52-8e79-20b0e13b6daa\" (UID: \"32cfeb9b-3503-4f52-8e79-20b0e13b6daa\") " May 9 23:53:36.326500 kubelet[1739]: I0509 23:53:36.325729 1739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-cilium-config-path\") pod \"32cfeb9b-3503-4f52-8e79-20b0e13b6daa\" (UID: \"32cfeb9b-3503-4f52-8e79-20b0e13b6daa\") " May 9 23:53:36.326500 kubelet[1739]: I0509 23:53:36.325743 1739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-cilium-run\") pod \"32cfeb9b-3503-4f52-8e79-20b0e13b6daa\" (UID: \"32cfeb9b-3503-4f52-8e79-20b0e13b6daa\") " May 9 23:53:36.326500 kubelet[1739]: I0509 23:53:36.325756 1739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-cilium-cgroup\") pod \"32cfeb9b-3503-4f52-8e79-20b0e13b6daa\" (UID: \"32cfeb9b-3503-4f52-8e79-20b0e13b6daa\") " May 9 23:53:36.326500 kubelet[1739]: I0509 23:53:36.325771 1739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-lib-modules\") pod \"32cfeb9b-3503-4f52-8e79-20b0e13b6daa\" (UID: \"32cfeb9b-3503-4f52-8e79-20b0e13b6daa\") " May 9 23:53:36.326659 kubelet[1739]: I0509 23:53:36.325797 1739 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-xtables-lock\") on node \"10.0.0.76\" DevicePath \"\"" May 9 23:53:36.326659 kubelet[1739]: I0509 23:53:36.325806 1739 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-etc-cni-netd\") on node \"10.0.0.76\" DevicePath \"\"" May 9 23:53:36.326659 kubelet[1739]: I0509 23:53:36.325815 1739 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-bpf-maps\") on node \"10.0.0.76\" DevicePath \"\"" May 9 23:53:36.326659 kubelet[1739]: I0509 23:53:36.325864 1739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "32cfeb9b-3503-4f52-8e79-20b0e13b6daa" (UID: "32cfeb9b-3503-4f52-8e79-20b0e13b6daa"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 9 23:53:36.326659 kubelet[1739]: I0509 23:53:36.325887 1739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-cni-path" (OuterVolumeSpecName: "cni-path") pod "32cfeb9b-3503-4f52-8e79-20b0e13b6daa" (UID: "32cfeb9b-3503-4f52-8e79-20b0e13b6daa"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 9 23:53:36.326659 kubelet[1739]: I0509 23:53:36.326037 1739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-hostproc" (OuterVolumeSpecName: "hostproc") pod "32cfeb9b-3503-4f52-8e79-20b0e13b6daa" (UID: "32cfeb9b-3503-4f52-8e79-20b0e13b6daa"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 9 23:53:36.326775 kubelet[1739]: I0509 23:53:36.326065 1739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "32cfeb9b-3503-4f52-8e79-20b0e13b6daa" (UID: "32cfeb9b-3503-4f52-8e79-20b0e13b6daa"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 9 23:53:36.326775 kubelet[1739]: I0509 23:53:36.326086 1739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "32cfeb9b-3503-4f52-8e79-20b0e13b6daa" (UID: "32cfeb9b-3503-4f52-8e79-20b0e13b6daa"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 9 23:53:36.326906 kubelet[1739]: I0509 23:53:36.326882 1739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "32cfeb9b-3503-4f52-8e79-20b0e13b6daa" (UID: "32cfeb9b-3503-4f52-8e79-20b0e13b6daa"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 9 23:53:36.327864 kubelet[1739]: I0509 23:53:36.327819 1739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "32cfeb9b-3503-4f52-8e79-20b0e13b6daa" (UID: "32cfeb9b-3503-4f52-8e79-20b0e13b6daa"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 9 23:53:36.329051 systemd[1]: var-lib-kubelet-pods-32cfeb9b\x2d3503\x2d4f52\x2d8e79\x2d20b0e13b6daa-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkw6dx.mount: Deactivated successfully. May 9 23:53:36.329145 systemd[1]: var-lib-kubelet-pods-32cfeb9b\x2d3503\x2d4f52\x2d8e79\x2d20b0e13b6daa-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 9 23:53:36.329208 systemd[1]: var-lib-kubelet-pods-32cfeb9b\x2d3503\x2d4f52\x2d8e79\x2d20b0e13b6daa-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
May 9 23:53:36.329488 kubelet[1739]: I0509 23:53:36.329368 1739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "32cfeb9b-3503-4f52-8e79-20b0e13b6daa" (UID: "32cfeb9b-3503-4f52-8e79-20b0e13b6daa"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 9 23:53:36.330482 kubelet[1739]: I0509 23:53:36.330456 1739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "32cfeb9b-3503-4f52-8e79-20b0e13b6daa" (UID: "32cfeb9b-3503-4f52-8e79-20b0e13b6daa"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 9 23:53:36.330782 kubelet[1739]: I0509 23:53:36.330632 1739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "32cfeb9b-3503-4f52-8e79-20b0e13b6daa" (UID: "32cfeb9b-3503-4f52-8e79-20b0e13b6daa"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 9 23:53:36.330782 kubelet[1739]: I0509 23:53:36.330749 1739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-kube-api-access-kw6dx" (OuterVolumeSpecName: "kube-api-access-kw6dx") pod "32cfeb9b-3503-4f52-8e79-20b0e13b6daa" (UID: "32cfeb9b-3503-4f52-8e79-20b0e13b6daa"). InnerVolumeSpecName "kube-api-access-kw6dx". PluginName "kubernetes.io/projected", VolumeGidValue "" May 9 23:53:36.426225 kubelet[1739]: I0509 23:53:36.426108 1739 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-cilium-config-path\") on node \"10.0.0.76\" DevicePath \"\"" May 9 23:53:36.426225 kubelet[1739]: I0509 23:53:36.426151 1739 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-cilium-run\") on node \"10.0.0.76\" DevicePath \"\"" May 9 23:53:36.426225 kubelet[1739]: I0509 23:53:36.426164 1739 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-cilium-cgroup\") on node \"10.0.0.76\" DevicePath \"\"" May 9 23:53:36.426225 kubelet[1739]: I0509 23:53:36.426172 1739 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-lib-modules\") on node \"10.0.0.76\" DevicePath \"\"" May 9 23:53:36.426225 kubelet[1739]: I0509 23:53:36.426180 1739 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-kw6dx\" (UniqueName: \"kubernetes.io/projected/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-kube-api-access-kw6dx\") on node \"10.0.0.76\" DevicePath \"\"" May 9 23:53:36.426225 kubelet[1739]: I0509 23:53:36.426189 1739 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-host-proc-sys-kernel\") on node \"10.0.0.76\" DevicePath \"\"" May 9 23:53:36.426225 kubelet[1739]: I0509 23:53:36.426196 1739 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" 
(UniqueName: \"kubernetes.io/projected/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-hubble-tls\") on node \"10.0.0.76\" DevicePath \"\"" May 9 23:53:36.426225 kubelet[1739]: I0509 23:53:36.426210 1739 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-cni-path\") on node \"10.0.0.76\" DevicePath \"\"" May 9 23:53:36.426529 kubelet[1739]: I0509 23:53:36.426219 1739 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-host-proc-sys-net\") on node \"10.0.0.76\" DevicePath \"\"" May 9 23:53:36.426529 kubelet[1739]: I0509 23:53:36.426229 1739 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-clustermesh-secrets\") on node \"10.0.0.76\" DevicePath \"\"" May 9 23:53:36.426529 kubelet[1739]: I0509 23:53:36.426237 1739 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/32cfeb9b-3503-4f52-8e79-20b0e13b6daa-hostproc\") on node \"10.0.0.76\" DevicePath \"\"" May 9 23:53:36.449341 systemd[1]: Removed slice kubepods-burstable-pod32cfeb9b_3503_4f52_8e79_20b0e13b6daa.slice - libcontainer container kubepods-burstable-pod32cfeb9b_3503_4f52_8e79_20b0e13b6daa.slice. May 9 23:53:36.449440 systemd[1]: kubepods-burstable-pod32cfeb9b_3503_4f52_8e79_20b0e13b6daa.slice: Consumed 6.999s CPU time. May 9 23:53:36.451758 kubelet[1739]: I0509 23:53:36.451719 1739 scope.go:117] "RemoveContainer" containerID="ccdaf6dfab271c7ac86059c6b54b5b0030ad276ab620058ee7b7858ec37ef921" May 9 23:53:36.457568 containerd[1444]: time="2025-05-09T23:53:36.457534349Z" level=info msg="RemoveContainer for \"ccdaf6dfab271c7ac86059c6b54b5b0030ad276ab620058ee7b7858ec37ef921\"" May 9 23:53:36.463732 containerd[1444]: time="2025-05-09T23:53:36.463691059Z" level=info msg="RemoveContainer for \"ccdaf6dfab271c7ac86059c6b54b5b0030ad276ab620058ee7b7858ec37ef921\" returns successfully" May 9 23:53:36.463981 kubelet[1739]: I0509 23:53:36.463946 1739 scope.go:117] "RemoveContainer" containerID="46bea6e91e54c10edbfc4fc4e6ddc64450a06b08f4de54b56e7333774a41115e" May 9 23:53:36.464875 containerd[1444]: time="2025-05-09T23:53:36.464851792Z" level=info msg="RemoveContainer for \"46bea6e91e54c10edbfc4fc4e6ddc64450a06b08f4de54b56e7333774a41115e\"" May 9 23:53:36.467202 containerd[1444]: time="2025-05-09T23:53:36.467169378Z" level=info msg="RemoveContainer for \"46bea6e91e54c10edbfc4fc4e6ddc64450a06b08f4de54b56e7333774a41115e\" returns successfully" May 9 23:53:36.467365 kubelet[1739]: I0509 23:53:36.467340 1739 scope.go:117] "RemoveContainer" containerID="64efc23cf04f8b1d0bb12ab80bf8176492433f4bdb6176e187b5e40cefbe1195" May 9 23:53:36.468181 containerd[1444]: time="2025-05-09T23:53:36.468164110Z" level=info msg="RemoveContainer for \"64efc23cf04f8b1d0bb12ab80bf8176492433f4bdb6176e187b5e40cefbe1195\"" May 9 23:53:36.470257 containerd[1444]: time="2025-05-09T23:53:36.470222413Z" level=info msg="RemoveContainer for \"64efc23cf04f8b1d0bb12ab80bf8176492433f4bdb6176e187b5e40cefbe1195\" returns successfully" May 9 23:53:36.470494 kubelet[1739]: I0509 23:53:36.470397 1739 scope.go:117] "RemoveContainer" containerID="40db122b193860429be1c7767f4d453265dbb2e8d936dc637527796dda4b8ae8" May 9 23:53:36.471467 containerd[1444]: time="2025-05-09T23:53:36.471448827Z" level=info msg="RemoveContainer for 
\"40db122b193860429be1c7767f4d453265dbb2e8d936dc637527796dda4b8ae8\"" May 9 23:53:36.473486 containerd[1444]: time="2025-05-09T23:53:36.473453409Z" level=info msg="RemoveContainer for \"40db122b193860429be1c7767f4d453265dbb2e8d936dc637527796dda4b8ae8\" returns successfully" May 9 23:53:36.473603 kubelet[1739]: I0509 23:53:36.473577 1739 scope.go:117] "RemoveContainer" containerID="937d63b9aa90e7c6bbca6c35e9ecbc8caad259485d2b4fa7a168455711f1e5ae" May 9 23:53:36.474365 containerd[1444]: time="2025-05-09T23:53:36.474346699Z" level=info msg="RemoveContainer for \"937d63b9aa90e7c6bbca6c35e9ecbc8caad259485d2b4fa7a168455711f1e5ae\"" May 9 23:53:36.476217 containerd[1444]: time="2025-05-09T23:53:36.476185480Z" level=info msg="RemoveContainer for \"937d63b9aa90e7c6bbca6c35e9ecbc8caad259485d2b4fa7a168455711f1e5ae\" returns successfully" May 9 23:53:36.476366 kubelet[1739]: I0509 23:53:36.476337 1739 scope.go:117] "RemoveContainer" containerID="ccdaf6dfab271c7ac86059c6b54b5b0030ad276ab620058ee7b7858ec37ef921" May 9 23:53:36.476520 containerd[1444]: time="2025-05-09T23:53:36.476492804Z" level=error msg="ContainerStatus for \"ccdaf6dfab271c7ac86059c6b54b5b0030ad276ab620058ee7b7858ec37ef921\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ccdaf6dfab271c7ac86059c6b54b5b0030ad276ab620058ee7b7858ec37ef921\": not found" May 9 23:53:36.476615 kubelet[1739]: E0509 23:53:36.476597 1739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ccdaf6dfab271c7ac86059c6b54b5b0030ad276ab620058ee7b7858ec37ef921\": not found" containerID="ccdaf6dfab271c7ac86059c6b54b5b0030ad276ab620058ee7b7858ec37ef921" May 9 23:53:36.476703 kubelet[1739]: I0509 23:53:36.476624 1739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ccdaf6dfab271c7ac86059c6b54b5b0030ad276ab620058ee7b7858ec37ef921"} err="failed to get container status \"ccdaf6dfab271c7ac86059c6b54b5b0030ad276ab620058ee7b7858ec37ef921\": rpc error: code = NotFound desc = an error occurred when try to find container \"ccdaf6dfab271c7ac86059c6b54b5b0030ad276ab620058ee7b7858ec37ef921\": not found" May 9 23:53:36.476733 kubelet[1739]: I0509 23:53:36.476703 1739 scope.go:117] "RemoveContainer" containerID="46bea6e91e54c10edbfc4fc4e6ddc64450a06b08f4de54b56e7333774a41115e" May 9 23:53:36.476844 containerd[1444]: time="2025-05-09T23:53:36.476815807Z" level=error msg="ContainerStatus for \"46bea6e91e54c10edbfc4fc4e6ddc64450a06b08f4de54b56e7333774a41115e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"46bea6e91e54c10edbfc4fc4e6ddc64450a06b08f4de54b56e7333774a41115e\": not found" May 9 23:53:36.476915 kubelet[1739]: E0509 23:53:36.476899 1739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"46bea6e91e54c10edbfc4fc4e6ddc64450a06b08f4de54b56e7333774a41115e\": not found" containerID="46bea6e91e54c10edbfc4fc4e6ddc64450a06b08f4de54b56e7333774a41115e" May 9 23:53:36.476948 kubelet[1739]: I0509 23:53:36.476918 1739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"46bea6e91e54c10edbfc4fc4e6ddc64450a06b08f4de54b56e7333774a41115e"} err="failed to get container status \"46bea6e91e54c10edbfc4fc4e6ddc64450a06b08f4de54b56e7333774a41115e\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"46bea6e91e54c10edbfc4fc4e6ddc64450a06b08f4de54b56e7333774a41115e\": not found" May 9 23:53:36.476948 kubelet[1739]: I0509 23:53:36.476929 1739 scope.go:117] "RemoveContainer" containerID="64efc23cf04f8b1d0bb12ab80bf8176492433f4bdb6176e187b5e40cefbe1195" May 9 23:53:36.477074 containerd[1444]: time="2025-05-09T23:53:36.477054450Z" level=error msg="ContainerStatus for \"64efc23cf04f8b1d0bb12ab80bf8176492433f4bdb6176e187b5e40cefbe1195\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"64efc23cf04f8b1d0bb12ab80bf8176492433f4bdb6176e187b5e40cefbe1195\": not found" May 9 23:53:36.477152 kubelet[1739]: E0509 23:53:36.477131 1739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"64efc23cf04f8b1d0bb12ab80bf8176492433f4bdb6176e187b5e40cefbe1195\": not found" containerID="64efc23cf04f8b1d0bb12ab80bf8176492433f4bdb6176e187b5e40cefbe1195" May 9 23:53:36.477180 kubelet[1739]: I0509 23:53:36.477157 1739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"64efc23cf04f8b1d0bb12ab80bf8176492433f4bdb6176e187b5e40cefbe1195"} err="failed to get container status \"64efc23cf04f8b1d0bb12ab80bf8176492433f4bdb6176e187b5e40cefbe1195\": rpc error: code = NotFound desc = an error occurred when try to find container \"64efc23cf04f8b1d0bb12ab80bf8176492433f4bdb6176e187b5e40cefbe1195\": not found" May 9 23:53:36.477180 kubelet[1739]: I0509 23:53:36.477169 1739 scope.go:117] "RemoveContainer" containerID="40db122b193860429be1c7767f4d453265dbb2e8d936dc637527796dda4b8ae8" May 9 23:53:36.477294 containerd[1444]: time="2025-05-09T23:53:36.477267572Z" level=error msg="ContainerStatus for \"40db122b193860429be1c7767f4d453265dbb2e8d936dc637527796dda4b8ae8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"40db122b193860429be1c7767f4d453265dbb2e8d936dc637527796dda4b8ae8\": not found" May 9 23:53:36.477385 kubelet[1739]: E0509 23:53:36.477366 1739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"40db122b193860429be1c7767f4d453265dbb2e8d936dc637527796dda4b8ae8\": not found" containerID="40db122b193860429be1c7767f4d453265dbb2e8d936dc637527796dda4b8ae8" May 9 23:53:36.477415 kubelet[1739]: I0509 23:53:36.477389 1739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"40db122b193860429be1c7767f4d453265dbb2e8d936dc637527796dda4b8ae8"} err="failed to get container status \"40db122b193860429be1c7767f4d453265dbb2e8d936dc637527796dda4b8ae8\": rpc error: code = NotFound desc = an error occurred when try to find container \"40db122b193860429be1c7767f4d453265dbb2e8d936dc637527796dda4b8ae8\": not found" May 9 23:53:36.477442 kubelet[1739]: I0509 23:53:36.477416 1739 scope.go:117] "RemoveContainer" containerID="937d63b9aa90e7c6bbca6c35e9ecbc8caad259485d2b4fa7a168455711f1e5ae" May 9 23:53:36.477679 containerd[1444]: time="2025-05-09T23:53:36.477648977Z" level=error msg="ContainerStatus for \"937d63b9aa90e7c6bbca6c35e9ecbc8caad259485d2b4fa7a168455711f1e5ae\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"937d63b9aa90e7c6bbca6c35e9ecbc8caad259485d2b4fa7a168455711f1e5ae\": not found" May 9 23:53:36.477795 kubelet[1739]: E0509 23:53:36.477776 1739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = an error occurred when try to find container \"937d63b9aa90e7c6bbca6c35e9ecbc8caad259485d2b4fa7a168455711f1e5ae\": not found" containerID="937d63b9aa90e7c6bbca6c35e9ecbc8caad259485d2b4fa7a168455711f1e5ae" May 9 23:53:36.477822 kubelet[1739]: I0509 23:53:36.477802 1739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"937d63b9aa90e7c6bbca6c35e9ecbc8caad259485d2b4fa7a168455711f1e5ae"} err="failed to get container status \"937d63b9aa90e7c6bbca6c35e9ecbc8caad259485d2b4fa7a168455711f1e5ae\": rpc error: code = NotFound desc = an error occurred when try to find container \"937d63b9aa90e7c6bbca6c35e9ecbc8caad259485d2b4fa7a168455711f1e5ae\": not found" May 9 23:53:37.211911 kubelet[1739]: E0509 23:53:37.211864 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:38.212408 kubelet[1739]: E0509 23:53:38.212363 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:38.317116 kubelet[1739]: I0509 23:53:38.317061 1739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32cfeb9b-3503-4f52-8e79-20b0e13b6daa" path="/var/lib/kubelet/pods/32cfeb9b-3503-4f52-8e79-20b0e13b6daa/volumes" May 9 23:53:39.212653 kubelet[1739]: E0509 23:53:39.212605 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:39.340080 kubelet[1739]: E0509 23:53:39.339987 1739 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 9 23:53:39.730139 kubelet[1739]: E0509 23:53:39.729988 1739 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="32cfeb9b-3503-4f52-8e79-20b0e13b6daa" containerName="cilium-agent" May 9 23:53:39.730139 kubelet[1739]: E0509 23:53:39.730015 1739 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="32cfeb9b-3503-4f52-8e79-20b0e13b6daa" containerName="mount-cgroup" May 9 23:53:39.730139 kubelet[1739]: E0509 23:53:39.730021 1739 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="32cfeb9b-3503-4f52-8e79-20b0e13b6daa" containerName="apply-sysctl-overwrites" May 9 23:53:39.730139 kubelet[1739]: E0509 23:53:39.730028 1739 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="32cfeb9b-3503-4f52-8e79-20b0e13b6daa" containerName="mount-bpf-fs" May 9 23:53:39.730139 kubelet[1739]: E0509 23:53:39.730033 1739 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="32cfeb9b-3503-4f52-8e79-20b0e13b6daa" containerName="clean-cilium-state" May 9 23:53:39.730139 kubelet[1739]: I0509 23:53:39.730052 1739 memory_manager.go:354] "RemoveStaleState removing state" podUID="32cfeb9b-3503-4f52-8e79-20b0e13b6daa" containerName="cilium-agent" May 9 23:53:39.734976 systemd[1]: Created slice kubepods-besteffort-pod7c2a11b1_20f6_492d_a5a7_2ef2d14363e1.slice - libcontainer container kubepods-besteffort-pod7c2a11b1_20f6_492d_a5a7_2ef2d14363e1.slice. 
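Editor's note: the ContainerStatus calls above fail with rpc code NotFound because the containers were already removed, and the kubelet simply logs the error and moves on, effectively treating NotFound as a successful delete. A minimal sketch of that pattern using the gRPC status codes (the removeIfPresent helper, remover interface, and fakeRuntime are illustrative, not kubelet API):

```go
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// remover stands in for any CRI-style client whose Remove call can race with
// a container that has already been deleted.
type remover interface {
	Remove(ctx context.Context, id string) error
}

// removeIfPresent treats a gRPC NotFound as success: if the container is
// already gone, the desired state (no container) has been reached anyway.
func removeIfPresent(ctx context.Context, r remover, id string) error {
	if err := r.Remove(ctx, id); err != nil && status.Code(err) != codes.NotFound {
		return fmt.Errorf("remove container %q: %w", id, err)
	}
	return nil
}

// fakeRuntime always answers NotFound, mimicking the responses in the log.
type fakeRuntime struct{}

func (fakeRuntime) Remove(ctx context.Context, id string) error {
	return status.Error(codes.NotFound, "container "+id+": not found")
}

func main() {
	err := removeIfPresent(context.Background(), fakeRuntime{}, "example-container-id")
	fmt.Println("error:", err) // prints "error: <nil>"; NotFound is swallowed
}
```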
May 9 23:53:39.737171 kubelet[1739]: W0509 23:53:39.737147 1739 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:10.0.0.76" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.76' and this object May 9 23:53:39.737256 kubelet[1739]: E0509 23:53:39.737189 1739 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:10.0.0.76\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '10.0.0.76' and this object" logger="UnhandledError" May 9 23:53:39.744750 kubelet[1739]: I0509 23:53:39.744663 1739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2m5r\" (UniqueName: \"kubernetes.io/projected/7c2a11b1-20f6-492d-a5a7-2ef2d14363e1-kube-api-access-z2m5r\") pod \"cilium-operator-5d85765b45-t28x2\" (UID: \"7c2a11b1-20f6-492d-a5a7-2ef2d14363e1\") " pod="kube-system/cilium-operator-5d85765b45-t28x2" May 9 23:53:39.744750 kubelet[1739]: I0509 23:53:39.744701 1739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7c2a11b1-20f6-492d-a5a7-2ef2d14363e1-cilium-config-path\") pod \"cilium-operator-5d85765b45-t28x2\" (UID: \"7c2a11b1-20f6-492d-a5a7-2ef2d14363e1\") " pod="kube-system/cilium-operator-5d85765b45-t28x2" May 9 23:53:39.745229 systemd[1]: Created slice kubepods-burstable-podc9d4e193_1b14_41d6_b3a2_2d016c71708a.slice - libcontainer container kubepods-burstable-podc9d4e193_1b14_41d6_b3a2_2d016c71708a.slice. 
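Editor's note: the reflector warning above is the node authorizer at work: node 10.0.0.76 is only allowed to read the cilium-config ConfigMap once a pod scheduled to it actually references that object, so the first watch attempt is rejected with "no relationship found" and succeeds on a later retry. A hedged client-go sketch of waiting out that condition (the namespace, name, and retry loop are illustrative; this is not kubelet code):

```go
package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// waitForConfigMap polls until the caller's credentials may read the ConfigMap.
// A Forbidden error is expected while the node authorizer has not yet linked
// this node to a pod that mounts the ConfigMap, so it is retried.
func waitForConfigMap(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	for {
		_, err := cs.CoreV1().ConfigMaps(ns).Get(ctx, name, metav1.GetOptions{})
		switch {
		case err == nil:
			return nil
		case apierrors.IsForbidden(err):
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(2 * time.Second):
			}
		default:
			return fmt.Errorf("get configmap %s/%s: %w", ns, name, err)
		}
	}
}

func main() {
	cfg, err := rest.InClusterConfig() // assumes this runs inside a pod on the node
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForConfigMap(context.Background(), cs, "kube-system", "cilium-config"); err != nil {
		panic(err)
	}
	fmt.Println("cilium-config is now readable with this node's credentials")
}
```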
May 9 23:53:39.845194 kubelet[1739]: I0509 23:53:39.845151 1739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c9d4e193-1b14-41d6-b3a2-2d016c71708a-bpf-maps\") pod \"cilium-g6lt8\" (UID: \"c9d4e193-1b14-41d6-b3a2-2d016c71708a\") " pod="kube-system/cilium-g6lt8" May 9 23:53:39.845194 kubelet[1739]: I0509 23:53:39.845191 1739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c9d4e193-1b14-41d6-b3a2-2d016c71708a-hostproc\") pod \"cilium-g6lt8\" (UID: \"c9d4e193-1b14-41d6-b3a2-2d016c71708a\") " pod="kube-system/cilium-g6lt8" May 9 23:53:39.845194 kubelet[1739]: I0509 23:53:39.845206 1739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c9d4e193-1b14-41d6-b3a2-2d016c71708a-cilium-run\") pod \"cilium-g6lt8\" (UID: \"c9d4e193-1b14-41d6-b3a2-2d016c71708a\") " pod="kube-system/cilium-g6lt8" May 9 23:53:39.845388 kubelet[1739]: I0509 23:53:39.845238 1739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c9d4e193-1b14-41d6-b3a2-2d016c71708a-xtables-lock\") pod \"cilium-g6lt8\" (UID: \"c9d4e193-1b14-41d6-b3a2-2d016c71708a\") " pod="kube-system/cilium-g6lt8" May 9 23:53:39.845388 kubelet[1739]: I0509 23:53:39.845255 1739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c9d4e193-1b14-41d6-b3a2-2d016c71708a-clustermesh-secrets\") pod \"cilium-g6lt8\" (UID: \"c9d4e193-1b14-41d6-b3a2-2d016c71708a\") " pod="kube-system/cilium-g6lt8" May 9 23:53:39.845388 kubelet[1739]: I0509 23:53:39.845272 1739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c9d4e193-1b14-41d6-b3a2-2d016c71708a-cilium-ipsec-secrets\") pod \"cilium-g6lt8\" (UID: \"c9d4e193-1b14-41d6-b3a2-2d016c71708a\") " pod="kube-system/cilium-g6lt8" May 9 23:53:39.845388 kubelet[1739]: I0509 23:53:39.845305 1739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c9d4e193-1b14-41d6-b3a2-2d016c71708a-hubble-tls\") pod \"cilium-g6lt8\" (UID: \"c9d4e193-1b14-41d6-b3a2-2d016c71708a\") " pod="kube-system/cilium-g6lt8" May 9 23:53:39.845388 kubelet[1739]: I0509 23:53:39.845322 1739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pffxd\" (UniqueName: \"kubernetes.io/projected/c9d4e193-1b14-41d6-b3a2-2d016c71708a-kube-api-access-pffxd\") pod \"cilium-g6lt8\" (UID: \"c9d4e193-1b14-41d6-b3a2-2d016c71708a\") " pod="kube-system/cilium-g6lt8" May 9 23:53:39.845388 kubelet[1739]: I0509 23:53:39.845338 1739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c9d4e193-1b14-41d6-b3a2-2d016c71708a-cni-path\") pod \"cilium-g6lt8\" (UID: \"c9d4e193-1b14-41d6-b3a2-2d016c71708a\") " pod="kube-system/cilium-g6lt8" May 9 23:53:39.845505 kubelet[1739]: I0509 23:53:39.845355 1739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/c9d4e193-1b14-41d6-b3a2-2d016c71708a-etc-cni-netd\") pod \"cilium-g6lt8\" (UID: \"c9d4e193-1b14-41d6-b3a2-2d016c71708a\") " pod="kube-system/cilium-g6lt8" May 9 23:53:39.845505 kubelet[1739]: I0509 23:53:39.845370 1739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c9d4e193-1b14-41d6-b3a2-2d016c71708a-cilium-config-path\") pod \"cilium-g6lt8\" (UID: \"c9d4e193-1b14-41d6-b3a2-2d016c71708a\") " pod="kube-system/cilium-g6lt8" May 9 23:53:39.845505 kubelet[1739]: I0509 23:53:39.845386 1739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c9d4e193-1b14-41d6-b3a2-2d016c71708a-host-proc-sys-kernel\") pod \"cilium-g6lt8\" (UID: \"c9d4e193-1b14-41d6-b3a2-2d016c71708a\") " pod="kube-system/cilium-g6lt8" May 9 23:53:39.845505 kubelet[1739]: I0509 23:53:39.845401 1739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c9d4e193-1b14-41d6-b3a2-2d016c71708a-cilium-cgroup\") pod \"cilium-g6lt8\" (UID: \"c9d4e193-1b14-41d6-b3a2-2d016c71708a\") " pod="kube-system/cilium-g6lt8" May 9 23:53:39.845505 kubelet[1739]: I0509 23:53:39.845415 1739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c9d4e193-1b14-41d6-b3a2-2d016c71708a-host-proc-sys-net\") pod \"cilium-g6lt8\" (UID: \"c9d4e193-1b14-41d6-b3a2-2d016c71708a\") " pod="kube-system/cilium-g6lt8" May 9 23:53:39.845505 kubelet[1739]: I0509 23:53:39.845430 1739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c9d4e193-1b14-41d6-b3a2-2d016c71708a-lib-modules\") pod \"cilium-g6lt8\" (UID: \"c9d4e193-1b14-41d6-b3a2-2d016c71708a\") " pod="kube-system/cilium-g6lt8" May 9 23:53:40.213725 kubelet[1739]: E0509 23:53:40.213676 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:40.937483 kubelet[1739]: E0509 23:53:40.937442 1739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:53:40.938002 containerd[1444]: time="2025-05-09T23:53:40.937966575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-t28x2,Uid:7c2a11b1-20f6-492d-a5a7-2ef2d14363e1,Namespace:kube-system,Attempt:0,}" May 9 23:53:40.954041 containerd[1444]: time="2025-05-09T23:53:40.953937635Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:53:40.954816 kubelet[1739]: E0509 23:53:40.954793 1739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:53:40.955694 containerd[1444]: time="2025-05-09T23:53:40.955372287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g6lt8,Uid:c9d4e193-1b14-41d6-b3a2-2d016c71708a,Namespace:kube-system,Attempt:0,}" May 9 23:53:40.957455 containerd[1444]: time="2025-05-09T23:53:40.954004875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:53:40.957589 containerd[1444]: time="2025-05-09T23:53:40.957549866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:53:40.957957 containerd[1444]: time="2025-05-09T23:53:40.957855309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:53:40.981045 systemd[1]: Started cri-containerd-a89e65aecb1e105429485e7fa9352c219da3ccfe5828f1029be99d2fe103fb03.scope - libcontainer container a89e65aecb1e105429485e7fa9352c219da3ccfe5828f1029be99d2fe103fb03. May 9 23:53:40.987674 containerd[1444]: time="2025-05-09T23:53:40.987579369Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:53:40.987674 containerd[1444]: time="2025-05-09T23:53:40.987631209Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:53:40.987674 containerd[1444]: time="2025-05-09T23:53:40.987646849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:53:40.988060 containerd[1444]: time="2025-05-09T23:53:40.987716170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:53:41.012032 systemd[1]: Started cri-containerd-c770a3b6f7b7a2c77ab4a5ae61d7dfb4de13e8fa69f1ca5928f7ead957a11969.scope - libcontainer container c770a3b6f7b7a2c77ab4a5ae61d7dfb4de13e8fa69f1ca5928f7ead957a11969. May 9 23:53:41.016476 containerd[1444]: time="2025-05-09T23:53:41.016418493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-t28x2,Uid:7c2a11b1-20f6-492d-a5a7-2ef2d14363e1,Namespace:kube-system,Attempt:0,} returns sandbox id \"a89e65aecb1e105429485e7fa9352c219da3ccfe5828f1029be99d2fe103fb03\"" May 9 23:53:41.017412 kubelet[1739]: E0509 23:53:41.017388 1739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:53:41.018786 containerd[1444]: time="2025-05-09T23:53:41.018548950Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 9 23:53:41.034335 containerd[1444]: time="2025-05-09T23:53:41.034289439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g6lt8,Uid:c9d4e193-1b14-41d6-b3a2-2d016c71708a,Namespace:kube-system,Attempt:0,} returns sandbox id \"c770a3b6f7b7a2c77ab4a5ae61d7dfb4de13e8fa69f1ca5928f7ead957a11969\"" May 9 23:53:41.035246 kubelet[1739]: E0509 23:53:41.035220 1739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:53:41.037801 containerd[1444]: time="2025-05-09T23:53:41.037768228Z" level=info msg="CreateContainer within sandbox \"c770a3b6f7b7a2c77ab4a5ae61d7dfb4de13e8fa69f1ca5928f7ead957a11969\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 9 23:53:41.053163 containerd[1444]: time="2025-05-09T23:53:41.052978152Z" level=info msg="CreateContainer within sandbox \"c770a3b6f7b7a2c77ab4a5ae61d7dfb4de13e8fa69f1ca5928f7ead957a11969\" for 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cc9fe301630888ec8b2fd6f6afb7de24d49a6443fb306a27f032bcdbafc560d1\"" May 9 23:53:41.054551 containerd[1444]: time="2025-05-09T23:53:41.053578437Z" level=info msg="StartContainer for \"cc9fe301630888ec8b2fd6f6afb7de24d49a6443fb306a27f032bcdbafc560d1\"" May 9 23:53:41.077018 systemd[1]: Started cri-containerd-cc9fe301630888ec8b2fd6f6afb7de24d49a6443fb306a27f032bcdbafc560d1.scope - libcontainer container cc9fe301630888ec8b2fd6f6afb7de24d49a6443fb306a27f032bcdbafc560d1. May 9 23:53:41.098476 containerd[1444]: time="2025-05-09T23:53:41.098439045Z" level=info msg="StartContainer for \"cc9fe301630888ec8b2fd6f6afb7de24d49a6443fb306a27f032bcdbafc560d1\" returns successfully" May 9 23:53:41.202244 systemd[1]: cri-containerd-cc9fe301630888ec8b2fd6f6afb7de24d49a6443fb306a27f032bcdbafc560d1.scope: Deactivated successfully. May 9 23:53:41.214936 kubelet[1739]: E0509 23:53:41.214810 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:41.229250 containerd[1444]: time="2025-05-09T23:53:41.229025554Z" level=info msg="shim disconnected" id=cc9fe301630888ec8b2fd6f6afb7de24d49a6443fb306a27f032bcdbafc560d1 namespace=k8s.io May 9 23:53:41.229250 containerd[1444]: time="2025-05-09T23:53:41.229081075Z" level=warning msg="cleaning up after shim disconnected" id=cc9fe301630888ec8b2fd6f6afb7de24d49a6443fb306a27f032bcdbafc560d1 namespace=k8s.io May 9 23:53:41.229250 containerd[1444]: time="2025-05-09T23:53:41.229090115Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 23:53:41.463362 kubelet[1739]: E0509 23:53:41.462935 1739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:53:41.464959 containerd[1444]: time="2025-05-09T23:53:41.464918367Z" level=info msg="CreateContainer within sandbox \"c770a3b6f7b7a2c77ab4a5ae61d7dfb4de13e8fa69f1ca5928f7ead957a11969\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 9 23:53:41.785676 containerd[1444]: time="2025-05-09T23:53:41.785567953Z" level=info msg="CreateContainer within sandbox \"c770a3b6f7b7a2c77ab4a5ae61d7dfb4de13e8fa69f1ca5928f7ead957a11969\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c1d60336b241bc6f8a38509340c32749981f2170af162fc8efcc12af4a5253c1\"" May 9 23:53:41.786191 containerd[1444]: time="2025-05-09T23:53:41.786163838Z" level=info msg="StartContainer for \"c1d60336b241bc6f8a38509340c32749981f2170af162fc8efcc12af4a5253c1\"" May 9 23:53:41.810043 systemd[1]: Started cri-containerd-c1d60336b241bc6f8a38509340c32749981f2170af162fc8efcc12af4a5253c1.scope - libcontainer container c1d60336b241bc6f8a38509340c32749981f2170af162fc8efcc12af4a5253c1. May 9 23:53:41.871781 systemd[1]: cri-containerd-c1d60336b241bc6f8a38509340c32749981f2170af162fc8efcc12af4a5253c1.scope: Deactivated successfully. 
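Editor's note: the mount-cgroup and apply-sysctl-overwrites containers created above are short-lived init steps, which is why each StartContainer is immediately followed by its scope being deactivated and the shim cleanup messages. As a rough illustration of what an apply-sysctl-overwrites step amounts to, here is a stdlib-only sketch that writes sysctl values through /proc/sys (the key/value pairs are examples only, not the values Cilium actually sets):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// writeSysctl writes a single value under /proc/sys, translating the dotted
// sysctl name (e.g. net.ipv4.ip_forward) into its procfs path.
func writeSysctl(name, value string) error {
	path := filepath.Join("/proc/sys", strings.ReplaceAll(name, ".", "/"))
	return os.WriteFile(path, []byte(value), 0o644)
}

func main() {
	// Example overrides only; the real init container derives its list from the
	// Cilium configuration. Running this requires root privileges.
	overrides := map[string]string{
		"net.ipv4.conf.all.rp_filter": "0",
		"net.ipv4.ip_forward":         "1",
	}
	for name, value := range overrides {
		if err := writeSysctl(name, value); err != nil {
			fmt.Fprintf(os.Stderr, "sysctl %s: %v\n", name, err)
		}
	}
}
```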
May 9 23:53:41.884001 containerd[1444]: time="2025-05-09T23:53:41.883949319Z" level=info msg="StartContainer for \"c1d60336b241bc6f8a38509340c32749981f2170af162fc8efcc12af4a5253c1\" returns successfully" May 9 23:53:41.977646 containerd[1444]: time="2025-05-09T23:53:41.977581006Z" level=info msg="shim disconnected" id=c1d60336b241bc6f8a38509340c32749981f2170af162fc8efcc12af4a5253c1 namespace=k8s.io May 9 23:53:41.978227 containerd[1444]: time="2025-05-09T23:53:41.977641086Z" level=warning msg="cleaning up after shim disconnected" id=c1d60336b241bc6f8a38509340c32749981f2170af162fc8efcc12af4a5253c1 namespace=k8s.io May 9 23:53:41.978227 containerd[1444]: time="2025-05-09T23:53:41.977674007Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 23:53:42.215898 kubelet[1739]: E0509 23:53:42.215853 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:42.404268 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2352704356.mount: Deactivated successfully. May 9 23:53:42.468335 kubelet[1739]: E0509 23:53:42.468218 1739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:53:42.470111 containerd[1444]: time="2025-05-09T23:53:42.470073720Z" level=info msg="CreateContainer within sandbox \"c770a3b6f7b7a2c77ab4a5ae61d7dfb4de13e8fa69f1ca5928f7ead957a11969\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 9 23:53:42.502092 containerd[1444]: time="2025-05-09T23:53:42.501986725Z" level=info msg="CreateContainer within sandbox \"c770a3b6f7b7a2c77ab4a5ae61d7dfb4de13e8fa69f1ca5928f7ead957a11969\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"33f2356e9d35ba9488c6318a0780783dd275fcb99de233e1d9905507bb92cfab\"" May 9 23:53:42.503971 containerd[1444]: time="2025-05-09T23:53:42.502562050Z" level=info msg="StartContainer for \"33f2356e9d35ba9488c6318a0780783dd275fcb99de233e1d9905507bb92cfab\"" May 9 23:53:42.531000 systemd[1]: Started cri-containerd-33f2356e9d35ba9488c6318a0780783dd275fcb99de233e1d9905507bb92cfab.scope - libcontainer container 33f2356e9d35ba9488c6318a0780783dd275fcb99de233e1d9905507bb92cfab. May 9 23:53:42.554996 containerd[1444]: time="2025-05-09T23:53:42.554954052Z" level=info msg="StartContainer for \"33f2356e9d35ba9488c6318a0780783dd275fcb99de233e1d9905507bb92cfab\" returns successfully" May 9 23:53:42.556386 systemd[1]: cri-containerd-33f2356e9d35ba9488c6318a0780783dd275fcb99de233e1d9905507bb92cfab.scope: Deactivated successfully. 
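Editor's note: the mount-bpf-fs container that just ran above exists to make sure the BPF filesystem is mounted at /sys/fs/bpf before the agent starts. A minimal sketch of the equivalent operation via golang.org/x/sys/unix (the idempotency check against /proc/mounts is added here for illustration; this is not the Cilium init script itself):

```go
package main

import (
	"fmt"
	"os"
	"strings"

	"golang.org/x/sys/unix"
)

// bpffsMounted reports whether /sys/fs/bpf already appears in /proc/mounts.
func bpffsMounted() (bool, error) {
	data, err := os.ReadFile("/proc/mounts")
	if err != nil {
		return false, err
	}
	for _, line := range strings.Split(string(data), "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 3 && fields[1] == "/sys/fs/bpf" && fields[2] == "bpf" {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	mounted, err := bpffsMounted()
	if err != nil {
		panic(err)
	}
	if mounted {
		fmt.Println("bpffs already mounted")
		return
	}
	// Equivalent of `mount -t bpf bpffs /sys/fs/bpf`; needs CAP_SYS_ADMIN.
	if err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, ""); err != nil {
		panic(err)
	}
	fmt.Println("mounted bpffs on /sys/fs/bpf")
}
```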
May 9 23:53:42.585071 containerd[1444]: time="2025-05-09T23:53:42.584990883Z" level=info msg="shim disconnected" id=33f2356e9d35ba9488c6318a0780783dd275fcb99de233e1d9905507bb92cfab namespace=k8s.io May 9 23:53:42.585071 containerd[1444]: time="2025-05-09T23:53:42.585065683Z" level=warning msg="cleaning up after shim disconnected" id=33f2356e9d35ba9488c6318a0780783dd275fcb99de233e1d9905507bb92cfab namespace=k8s.io May 9 23:53:42.585071 containerd[1444]: time="2025-05-09T23:53:42.585076084Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 23:53:42.868359 containerd[1444]: time="2025-05-09T23:53:42.867918336Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:53:42.871033 containerd[1444]: time="2025-05-09T23:53:42.870982719Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" May 9 23:53:42.872056 containerd[1444]: time="2025-05-09T23:53:42.872021367Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:53:42.873503 containerd[1444]: time="2025-05-09T23:53:42.873466258Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.854880707s" May 9 23:53:42.873503 containerd[1444]: time="2025-05-09T23:53:42.873501178Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 9 23:53:42.875848 containerd[1444]: time="2025-05-09T23:53:42.875716875Z" level=info msg="CreateContainer within sandbox \"a89e65aecb1e105429485e7fa9352c219da3ccfe5828f1029be99d2fe103fb03\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 9 23:53:42.887077 containerd[1444]: time="2025-05-09T23:53:42.887028842Z" level=info msg="CreateContainer within sandbox \"a89e65aecb1e105429485e7fa9352c219da3ccfe5828f1029be99d2fe103fb03\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"95806a7f84ecb3d37fc180ce7c481721842b78d699c8476e5113719000f17664\"" May 9 23:53:42.887524 containerd[1444]: time="2025-05-09T23:53:42.887492326Z" level=info msg="StartContainer for \"95806a7f84ecb3d37fc180ce7c481721842b78d699c8476e5113719000f17664\"" May 9 23:53:42.923994 systemd[1]: Started cri-containerd-95806a7f84ecb3d37fc180ce7c481721842b78d699c8476e5113719000f17664.scope - libcontainer container 95806a7f84ecb3d37fc180ce7c481721842b78d699c8476e5113719000f17664. May 9 23:53:42.946141 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-33f2356e9d35ba9488c6318a0780783dd275fcb99de233e1d9905507bb92cfab-rootfs.mount: Deactivated successfully. 
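Editor's note: the operator image above is pulled by digest (repo@sha256:...), which is why containerd reports an empty repo tag and only the digest identifies the content. A small stdlib sketch that splits such a pinned reference into repository, tag, and digest (a simplified parser; real reference parsing lives in the distribution and containerd libraries):

```go
package main

import (
	"fmt"
	"strings"
)

// splitRef breaks an image reference of the form repo[:tag][@digest] into its
// parts. This is a simplification: it ignores registries with ports and other
// corner cases that the real reference parsers handle.
func splitRef(ref string) (repo, tag, digest string) {
	if i := strings.Index(ref, "@"); i >= 0 {
		digest = ref[i+1:]
		ref = ref[:i]
	}
	if i := strings.LastIndex(ref, ":"); i > strings.LastIndex(ref, "/") {
		tag = ref[i+1:]
		ref = ref[:i]
	}
	return ref, tag, digest
}

func main() {
	repo, tag, digest := splitRef("quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e")
	fmt.Println("repo:  ", repo)   // quay.io/cilium/operator-generic
	fmt.Println("tag:   ", tag)    // v1.12.5 (informational; the digest pins the content)
	fmt.Println("digest:", digest) // sha256:b296eb7f...
}
```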
May 9 23:53:42.953127 containerd[1444]: time="2025-05-09T23:53:42.953017989Z" level=info msg="StartContainer for \"95806a7f84ecb3d37fc180ce7c481721842b78d699c8476e5113719000f17664\" returns successfully" May 9 23:53:43.216477 kubelet[1739]: E0509 23:53:43.216409 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:43.470875 kubelet[1739]: E0509 23:53:43.470747 1739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:53:43.472845 kubelet[1739]: E0509 23:53:43.472811 1739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:53:43.474524 containerd[1444]: time="2025-05-09T23:53:43.474487967Z" level=info msg="CreateContainer within sandbox \"c770a3b6f7b7a2c77ab4a5ae61d7dfb4de13e8fa69f1ca5928f7ead957a11969\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 9 23:53:43.487292 containerd[1444]: time="2025-05-09T23:53:43.487159218Z" level=info msg="CreateContainer within sandbox \"c770a3b6f7b7a2c77ab4a5ae61d7dfb4de13e8fa69f1ca5928f7ead957a11969\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d395b58635ea698a8a54074661fbea6df77b4d6762321407aa9eddf767b68483\"" May 9 23:53:43.487816 containerd[1444]: time="2025-05-09T23:53:43.487778462Z" level=info msg="StartContainer for \"d395b58635ea698a8a54074661fbea6df77b4d6762321407aa9eddf767b68483\"" May 9 23:53:43.508877 kubelet[1739]: I0509 23:53:43.505916 1739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-t28x2" podStartSLOduration=2.649619155 podStartE2EDuration="4.505896353s" podCreationTimestamp="2025-05-09 23:53:39 +0000 UTC" firstStartedPulling="2025-05-09 23:53:41.018266708 +0000 UTC m=+57.860706467" lastFinishedPulling="2025-05-09 23:53:42.874543906 +0000 UTC m=+59.716983665" observedRunningTime="2025-05-09 23:53:43.481521137 +0000 UTC m=+60.323960896" watchObservedRunningTime="2025-05-09 23:53:43.505896353 +0000 UTC m=+60.348336112" May 9 23:53:43.516045 systemd[1]: Started cri-containerd-d395b58635ea698a8a54074661fbea6df77b4d6762321407aa9eddf767b68483.scope - libcontainer container d395b58635ea698a8a54074661fbea6df77b4d6762321407aa9eddf767b68483. May 9 23:53:43.538189 systemd[1]: cri-containerd-d395b58635ea698a8a54074661fbea6df77b4d6762321407aa9eddf767b68483.scope: Deactivated successfully. 
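Editor's note: the startup-latency line above distinguishes podStartE2EDuration (pod creation to observed running) from podStartSLOduration (the same interval minus the image pull). The reported numbers are consistent with that reading: 4.505896353s end to end, minus the roughly 1.856s spent between firstStartedPulling and lastFinishedPulling, gives 2.649619155s. A small sketch of the arithmetic using the timestamps from the log (the formula is inferred from these values, not quoted from kubelet source):

```go
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-05-09 23:53:39 +0000 UTC")
	firstPull := mustParse("2025-05-09 23:53:41.018266708 +0000 UTC")
	lastPull := mustParse("2025-05-09 23:53:42.874543906 +0000 UTC")
	running := mustParse("2025-05-09 23:53:43.505896353 +0000 UTC")

	e2e := running.Sub(created)        // podStartE2EDuration
	pulling := lastPull.Sub(firstPull) // time spent pulling the image
	slo := e2e - pulling               // podStartSLOduration (pull time excluded)

	fmt.Println("E2E: ", e2e)     // 4.505896353s
	fmt.Println("pull:", pulling) // 1.856277198s
	fmt.Println("SLO: ", slo)     // 2.649619155s, matching the log line above
}
```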
May 9 23:53:43.541317 containerd[1444]: time="2025-05-09T23:53:43.541188807Z" level=info msg="StartContainer for \"d395b58635ea698a8a54074661fbea6df77b4d6762321407aa9eddf767b68483\" returns successfully" May 9 23:53:43.553816 containerd[1444]: time="2025-05-09T23:53:43.548747101Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9d4e193_1b14_41d6_b3a2_2d016c71708a.slice/cri-containerd-d395b58635ea698a8a54074661fbea6df77b4d6762321407aa9eddf767b68483.scope/memory.events\": no such file or directory" May 9 23:53:43.563397 containerd[1444]: time="2025-05-09T23:53:43.563318366Z" level=info msg="shim disconnected" id=d395b58635ea698a8a54074661fbea6df77b4d6762321407aa9eddf767b68483 namespace=k8s.io May 9 23:53:43.563397 containerd[1444]: time="2025-05-09T23:53:43.563388087Z" level=warning msg="cleaning up after shim disconnected" id=d395b58635ea698a8a54074661fbea6df77b4d6762321407aa9eddf767b68483 namespace=k8s.io May 9 23:53:43.563397 containerd[1444]: time="2025-05-09T23:53:43.563396887Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 23:53:43.945473 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d395b58635ea698a8a54074661fbea6df77b4d6762321407aa9eddf767b68483-rootfs.mount: Deactivated successfully. May 9 23:53:44.173094 kubelet[1739]: E0509 23:53:44.173044 1739 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:44.198232 containerd[1444]: time="2025-05-09T23:53:44.197846090Z" level=info msg="StopPodSandbox for \"457584492b89294632676c727ef16b1a11d245e9fb4b3aaa817672d9516bf634\"" May 9 23:53:44.198232 containerd[1444]: time="2025-05-09T23:53:44.197929051Z" level=info msg="TearDown network for sandbox \"457584492b89294632676c727ef16b1a11d245e9fb4b3aaa817672d9516bf634\" successfully" May 9 23:53:44.198232 containerd[1444]: time="2025-05-09T23:53:44.197939091Z" level=info msg="StopPodSandbox for \"457584492b89294632676c727ef16b1a11d245e9fb4b3aaa817672d9516bf634\" returns successfully" May 9 23:53:44.199206 containerd[1444]: time="2025-05-09T23:53:44.199174659Z" level=info msg="RemovePodSandbox for \"457584492b89294632676c727ef16b1a11d245e9fb4b3aaa817672d9516bf634\"" May 9 23:53:44.199271 containerd[1444]: time="2025-05-09T23:53:44.199210699Z" level=info msg="Forcibly stopping sandbox \"457584492b89294632676c727ef16b1a11d245e9fb4b3aaa817672d9516bf634\"" May 9 23:53:44.199295 containerd[1444]: time="2025-05-09T23:53:44.199275980Z" level=info msg="TearDown network for sandbox \"457584492b89294632676c727ef16b1a11d245e9fb4b3aaa817672d9516bf634\" successfully" May 9 23:53:44.213608 containerd[1444]: time="2025-05-09T23:53:44.213545516Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"457584492b89294632676c727ef16b1a11d245e9fb4b3aaa817672d9516bf634\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
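Editor's note: the inotify warning above spells out where the container's memory events live in the unified cgroup hierarchy: kubepods.slice, then a QoS slice, then a per-pod slice named after the pod UID with its dashes turned into underscores, then a cri-containerd-<id>.scope. A stdlib sketch that rebuilds that path from the pieces seen in the log (the layout is as observed here with the systemd cgroup driver; other drivers lay things out differently):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// memoryEventsPath reconstructs the cgroup v2 path containerd was watching,
// following the systemd-driver layout seen in the log:
// kubepods.slice/kubepods-<qos>.slice/kubepods-<qos>-pod<uid>.slice/cri-containerd-<cid>.scope
func memoryEventsPath(qos, podUID, containerID string) string {
	// '-' is a hierarchy separator in slice names, so the UID's dashes become underscores.
	uid := strings.ReplaceAll(podUID, "-", "_")
	return filepath.Join(
		"/sys/fs/cgroup",
		"kubepods.slice",
		fmt.Sprintf("kubepods-%s.slice", qos),
		fmt.Sprintf("kubepods-%s-pod%s.slice", qos, uid),
		fmt.Sprintf("cri-containerd-%s.scope", containerID),
		"memory.events",
	)
}

func main() {
	fmt.Println(memoryEventsPath(
		"burstable",
		"c9d4e193-1b14-41d6-b3a2-2d016c71708a",
		"d395b58635ea698a8a54074661fbea6df77b4d6762321407aa9eddf767b68483",
	))
	// matches the path in the "failed to add inotify watch" warning above
}
```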
May 9 23:53:44.213755 containerd[1444]: time="2025-05-09T23:53:44.213634957Z" level=info msg="RemovePodSandbox \"457584492b89294632676c727ef16b1a11d245e9fb4b3aaa817672d9516bf634\" returns successfully" May 9 23:53:44.216748 kubelet[1739]: E0509 23:53:44.216712 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:44.340872 kubelet[1739]: E0509 23:53:44.340783 1739 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 9 23:53:44.479358 kubelet[1739]: E0509 23:53:44.478992 1739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:53:44.479358 kubelet[1739]: E0509 23:53:44.479094 1739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:53:44.481152 containerd[1444]: time="2025-05-09T23:53:44.481114322Z" level=info msg="CreateContainer within sandbox \"c770a3b6f7b7a2c77ab4a5ae61d7dfb4de13e8fa69f1ca5928f7ead957a11969\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 9 23:53:44.496303 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1403338284.mount: Deactivated successfully. May 9 23:53:44.498359 containerd[1444]: time="2025-05-09T23:53:44.498294118Z" level=info msg="CreateContainer within sandbox \"c770a3b6f7b7a2c77ab4a5ae61d7dfb4de13e8fa69f1ca5928f7ead957a11969\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"89208ad0ed3b8a9fd1e10eaf028df03292a8020ba6f45f9ee9d121e4acabb07e\"" May 9 23:53:44.499374 containerd[1444]: time="2025-05-09T23:53:44.499291205Z" level=info msg="StartContainer for \"89208ad0ed3b8a9fd1e10eaf028df03292a8020ba6f45f9ee9d121e4acabb07e\"" May 9 23:53:44.545044 systemd[1]: Started cri-containerd-89208ad0ed3b8a9fd1e10eaf028df03292a8020ba6f45f9ee9d121e4acabb07e.scope - libcontainer container 89208ad0ed3b8a9fd1e10eaf028df03292a8020ba6f45f9ee9d121e4acabb07e. May 9 23:53:44.572176 containerd[1444]: time="2025-05-09T23:53:44.572130016Z" level=info msg="StartContainer for \"89208ad0ed3b8a9fd1e10eaf028df03292a8020ba6f45f9ee9d121e4acabb07e\" returns successfully" May 9 23:53:44.879870 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) May 9 23:53:44.945555 systemd[1]: run-containerd-runc-k8s.io-89208ad0ed3b8a9fd1e10eaf028df03292a8020ba6f45f9ee9d121e4acabb07e-runc.wOJwBP.mount: Deactivated successfully. 
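Editor's note: the recurring "Nameserver limits exceeded" warnings mean the host's resolv.conf lists more nameservers than the resolver will use, so only the first three (1.1.1.1 1.0.0.1 8.8.8.8) are applied to pods. A stdlib sketch of that clipping, assuming the usual limit of three nameservers (the constant below is an assumption written into the example, not read from kubelet configuration):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxNameservers mirrors the classic glibc resolver limit of three entries;
// extras beyond it are dropped with a warning, as in the log above.
const maxNameservers = 3

// clipNameservers returns at most maxNameservers "nameserver" entries from a
// resolv.conf, in order, plus how many entries were dropped.
func clipNameservers(f *os.File) (kept []string, dropped int) {
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			if len(kept) < maxNameservers {
				kept = append(kept, fields[1])
			} else {
				dropped++
			}
		}
	}
	return kept, dropped
}

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	kept, dropped := clipNameservers(f)
	fmt.Printf("applied nameserver line: %s (%d omitted)\n", strings.Join(kept, " "), dropped)
}
```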
May 9 23:53:45.217171 kubelet[1739]: E0509 23:53:45.217121 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:45.446523 kubelet[1739]: I0509 23:53:45.446464 1739 setters.go:600] "Node became not ready" node="10.0.0.76" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-09T23:53:45Z","lastTransitionTime":"2025-05-09T23:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 9 23:53:45.483717 kubelet[1739]: E0509 23:53:45.483571 1739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:53:45.499139 kubelet[1739]: I0509 23:53:45.498900 1739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-g6lt8" podStartSLOduration=6.498884862 podStartE2EDuration="6.498884862s" podCreationTimestamp="2025-05-09 23:53:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 23:53:45.49868046 +0000 UTC m=+62.341120179" watchObservedRunningTime="2025-05-09 23:53:45.498884862 +0000 UTC m=+62.341324621" May 9 23:53:46.223202 kubelet[1739]: E0509 23:53:46.217254 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:46.956292 kubelet[1739]: E0509 23:53:46.956228 1739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:53:47.220976 kubelet[1739]: E0509 23:53:47.220860 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:47.761417 systemd-networkd[1387]: lxc_health: Link UP May 9 23:53:47.770100 systemd-networkd[1387]: lxc_health: Gained carrier May 9 23:53:48.221821 kubelet[1739]: E0509 23:53:48.221772 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:48.956693 kubelet[1739]: E0509 23:53:48.956639 1739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:53:49.222544 kubelet[1739]: E0509 23:53:49.222412 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:49.492927 kubelet[1739]: E0509 23:53:49.491153 1739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:53:49.544982 systemd-networkd[1387]: lxc_health: Gained IPv6LL May 9 23:53:50.223139 kubelet[1739]: E0509 23:53:50.223077 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:50.492476 kubelet[1739]: E0509 23:53:50.492359 1739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:53:51.224048 kubelet[1739]: E0509 23:53:51.224007 1739 file_linux.go:61] "Unable to 
read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:52.224872 kubelet[1739]: E0509 23:53:52.224811 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:53.225405 kubelet[1739]: E0509 23:53:53.225355 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:53:54.225760 kubelet[1739]: E0509 23:53:54.225711 1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
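Editor's note: most of the kubelet lines in this log share the klog header format: a severity letter, MMDD date, wall-clock time, the emitting PID, and the source file:line, e.g. E0509 23:53:54.225711 1739 file_linux.go:61]. A stdlib sketch that pulls those fields out of such a line (a convenience for reading logs like the ones above, not part of any Kubernetes tooling):

```go
package main

import (
	"fmt"
	"regexp"
)

// klogHeader matches "<I|W|E|F><MMDD> <HH:MM:SS.micros> <pid> <file>:<line>] <message>".
var klogHeader = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

func main() {
	line := `E0509 23:53:54.225711    1739 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"`
	m := klogHeader.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog line")
		return
	}
	fmt.Println("severity:", m[1]) // E = error
	fmt.Println("date:    ", m[2]) // MMDD, here 0509
	fmt.Println("time:    ", m[3])
	fmt.Println("pid:     ", m[4]) // 1739, matching kubelet[1739] in the journal prefix
	fmt.Println("source:  ", m[5]+":"+m[6])
	fmt.Println("message: ", m[7])
}
```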