May 9 23:44:43.938467 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] May 9 23:44:43.938490 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Fri May 9 22:24:49 -00 2025 May 9 23:44:43.938500 kernel: KASLR enabled May 9 23:44:43.938505 kernel: efi: EFI v2.7 by EDK II May 9 23:44:43.938511 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98 May 9 23:44:43.938517 kernel: random: crng init done May 9 23:44:43.938524 kernel: secureboot: Secure boot disabled May 9 23:44:43.938530 kernel: ACPI: Early table checksum verification disabled May 9 23:44:43.938536 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS ) May 9 23:44:43.938543 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013) May 9 23:44:43.938549 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) May 9 23:44:43.938555 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 9 23:44:43.938561 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) May 9 23:44:43.938567 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) May 9 23:44:43.938575 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 9 23:44:43.938582 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 9 23:44:43.938589 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 9 23:44:43.938595 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) May 9 23:44:43.938601 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 9 23:44:43.938607 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 May 9 23:44:43.938613 kernel: NUMA: Failed to initialise from firmware May 9 23:44:43.938619 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] May 9 23:44:43.938626 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff] May 9 23:44:43.938632 kernel: Zone ranges: May 9 23:44:43.938638 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] May 9 23:44:43.938645 kernel: DMA32 empty May 9 23:44:43.938651 kernel: Normal empty May 9 23:44:43.938657 kernel: Movable zone start for each node May 9 23:44:43.938663 kernel: Early memory node ranges May 9 23:44:43.938669 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] May 9 23:44:43.938675 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] May 9 23:44:43.938682 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] May 9 23:44:43.938688 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] May 9 23:44:43.938695 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] May 9 23:44:43.938701 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] May 9 23:44:43.938707 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] May 9 23:44:43.938713 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] May 9 23:44:43.938721 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges May 9 23:44:43.938727 kernel: psci: probing for conduit method from ACPI. May 9 23:44:43.938733 kernel: psci: PSCIv1.1 detected in firmware. 
May 9 23:44:43.938743 kernel: psci: Using standard PSCI v0.2 function IDs May 9 23:44:43.938749 kernel: psci: Trusted OS migration not required May 9 23:44:43.938756 kernel: psci: SMC Calling Convention v1.1 May 9 23:44:43.938764 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) May 9 23:44:43.938771 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 May 9 23:44:43.938778 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 May 9 23:44:43.938787 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 May 9 23:44:43.938793 kernel: Detected PIPT I-cache on CPU0 May 9 23:44:43.938800 kernel: CPU features: detected: GIC system register CPU interface May 9 23:44:43.938807 kernel: CPU features: detected: Hardware dirty bit management May 9 23:44:43.938813 kernel: CPU features: detected: Spectre-v4 May 9 23:44:43.938820 kernel: CPU features: detected: Spectre-BHB May 9 23:44:43.938827 kernel: CPU features: kernel page table isolation forced ON by KASLR May 9 23:44:43.938835 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 9 23:44:43.938842 kernel: CPU features: detected: ARM erratum 1418040 May 9 23:44:43.938848 kernel: CPU features: detected: SSBS not fully self-synchronizing May 9 23:44:43.938855 kernel: alternatives: applying boot alternatives May 9 23:44:43.938862 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9a99b6d651f8aeb5d7bfd4370bc36449b7e5138d2f42e40e0aede009df00f5a4 May 9 23:44:43.938869 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 9 23:44:43.938876 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 9 23:44:43.938882 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 9 23:44:43.938889 kernel: Fallback order for Node 0: 0 May 9 23:44:43.938896 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 May 9 23:44:43.938902 kernel: Policy zone: DMA May 9 23:44:43.938911 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 9 23:44:43.938917 kernel: software IO TLB: area num 4. May 9 23:44:43.938924 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) May 9 23:44:43.938931 kernel: Memory: 2386256K/2572288K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39744K init, 897K bss, 186032K reserved, 0K cma-reserved) May 9 23:44:43.938938 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 9 23:44:43.938944 kernel: rcu: Preemptible hierarchical RCU implementation. May 9 23:44:43.938976 kernel: rcu: RCU event tracing is enabled. May 9 23:44:43.938985 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 9 23:44:43.938992 kernel: Trampoline variant of Tasks RCU enabled. May 9 23:44:43.938999 kernel: Tracing variant of Tasks RCU enabled. May 9 23:44:43.939005 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 9 23:44:43.939012 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 9 23:44:43.939021 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 9 23:44:43.939028 kernel: GICv3: 256 SPIs implemented May 9 23:44:43.939034 kernel: GICv3: 0 Extended SPIs implemented May 9 23:44:43.939041 kernel: Root IRQ handler: gic_handle_irq May 9 23:44:43.939047 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI May 9 23:44:43.939054 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 May 9 23:44:43.939060 kernel: ITS [mem 0x08080000-0x0809ffff] May 9 23:44:43.939067 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) May 9 23:44:43.939113 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) May 9 23:44:43.939120 kernel: GICv3: using LPI property table @0x00000000400f0000 May 9 23:44:43.939127 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 May 9 23:44:43.939136 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 9 23:44:43.939142 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 9 23:44:43.939149 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 9 23:44:43.939156 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 9 23:44:43.939162 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 9 23:44:43.939169 kernel: arm-pv: using stolen time PV May 9 23:44:43.939176 kernel: Console: colour dummy device 80x25 May 9 23:44:43.939183 kernel: ACPI: Core revision 20230628 May 9 23:44:43.939190 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 9 23:44:43.939197 kernel: pid_max: default: 32768 minimum: 301 May 9 23:44:43.939205 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 9 23:44:43.939212 kernel: landlock: Up and running. May 9 23:44:43.939218 kernel: SELinux: Initializing. May 9 23:44:43.939225 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 9 23:44:43.939232 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 9 23:44:43.939239 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3) May 9 23:44:43.939246 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 9 23:44:43.939253 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 9 23:44:43.939260 kernel: rcu: Hierarchical SRCU implementation. May 9 23:44:43.939268 kernel: rcu: Max phase no-delay instances is 400. May 9 23:44:43.939275 kernel: Platform MSI: ITS@0x8080000 domain created May 9 23:44:43.939282 kernel: PCI/MSI: ITS@0x8080000 domain created May 9 23:44:43.939289 kernel: Remapping and enabling EFI services. May 9 23:44:43.939295 kernel: smp: Bringing up secondary CPUs ... 
May 9 23:44:43.939302 kernel: Detected PIPT I-cache on CPU1 May 9 23:44:43.939309 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 May 9 23:44:43.939316 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 May 9 23:44:43.939331 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 9 23:44:43.939339 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 9 23:44:43.939348 kernel: Detected PIPT I-cache on CPU2 May 9 23:44:43.939355 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 May 9 23:44:43.939367 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 May 9 23:44:43.939375 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 9 23:44:43.939382 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] May 9 23:44:43.939389 kernel: Detected PIPT I-cache on CPU3 May 9 23:44:43.939396 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 May 9 23:44:43.939403 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 May 9 23:44:43.939410 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 9 23:44:43.939417 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] May 9 23:44:43.939426 kernel: smp: Brought up 1 node, 4 CPUs May 9 23:44:43.939433 kernel: SMP: Total of 4 processors activated. May 9 23:44:43.939440 kernel: CPU features: detected: 32-bit EL0 Support May 9 23:44:43.939448 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 9 23:44:43.939455 kernel: CPU features: detected: Common not Private translations May 9 23:44:43.939462 kernel: CPU features: detected: CRC32 instructions May 9 23:44:43.939469 kernel: CPU features: detected: Enhanced Virtualization Traps May 9 23:44:43.939477 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 9 23:44:43.939485 kernel: CPU features: detected: LSE atomic instructions May 9 23:44:43.939492 kernel: CPU features: detected: Privileged Access Never May 9 23:44:43.939499 kernel: CPU features: detected: RAS Extension Support May 9 23:44:43.939506 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 9 23:44:43.939513 kernel: CPU: All CPU(s) started at EL1 May 9 23:44:43.939520 kernel: alternatives: applying system-wide alternatives May 9 23:44:43.939527 kernel: devtmpfs: initialized May 9 23:44:43.939535 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 9 23:44:43.939543 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 9 23:44:43.939550 kernel: pinctrl core: initialized pinctrl subsystem May 9 23:44:43.939557 kernel: SMBIOS 3.0.0 present. 
May 9 23:44:43.939565 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 9 23:44:43.939572 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 9 23:44:43.939579 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 9 23:44:43.939586 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 9 23:44:43.939593 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 9 23:44:43.939601 kernel: audit: initializing netlink subsys (disabled)
May 9 23:44:43.939609 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
May 9 23:44:43.939616 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 9 23:44:43.939624 kernel: cpuidle: using governor menu
May 9 23:44:43.939631 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 9 23:44:43.939638 kernel: ASID allocator initialised with 32768 entries
May 9 23:44:43.939645 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 9 23:44:43.939652 kernel: Serial: AMBA PL011 UART driver
May 9 23:44:43.939659 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 9 23:44:43.939667 kernel: Modules: 0 pages in range for non-PLT usage
May 9 23:44:43.939675 kernel: Modules: 508944 pages in range for PLT usage
May 9 23:44:43.939682 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 9 23:44:43.939689 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 9 23:44:43.939696 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 9 23:44:43.939704 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 9 23:44:43.939711 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 9 23:44:43.939718 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 9 23:44:43.939725 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 9 23:44:43.939732 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 9 23:44:43.939741 kernel: ACPI: Added _OSI(Module Device)
May 9 23:44:43.939748 kernel: ACPI: Added _OSI(Processor Device)
May 9 23:44:43.939755 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 9 23:44:43.939762 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 9 23:44:43.939769 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 9 23:44:43.939776 kernel: ACPI: Interpreter enabled
May 9 23:44:43.939783 kernel: ACPI: Using GIC for interrupt routing
May 9 23:44:43.939790 kernel: ACPI: MCFG table detected, 1 entries
May 9 23:44:43.939797 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 9 23:44:43.939806 kernel: printk: console [ttyAMA0] enabled
May 9 23:44:43.939813 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 9 23:44:43.939972 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 9 23:44:43.940052 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 9 23:44:43.940116 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 9 23:44:43.940177 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 9 23:44:43.940238 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 9 23:44:43.940251 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 9 23:44:43.940258 kernel: PCI host bridge to bus 0000:00
May 9 23:44:43.940332 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 9 23:44:43.940394 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 9 23:44:43.940450 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 9 23:44:43.940508 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 9 23:44:43.940591 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 9 23:44:43.940670 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 9 23:44:43.940736 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 9 23:44:43.940801 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 9 23:44:43.940867 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 9 23:44:43.940932 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 9 23:44:43.941039 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 9 23:44:43.941107 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 9 23:44:43.941167 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 9 23:44:43.941222 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 9 23:44:43.941279 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 9 23:44:43.941288 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 9 23:44:43.941295 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 9 23:44:43.941303 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 9 23:44:43.941310 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 9 23:44:43.941317 kernel: iommu: Default domain type: Translated
May 9 23:44:43.941335 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 9 23:44:43.941342 kernel: efivars: Registered efivars operations
May 9 23:44:43.941350 kernel: vgaarb: loaded
May 9 23:44:43.941357 kernel: clocksource: Switched to clocksource arch_sys_counter
May 9 23:44:43.941364 kernel: VFS: Disk quotas dquot_6.6.0
May 9 23:44:43.941371 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 9 23:44:43.941378 kernel: pnp: PnP ACPI init
May 9 23:44:43.941464 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 9 23:44:43.941491 kernel: pnp: PnP ACPI: found 1 devices
May 9 23:44:43.941498 kernel: NET: Registered PF_INET protocol family
May 9 23:44:43.941506 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 9 23:44:43.941514 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 9 23:44:43.941521 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 9 23:44:43.941528 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 9 23:44:43.941536 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 9 23:44:43.941543 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 9 23:44:43.941550 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 9 23:44:43.941559 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 9 23:44:43.941567 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 9 23:44:43.941574 kernel: PCI: CLS 0 bytes, default 64
May 9 23:44:43.941581 kernel: kvm [1]: HYP mode not available
May 9 23:44:43.941589 kernel: Initialise system trusted keyrings
May 9 23:44:43.941596 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 9 23:44:43.941604 kernel: Key type asymmetric registered
May 9 23:44:43.941612 kernel: Asymmetric key parser 'x509' registered
May 9 23:44:43.941619 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 9 23:44:43.941629 kernel: io scheduler mq-deadline registered
May 9 23:44:43.941636 kernel: io scheduler kyber registered
May 9 23:44:43.941643 kernel: io scheduler bfq registered
May 9 23:44:43.941650 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 9 23:44:43.941658 kernel: ACPI: button: Power Button [PWRB]
May 9 23:44:43.941666 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 9 23:44:43.941733 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 9 23:44:43.941743 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 9 23:44:43.941750 kernel: thunder_xcv, ver 1.0
May 9 23:44:43.941759 kernel: thunder_bgx, ver 1.0
May 9 23:44:43.941766 kernel: nicpf, ver 1.0
May 9 23:44:43.941773 kernel: nicvf, ver 1.0
May 9 23:44:43.941843 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 9 23:44:43.941904 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-09T23:44:43 UTC (1746834283)
May 9 23:44:43.941913 kernel: hid: raw HID events driver (C) Jiri Kosina
May 9 23:44:43.941921 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 9 23:44:43.941928 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 9 23:44:43.941937 kernel: watchdog: Hard watchdog permanently disabled
May 9 23:44:43.941945 kernel: NET: Registered PF_INET6 protocol family
May 9 23:44:43.941985 kernel: Segment Routing with IPv6
May 9 23:44:43.941994 kernel: In-situ OAM (IOAM) with IPv6
May 9 23:44:43.942001 kernel: NET: Registered PF_PACKET protocol family
May 9 23:44:43.942009 kernel: Key type dns_resolver registered
May 9 23:44:43.942017 kernel: registered taskstats version 1
May 9 23:44:43.942024 kernel: Loading compiled-in X.509 certificates
May 9 23:44:43.942031 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: ce481d22c53070871912748985d4044dfd149966'
May 9 23:44:43.942041 kernel: Key type .fscrypt registered
May 9 23:44:43.942049 kernel: Key type fscrypt-provisioning registered
May 9 23:44:43.942056 kernel: ima: No TPM chip found, activating TPM-bypass!
May 9 23:44:43.942063 kernel: ima: Allocated hash algorithm: sha1
May 9 23:44:43.942070 kernel: ima: No architecture policies found
May 9 23:44:43.942077 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 9 23:44:43.942084 kernel: clk: Disabling unused clocks
May 9 23:44:43.942091 kernel: Freeing unused kernel memory: 39744K
May 9 23:44:43.942098 kernel: Run /init as init process
May 9 23:44:43.942107 kernel: with arguments:
May 9 23:44:43.942114 kernel: /init
May 9 23:44:43.942122 kernel: with environment:
May 9 23:44:43.942129 kernel: HOME=/
May 9 23:44:43.942136 kernel: TERM=linux
May 9 23:44:43.942143 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 9 23:44:43.942152 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 9 23:44:43.942162 systemd[1]: Detected virtualization kvm.
May 9 23:44:43.942171 systemd[1]: Detected architecture arm64. May 9 23:44:43.942192 systemd[1]: Running in initrd. May 9 23:44:43.942200 systemd[1]: No hostname configured, using default hostname. May 9 23:44:43.942207 systemd[1]: Hostname set to . May 9 23:44:43.942215 systemd[1]: Initializing machine ID from VM UUID. May 9 23:44:43.942223 systemd[1]: Queued start job for default target initrd.target. May 9 23:44:43.942230 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 9 23:44:43.942238 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 9 23:44:43.942248 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 9 23:44:43.942256 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 9 23:44:43.942264 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 9 23:44:43.942272 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 9 23:44:43.942281 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 9 23:44:43.942289 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 9 23:44:43.942298 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 9 23:44:43.942306 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 9 23:44:43.942314 systemd[1]: Reached target paths.target - Path Units. May 9 23:44:43.942327 systemd[1]: Reached target slices.target - Slice Units. May 9 23:44:43.942336 systemd[1]: Reached target swap.target - Swaps. May 9 23:44:43.942344 systemd[1]: Reached target timers.target - Timer Units. May 9 23:44:43.942352 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 9 23:44:43.942360 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 9 23:44:43.942367 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 9 23:44:43.942376 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 9 23:44:43.942385 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 9 23:44:43.942392 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 9 23:44:43.942400 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 9 23:44:43.942408 systemd[1]: Reached target sockets.target - Socket Units. May 9 23:44:43.942415 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 9 23:44:43.942423 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 9 23:44:43.942431 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 9 23:44:43.942439 systemd[1]: Starting systemd-fsck-usr.service... May 9 23:44:43.942448 systemd[1]: Starting systemd-journald.service - Journal Service... May 9 23:44:43.942456 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 9 23:44:43.942464 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 9 23:44:43.942471 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 9 23:44:43.942479 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
May 9 23:44:43.942487 systemd[1]: Finished systemd-fsck-usr.service.
May 9 23:44:43.942496 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 9 23:44:43.942504 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 9 23:44:43.942533 systemd-journald[238]: Collecting audit messages is disabled.
May 9 23:44:43.942554 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 9 23:44:43.942563 systemd-journald[238]: Journal started
May 9 23:44:43.942582 systemd-journald[238]: Runtime Journal (/run/log/journal/76b0e68dc5e846f5a6b51048afea445b) is 5.9M, max 47.3M, 41.4M free.
May 9 23:44:43.932185 systemd-modules-load[239]: Inserted module 'overlay'
May 9 23:44:43.945652 systemd[1]: Started systemd-journald.service - Journal Service.
May 9 23:44:43.948979 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 9 23:44:43.949563 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 23:44:43.952514 systemd-modules-load[239]: Inserted module 'br_netfilter'
May 9 23:44:43.953444 kernel: Bridge firewalling registered
May 9 23:44:43.953417 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 23:44:43.956142 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 9 23:44:43.957481 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 9 23:44:43.959535 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 23:44:43.963127 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 9 23:44:43.967916 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 23:44:43.972948 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 9 23:44:43.975516 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 9 23:44:43.980654 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 23:44:43.983785 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 9 23:44:43.996685 dracut-cmdline[278]: dracut-dracut-053
May 9 23:44:43.999440 dracut-cmdline[278]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9a99b6d651f8aeb5d7bfd4370bc36449b7e5138d2f42e40e0aede009df00f5a4
May 9 23:44:44.014685 systemd-resolved[272]: Positive Trust Anchors:
May 9 23:44:44.014834 systemd-resolved[272]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 9 23:44:44.014866 systemd-resolved[272]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 9 23:44:44.021134 systemd-resolved[272]: Defaulting to hostname 'linux'.
May 9 23:44:44.022350 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 9 23:44:44.025019 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 9 23:44:44.071988 kernel: SCSI subsystem initialized
May 9 23:44:44.077977 kernel: Loading iSCSI transport class v2.0-870.
May 9 23:44:44.085982 kernel: iscsi: registered transport (tcp)
May 9 23:44:44.099982 kernel: iscsi: registered transport (qla4xxx)
May 9 23:44:44.100006 kernel: QLogic iSCSI HBA Driver
May 9 23:44:44.144660 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 9 23:44:44.161148 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 9 23:44:44.177977 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 9 23:44:44.178045 kernel: device-mapper: uevent: version 1.0.3
May 9 23:44:44.178057 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 9 23:44:44.228987 kernel: raid6: neonx8 gen() 15714 MB/s
May 9 23:44:44.245982 kernel: raid6: neonx4 gen() 14570 MB/s
May 9 23:44:44.263004 kernel: raid6: neonx2 gen() 12353 MB/s
May 9 23:44:44.279973 kernel: raid6: neonx1 gen() 9786 MB/s
May 9 23:44:44.296973 kernel: raid6: int64x8 gen() 6662 MB/s
May 9 23:44:44.313988 kernel: raid6: int64x4 gen() 6925 MB/s
May 9 23:44:44.330972 kernel: raid6: int64x2 gen() 5994 MB/s
May 9 23:44:44.348104 kernel: raid6: int64x1 gen() 5046 MB/s
May 9 23:44:44.348135 kernel: raid6: using algorithm neonx8 gen() 15714 MB/s
May 9 23:44:44.366070 kernel: raid6: .... xor() 11927 MB/s, rmw enabled
May 9 23:44:44.366089 kernel: raid6: using neon recovery algorithm
May 9 23:44:44.373343 kernel: xor: measuring software checksum speed
May 9 23:44:44.373369 kernel: 8regs : 19773 MB/sec
May 9 23:44:44.374027 kernel: 32regs : 19641 MB/sec
May 9 23:44:44.375315 kernel: arm64_neon : 26989 MB/sec
May 9 23:44:44.375340 kernel: xor: using function: arm64_neon (26989 MB/sec)
May 9 23:44:44.433988 kernel: Btrfs loaded, zoned=no, fsverity=no
May 9 23:44:44.446013 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 9 23:44:44.459142 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 23:44:44.474351 systemd-udevd[461]: Using default interface naming scheme 'v255'.
May 9 23:44:44.479156 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 23:44:44.481409 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 9 23:44:44.502466 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation
May 9 23:44:44.533022 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 9 23:44:44.540173 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 9 23:44:44.588543 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 9 23:44:44.596471 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 9 23:44:44.609771 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 9 23:44:44.612825 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 9 23:44:44.614161 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 9 23:44:44.616826 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 9 23:44:44.627113 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 9 23:44:44.634832 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 9 23:44:44.643967 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues May 9 23:44:44.644139 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 9 23:44:44.656094 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 9 23:44:44.656143 kernel: GPT:9289727 != 19775487 May 9 23:44:44.656153 kernel: GPT:Alternate GPT header not at the end of the disk. May 9 23:44:44.656551 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 9 23:44:44.660169 kernel: GPT:9289727 != 19775487 May 9 23:44:44.660199 kernel: GPT: Use GNU Parted to correct GPT errors. May 9 23:44:44.660210 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 9 23:44:44.656668 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 9 23:44:44.660075 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 9 23:44:44.661284 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 9 23:44:44.661409 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 9 23:44:44.663907 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 9 23:44:44.672165 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 9 23:44:44.685405 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 9 23:44:44.689078 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (520) May 9 23:44:44.689105 kernel: BTRFS: device fsid 278061fd-7ea0-499f-a3bc-343431c2d8fa devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (514) May 9 23:44:44.688098 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 9 23:44:44.693976 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 9 23:44:44.703474 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 9 23:44:44.704747 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 9 23:44:44.710477 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 9 23:44:44.723143 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 9 23:44:44.725052 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 9 23:44:44.737189 disk-uuid[559]: Primary Header is updated. 
May 9 23:44:44.737189 disk-uuid[559]: Secondary Entries is updated. May 9 23:44:44.737189 disk-uuid[559]: Secondary Header is updated. May 9 23:44:44.743292 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 9 23:44:44.750484 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 9 23:44:44.754968 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 9 23:44:45.753988 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 9 23:44:45.757124 disk-uuid[563]: The operation has completed successfully. May 9 23:44:45.785680 systemd[1]: disk-uuid.service: Deactivated successfully. May 9 23:44:45.785776 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 9 23:44:45.802184 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 9 23:44:45.805072 sh[583]: Success May 9 23:44:45.820981 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 9 23:44:45.849452 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 9 23:44:45.862265 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 9 23:44:45.863725 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 9 23:44:45.875970 kernel: BTRFS info (device dm-0): first mount of filesystem 278061fd-7ea0-499f-a3bc-343431c2d8fa May 9 23:44:45.876009 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 9 23:44:45.876019 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 9 23:44:45.876029 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 9 23:44:45.877338 kernel: BTRFS info (device dm-0): using free space tree May 9 23:44:45.881297 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 9 23:44:45.882614 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 9 23:44:45.903166 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 9 23:44:45.904892 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 9 23:44:45.912657 kernel: BTRFS info (device vda6): first mount of filesystem 8d2a58d1-82bb-4bb8-8ae0-4baddd3cc4e0 May 9 23:44:45.912708 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 9 23:44:45.913410 kernel: BTRFS info (device vda6): using free space tree May 9 23:44:45.915974 kernel: BTRFS info (device vda6): auto enabling async discard May 9 23:44:45.922773 systemd[1]: mnt-oem.mount: Deactivated successfully. May 9 23:44:45.924164 kernel: BTRFS info (device vda6): last unmount of filesystem 8d2a58d1-82bb-4bb8-8ae0-4baddd3cc4e0 May 9 23:44:45.931118 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 9 23:44:45.939153 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 9 23:44:45.999236 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 9 23:44:46.017129 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
May 9 23:44:46.032813 ignition[678]: Ignition 2.20.0 May 9 23:44:46.032824 ignition[678]: Stage: fetch-offline May 9 23:44:46.032862 ignition[678]: no configs at "/usr/lib/ignition/base.d" May 9 23:44:46.032871 ignition[678]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 23:44:46.033112 ignition[678]: parsed url from cmdline: "" May 9 23:44:46.033115 ignition[678]: no config URL provided May 9 23:44:46.033120 ignition[678]: reading system config file "/usr/lib/ignition/user.ign" May 9 23:44:46.033127 ignition[678]: no config at "/usr/lib/ignition/user.ign" May 9 23:44:46.033155 ignition[678]: op(1): [started] loading QEMU firmware config module May 9 23:44:46.040247 systemd-networkd[773]: lo: Link UP May 9 23:44:46.033162 ignition[678]: op(1): executing: "modprobe" "qemu_fw_cfg" May 9 23:44:46.040251 systemd-networkd[773]: lo: Gained carrier May 9 23:44:46.039869 ignition[678]: op(1): [finished] loading QEMU firmware config module May 9 23:44:46.040987 systemd-networkd[773]: Enumeration completed May 9 23:44:46.041094 systemd[1]: Started systemd-networkd.service - Network Configuration. May 9 23:44:46.041390 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 23:44:46.041394 systemd-networkd[773]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 9 23:44:46.042233 systemd-networkd[773]: eth0: Link UP May 9 23:44:46.042235 systemd-networkd[773]: eth0: Gained carrier May 9 23:44:46.042242 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 23:44:46.043863 systemd[1]: Reached target network.target - Network. May 9 23:44:46.055022 ignition[678]: parsing config with SHA512: 5d4ed12f6dc8b2781e40484790320c2cecff23c5078314af18275d464ecd42ad44bc3070dae3392f9d54c1eaa32afba5df1aef08dcbd4d47ffded97a0d5d3c85 May 9 23:44:46.058560 unknown[678]: fetched base config from "system" May 9 23:44:46.058569 unknown[678]: fetched user config from "qemu" May 9 23:44:46.058819 ignition[678]: fetch-offline: fetch-offline passed May 9 23:44:46.061028 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 9 23:44:46.058891 ignition[678]: Ignition finished successfully May 9 23:44:46.062411 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 9 23:44:46.064011 systemd-networkd[773]: eth0: DHCPv4 address 10.0.0.40/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 9 23:44:46.071151 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 9 23:44:46.082530 ignition[779]: Ignition 2.20.0 May 9 23:44:46.082540 ignition[779]: Stage: kargs May 9 23:44:46.082727 ignition[779]: no configs at "/usr/lib/ignition/base.d" May 9 23:44:46.082736 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 23:44:46.083473 ignition[779]: kargs: kargs passed May 9 23:44:46.083519 ignition[779]: Ignition finished successfully May 9 23:44:46.086931 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 9 23:44:46.099110 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
May 9 23:44:46.108905 ignition[789]: Ignition 2.20.0 May 9 23:44:46.108921 ignition[789]: Stage: disks May 9 23:44:46.109197 ignition[789]: no configs at "/usr/lib/ignition/base.d" May 9 23:44:46.109207 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 23:44:46.111560 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 9 23:44:46.109926 ignition[789]: disks: disks passed May 9 23:44:46.113325 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 9 23:44:46.109989 ignition[789]: Ignition finished successfully May 9 23:44:46.114906 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 9 23:44:46.116098 systemd[1]: Reached target local-fs.target - Local File Systems. May 9 23:44:46.117481 systemd[1]: Reached target sysinit.target - System Initialization. May 9 23:44:46.119321 systemd[1]: Reached target basic.target - Basic System. May 9 23:44:46.136127 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 9 23:44:46.146841 systemd-fsck[800]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 9 23:44:46.150506 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 9 23:44:46.158092 systemd[1]: Mounting sysroot.mount - /sysroot... May 9 23:44:46.197971 kernel: EXT4-fs (vda9): mounted filesystem caef9e74-1f21-4595-8586-7560f5103527 r/w with ordered data mode. Quota mode: none. May 9 23:44:46.198654 systemd[1]: Mounted sysroot.mount - /sysroot. May 9 23:44:46.199889 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 9 23:44:46.218046 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 9 23:44:46.219776 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 9 23:44:46.220726 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 9 23:44:46.220768 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 9 23:44:46.220789 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 9 23:44:46.231774 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (809) May 9 23:44:46.231804 kernel: BTRFS info (device vda6): first mount of filesystem 8d2a58d1-82bb-4bb8-8ae0-4baddd3cc4e0 May 9 23:44:46.231815 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 9 23:44:46.231824 kernel: BTRFS info (device vda6): using free space tree May 9 23:44:46.227578 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 9 23:44:46.229476 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 9 23:44:46.237972 kernel: BTRFS info (device vda6): auto enabling async discard May 9 23:44:46.239282 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 9 23:44:46.277432 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory May 9 23:44:46.281829 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory May 9 23:44:46.285983 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory May 9 23:44:46.289986 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory May 9 23:44:46.363061 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
May 9 23:44:46.373107 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 9 23:44:46.375537 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 9 23:44:46.380976 kernel: BTRFS info (device vda6): last unmount of filesystem 8d2a58d1-82bb-4bb8-8ae0-4baddd3cc4e0
May 9 23:44:46.403716 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 9 23:44:46.406053 ignition[924]: INFO : Ignition 2.20.0
May 9 23:44:46.406053 ignition[924]: INFO : Stage: mount
May 9 23:44:46.406053 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 23:44:46.406053 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 23:44:46.410530 ignition[924]: INFO : mount: mount passed
May 9 23:44:46.410530 ignition[924]: INFO : Ignition finished successfully
May 9 23:44:46.408246 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 9 23:44:46.420425 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 9 23:44:46.873990 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 9 23:44:46.887159 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 9 23:44:46.892977 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (937)
May 9 23:44:46.895091 kernel: BTRFS info (device vda6): first mount of filesystem 8d2a58d1-82bb-4bb8-8ae0-4baddd3cc4e0
May 9 23:44:46.895111 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 9 23:44:46.895122 kernel: BTRFS info (device vda6): using free space tree
May 9 23:44:46.898736 kernel: BTRFS info (device vda6): auto enabling async discard
May 9 23:44:46.899001 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 9 23:44:46.922163 ignition[954]: INFO : Ignition 2.20.0
May 9 23:44:46.922163 ignition[954]: INFO : Stage: files
May 9 23:44:46.923924 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 23:44:46.923924 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 23:44:46.923924 ignition[954]: DEBUG : files: compiled without relabeling support, skipping
May 9 23:44:46.927400 ignition[954]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 9 23:44:46.927400 ignition[954]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 9 23:44:46.930553 ignition[954]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 9 23:44:46.932058 ignition[954]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 9 23:44:46.933663 unknown[954]: wrote ssh authorized keys file for user: core
May 9 23:44:46.934919 ignition[954]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 9 23:44:46.937002 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
May 9 23:44:46.938656 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
May 9 23:44:46.940288 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 9 23:44:46.940288 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 9 23:44:46.940288 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 9 23:44:46.940288 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 9 23:44:46.940288 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 9 23:44:46.940288 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
May 9 23:44:47.161156 systemd-networkd[773]: eth0: Gained IPv6LL
May 9 23:44:47.209849 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
May 9 23:44:47.541107 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 9 23:44:47.541107 ignition[954]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
May 9 23:44:47.544860 ignition[954]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 9 23:44:47.544860 ignition[954]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 9 23:44:47.544860 ignition[954]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
May 9 23:44:47.544860 ignition[954]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service"
May 9 23:44:47.569903 ignition[954]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 9 23:44:47.573895 ignition[954]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 9 23:44:47.575501 ignition[954]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
May 9 23:44:47.575501 ignition[954]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json"
May 9 23:44:47.575501 ignition[954]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 9 23:44:47.575501 ignition[954]: INFO : files: files passed
May 9 23:44:47.575501 ignition[954]: INFO : Ignition finished successfully
May 9 23:44:47.578319 systemd[1]: Finished ignition-files.service - Ignition (files).
May 9 23:44:47.597103 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 9 23:44:47.598909 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 9 23:44:47.601145 systemd[1]: ignition-quench.service: Deactivated successfully.
May 9 23:44:47.601243 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 9 23:44:47.607162 initrd-setup-root-after-ignition[982]: grep: /sysroot/oem/oem-release: No such file or directory May 9 23:44:47.610804 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 9 23:44:47.610804 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 9 23:44:47.614173 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 9 23:44:47.615244 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 9 23:44:47.616974 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 9 23:44:47.629109 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 9 23:44:47.651165 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 9 23:44:47.652012 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 9 23:44:47.653632 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 9 23:44:47.655471 systemd[1]: Reached target initrd.target - Initrd Default Target. May 9 23:44:47.657293 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 9 23:44:47.658091 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 9 23:44:47.675044 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 9 23:44:47.689118 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 9 23:44:47.698527 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 9 23:44:47.699786 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 9 23:44:47.701895 systemd[1]: Stopped target timers.target - Timer Units. May 9 23:44:47.703716 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 9 23:44:47.703835 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 9 23:44:47.706329 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 9 23:44:47.708366 systemd[1]: Stopped target basic.target - Basic System. May 9 23:44:47.709968 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 9 23:44:47.711690 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 9 23:44:47.713654 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 9 23:44:47.715621 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 9 23:44:47.717449 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 9 23:44:47.719375 systemd[1]: Stopped target sysinit.target - System Initialization. May 9 23:44:47.721256 systemd[1]: Stopped target local-fs.target - Local File Systems. May 9 23:44:47.723052 systemd[1]: Stopped target swap.target - Swaps. May 9 23:44:47.724647 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 9 23:44:47.724774 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 9 23:44:47.727094 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 9 23:44:47.729015 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 9 23:44:47.730997 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
May 9 23:44:47.732018 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 9 23:44:47.734128 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 9 23:44:47.734240 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 9 23:44:47.737002 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 9 23:44:47.737118 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 9 23:44:47.739073 systemd[1]: Stopped target paths.target - Path Units. May 9 23:44:47.740665 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 9 23:44:47.744001 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 9 23:44:47.745293 systemd[1]: Stopped target slices.target - Slice Units. May 9 23:44:47.747452 systemd[1]: Stopped target sockets.target - Socket Units. May 9 23:44:47.748902 systemd[1]: iscsid.socket: Deactivated successfully. May 9 23:44:47.749010 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 9 23:44:47.750562 systemd[1]: iscsiuio.socket: Deactivated successfully. May 9 23:44:47.750648 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 9 23:44:47.752183 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 9 23:44:47.752290 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 9 23:44:47.754045 systemd[1]: ignition-files.service: Deactivated successfully. May 9 23:44:47.754146 systemd[1]: Stopped ignition-files.service - Ignition (files). May 9 23:44:47.767151 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 9 23:44:47.768067 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 9 23:44:47.768210 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 9 23:44:47.771376 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 9 23:44:47.772850 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 9 23:44:47.773015 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 9 23:44:47.774970 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 9 23:44:47.779732 ignition[1008]: INFO : Ignition 2.20.0 May 9 23:44:47.779732 ignition[1008]: INFO : Stage: umount May 9 23:44:47.779732 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d" May 9 23:44:47.779732 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 23:44:47.775076 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 9 23:44:47.789098 ignition[1008]: INFO : umount: umount passed May 9 23:44:47.789098 ignition[1008]: INFO : Ignition finished successfully May 9 23:44:47.781131 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 9 23:44:47.781228 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 9 23:44:47.784355 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 9 23:44:47.784868 systemd[1]: ignition-mount.service: Deactivated successfully. May 9 23:44:47.784983 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 9 23:44:47.786536 systemd[1]: Stopped target network.target - Network. May 9 23:44:47.788033 systemd[1]: ignition-disks.service: Deactivated successfully. May 9 23:44:47.788112 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
May 9 23:44:47.790124 systemd[1]: ignition-kargs.service: Deactivated successfully. May 9 23:44:47.790176 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 9 23:44:47.791661 systemd[1]: ignition-setup.service: Deactivated successfully. May 9 23:44:47.791710 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 9 23:44:47.793392 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 9 23:44:47.793440 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 9 23:44:47.795294 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 9 23:44:47.797114 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 9 23:44:47.799051 systemd[1]: sysroot-boot.service: Deactivated successfully. May 9 23:44:47.799175 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 9 23:44:47.801197 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 9 23:44:47.801298 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 9 23:44:47.802944 systemd[1]: systemd-resolved.service: Deactivated successfully. May 9 23:44:47.803099 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 9 23:44:47.805898 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 9 23:44:47.806027 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 9 23:44:47.806488 systemd-networkd[773]: eth0: DHCPv6 lease lost May 9 23:44:47.809472 systemd[1]: systemd-networkd.service: Deactivated successfully. May 9 23:44:47.809580 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 9 23:44:47.813485 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 9 23:44:47.813519 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 9 23:44:47.823043 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 9 23:44:47.824484 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 9 23:44:47.824545 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 9 23:44:47.826556 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 9 23:44:47.826599 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 9 23:44:47.828380 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 9 23:44:47.828428 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 9 23:44:47.830541 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 9 23:44:47.839023 systemd[1]: network-cleanup.service: Deactivated successfully. May 9 23:44:47.839133 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 9 23:44:47.847573 systemd[1]: systemd-udevd.service: Deactivated successfully. May 9 23:44:47.847716 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 9 23:44:47.849396 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 9 23:44:47.849439 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 9 23:44:47.851045 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 9 23:44:47.851078 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 9 23:44:47.852776 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 9 23:44:47.852821 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
May 9 23:44:47.855789 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 9 23:44:47.855835 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 9 23:44:47.858553 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 9 23:44:47.858599 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 9 23:44:47.875118 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 9 23:44:47.876414 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 9 23:44:47.876484 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 9 23:44:47.879185 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 9 23:44:47.879232 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 9 23:44:47.881377 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 9 23:44:47.882988 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 9 23:44:47.884591 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 9 23:44:47.886841 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 9 23:44:47.897147 systemd[1]: Switching root. May 9 23:44:47.928923 systemd-journald[238]: Journal stopped May 9 23:44:48.631081 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). May 9 23:44:48.631150 kernel: SELinux: policy capability network_peer_controls=1 May 9 23:44:48.631163 kernel: SELinux: policy capability open_perms=1 May 9 23:44:48.631173 kernel: SELinux: policy capability extended_socket_class=1 May 9 23:44:48.631186 kernel: SELinux: policy capability always_check_network=0 May 9 23:44:48.631196 kernel: SELinux: policy capability cgroup_seclabel=1 May 9 23:44:48.631206 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 9 23:44:48.631216 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 9 23:44:48.631230 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 9 23:44:48.631241 kernel: audit: type=1403 audit(1746834288.070:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 9 23:44:48.631251 systemd[1]: Successfully loaded SELinux policy in 36.656ms. May 9 23:44:48.631273 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.257ms. May 9 23:44:48.631287 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 9 23:44:48.631298 systemd[1]: Detected virtualization kvm. May 9 23:44:48.631323 systemd[1]: Detected architecture arm64. May 9 23:44:48.631334 systemd[1]: Detected first boot. May 9 23:44:48.631345 systemd[1]: Initializing machine ID from VM UUID. May 9 23:44:48.631355 zram_generator::config[1052]: No configuration found. May 9 23:44:48.631367 systemd[1]: Populated /etc with preset unit settings. May 9 23:44:48.631378 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 9 23:44:48.631389 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 9 23:44:48.631404 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 9 23:44:48.631415 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. 
May 9 23:44:48.631427 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 9 23:44:48.631437 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 9 23:44:48.631448 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 9 23:44:48.631462 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 9 23:44:48.631474 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 9 23:44:48.631485 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 9 23:44:48.631496 systemd[1]: Created slice user.slice - User and Session Slice. May 9 23:44:48.631507 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 9 23:44:48.631518 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 9 23:44:48.631530 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 9 23:44:48.631540 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 9 23:44:48.631552 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 9 23:44:48.631566 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 9 23:44:48.631576 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 9 23:44:48.631587 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 9 23:44:48.631598 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 9 23:44:48.631609 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 9 23:44:48.631620 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 9 23:44:48.631631 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 9 23:44:48.631643 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 9 23:44:48.631654 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 9 23:44:48.631665 systemd[1]: Reached target slices.target - Slice Units. May 9 23:44:48.631676 systemd[1]: Reached target swap.target - Swaps. May 9 23:44:48.631686 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 9 23:44:48.631697 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 9 23:44:48.631708 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 9 23:44:48.631719 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 9 23:44:48.631730 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 9 23:44:48.631741 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 9 23:44:48.631753 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 9 23:44:48.631764 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 9 23:44:48.631775 systemd[1]: Mounting media.mount - External Media Directory... May 9 23:44:48.631785 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 9 23:44:48.631798 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 9 23:44:48.631810 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
May 9 23:44:48.631821 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 9 23:44:48.631832 systemd[1]: Reached target machines.target - Containers. May 9 23:44:48.631845 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 9 23:44:48.631857 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 9 23:44:48.631869 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 9 23:44:48.631880 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 9 23:44:48.631891 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 9 23:44:48.631902 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 9 23:44:48.631912 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 9 23:44:48.631923 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 9 23:44:48.631934 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 9 23:44:48.631946 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 9 23:44:48.631966 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 9 23:44:48.631977 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 9 23:44:48.631988 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 9 23:44:48.631998 kernel: fuse: init (API version 7.39) May 9 23:44:48.632008 systemd[1]: Stopped systemd-fsck-usr.service. May 9 23:44:48.632019 kernel: loop: module loaded May 9 23:44:48.632028 kernel: ACPI: bus type drm_connector registered May 9 23:44:48.632042 systemd[1]: Starting systemd-journald.service - Journal Service... May 9 23:44:48.632053 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 9 23:44:48.632065 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 9 23:44:48.632095 systemd-journald[1123]: Collecting audit messages is disabled. May 9 23:44:48.632130 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 9 23:44:48.632143 systemd-journald[1123]: Journal started May 9 23:44:48.632168 systemd-journald[1123]: Runtime Journal (/run/log/journal/76b0e68dc5e846f5a6b51048afea445b) is 5.9M, max 47.3M, 41.4M free. May 9 23:44:48.435700 systemd[1]: Queued start job for default target multi-user.target. May 9 23:44:48.448729 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 9 23:44:48.449098 systemd[1]: systemd-journald.service: Deactivated successfully. May 9 23:44:48.636266 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 9 23:44:48.639542 systemd[1]: verity-setup.service: Deactivated successfully. May 9 23:44:48.639575 systemd[1]: Stopped verity-setup.service. May 9 23:44:48.643844 systemd[1]: Started systemd-journald.service - Journal Service. May 9 23:44:48.644509 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 9 23:44:48.645668 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 9 23:44:48.646912 systemd[1]: Mounted media.mount - External Media Directory. 
May 9 23:44:48.647948 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 9 23:44:48.649087 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 9 23:44:48.650252 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 9 23:44:48.652982 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 9 23:44:48.654394 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 9 23:44:48.655864 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 9 23:44:48.656025 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 9 23:44:48.657407 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 9 23:44:48.657550 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 9 23:44:48.658928 systemd[1]: modprobe@drm.service: Deactivated successfully. May 9 23:44:48.659096 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 9 23:44:48.660374 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 9 23:44:48.660494 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 9 23:44:48.662031 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 9 23:44:48.662169 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 9 23:44:48.663644 systemd[1]: modprobe@loop.service: Deactivated successfully. May 9 23:44:48.664992 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 9 23:44:48.666314 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 9 23:44:48.667923 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 9 23:44:48.669408 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 9 23:44:48.682811 systemd[1]: Reached target network-pre.target - Preparation for Network. May 9 23:44:48.690054 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 9 23:44:48.692243 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 9 23:44:48.693468 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 9 23:44:48.693512 systemd[1]: Reached target local-fs.target - Local File Systems. May 9 23:44:48.695468 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 9 23:44:48.697730 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 9 23:44:48.699939 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 9 23:44:48.701058 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 9 23:44:48.702467 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 9 23:44:48.707177 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 9 23:44:48.708531 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 9 23:44:48.711137 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 9 23:44:48.712298 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
May 9 23:44:48.713488 systemd-journald[1123]: Time spent on flushing to /var/log/journal/76b0e68dc5e846f5a6b51048afea445b is 26.503ms for 838 entries. May 9 23:44:48.713488 systemd-journald[1123]: System Journal (/var/log/journal/76b0e68dc5e846f5a6b51048afea445b) is 8.0M, max 195.6M, 187.6M free. May 9 23:44:48.754584 systemd-journald[1123]: Received client request to flush runtime journal. May 9 23:44:48.754632 kernel: loop0: detected capacity change from 0 to 113536 May 9 23:44:48.754654 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 9 23:44:48.716284 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 9 23:44:48.721181 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 9 23:44:48.729227 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 9 23:44:48.732007 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 9 23:44:48.733698 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 9 23:44:48.736220 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 9 23:44:48.737895 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 9 23:44:48.740988 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 9 23:44:48.749889 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 9 23:44:48.760308 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 9 23:44:48.766166 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 9 23:44:48.768115 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 9 23:44:48.769723 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 9 23:44:48.771199 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 9 23:44:48.778532 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 9 23:44:48.781707 udevadm[1176]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 9 23:44:48.789983 kernel: loop1: detected capacity change from 0 to 116808 May 9 23:44:48.800694 systemd-tmpfiles[1181]: ACLs are not supported, ignoring. May 9 23:44:48.800710 systemd-tmpfiles[1181]: ACLs are not supported, ignoring. May 9 23:44:48.804974 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 9 23:44:48.827978 kernel: loop2: detected capacity change from 0 to 201592 May 9 23:44:48.836825 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 9 23:44:48.838766 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 9 23:44:48.862211 kernel: loop3: detected capacity change from 0 to 113536 May 9 23:44:48.867023 kernel: loop4: detected capacity change from 0 to 116808 May 9 23:44:48.871989 kernel: loop5: detected capacity change from 0 to 201592 May 9 23:44:48.875909 (sd-merge)[1188]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 9 23:44:48.876351 (sd-merge)[1188]: Merged extensions into '/usr'. May 9 23:44:48.881227 systemd[1]: Reloading requested from client PID 1163 ('systemd-sysext') (unit systemd-sysext.service)... May 9 23:44:48.881246 systemd[1]: Reloading... 
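The journal-flush step above is where the early-boot runtime journal in /run/log/journal gets handed over to the persistent store under /var/log/journal. A minimal sketch for inspecting that hand-off after boot, using standard journalctl invocations (nothing here is specific to this host):

    # Show how much disk space the runtime and persistent journals use
    journalctl --disk-usage
    # Read only the current boot, filtered to errors and above
    journalctl -b -p err
    # Follow a single unit from this boot, e.g. the network manager seen later in this log
    journalctl -b -u systemd-networkd.service
    # Explicitly ask journald to flush /run/log/journal into /var/log/journal
    journalctl --flush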
May 9 23:44:48.944975 zram_generator::config[1214]: No configuration found. May 9 23:44:49.002794 ldconfig[1158]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 9 23:44:49.043980 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 23:44:49.079336 systemd[1]: Reloading finished in 197 ms. May 9 23:44:49.120412 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 9 23:44:49.125047 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 9 23:44:49.136287 systemd[1]: Starting ensure-sysext.service... May 9 23:44:49.138266 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 9 23:44:49.145179 systemd[1]: Reloading requested from client PID 1248 ('systemctl') (unit ensure-sysext.service)... May 9 23:44:49.145194 systemd[1]: Reloading... May 9 23:44:49.155492 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 9 23:44:49.155748 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 9 23:44:49.156414 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 9 23:44:49.156634 systemd-tmpfiles[1249]: ACLs are not supported, ignoring. May 9 23:44:49.156684 systemd-tmpfiles[1249]: ACLs are not supported, ignoring. May 9 23:44:49.158866 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot. May 9 23:44:49.158877 systemd-tmpfiles[1249]: Skipping /boot May 9 23:44:49.166039 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot. May 9 23:44:49.166051 systemd-tmpfiles[1249]: Skipping /boot May 9 23:44:49.188078 zram_generator::config[1273]: No configuration found. May 9 23:44:49.279772 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 23:44:49.314718 systemd[1]: Reloading finished in 169 ms. May 9 23:44:49.330803 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 9 23:44:49.344427 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 9 23:44:49.352042 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 9 23:44:49.355000 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 9 23:44:49.357276 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 9 23:44:49.361294 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 9 23:44:49.365271 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 9 23:44:49.372514 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 9 23:44:49.376404 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 9 23:44:49.382248 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
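The (sd-merge) entries above show the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' system extensions being overlaid onto /usr, after which systemd reloads its unit files. A rough sketch of how such sysext images are usually inspected and re-merged with the generic systemd-sysext tooling (the Flatcar-specific wrapper scripts are not shown in this log and are assumed to exist separately):

    # List merged extensions and the hierarchies they overlay
    systemd-sysext status
    # Extension images/directories are picked up from these search paths, among others
    ls /etc/extensions /run/extensions /var/lib/extensions 2>/dev/null
    # Flatcar additionally consults the enabled-sysext.conf files grep'd for earlier in this log
    cat /etc/flatcar/enabled-sysext.conf 2>/dev/null
    # Re-merge after adding or removing an extension image
    systemd-sysext refresh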
May 9 23:44:49.385203 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 9 23:44:49.388150 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 9 23:44:49.389983 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 9 23:44:49.395229 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 9 23:44:49.399919 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 9 23:44:49.401640 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 9 23:44:49.401788 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 9 23:44:49.403571 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 9 23:44:49.403698 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 9 23:44:49.405469 systemd[1]: modprobe@loop.service: Deactivated successfully. May 9 23:44:49.405602 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 9 23:44:49.413672 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 9 23:44:49.414382 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 9 23:44:49.417654 systemd-udevd[1317]: Using default interface naming scheme 'v255'. May 9 23:44:49.421350 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 9 23:44:49.425402 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 9 23:44:49.429268 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 9 23:44:49.439341 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 9 23:44:49.443400 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 9 23:44:49.444580 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 9 23:44:49.445213 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 9 23:44:49.448718 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 9 23:44:49.452582 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 9 23:44:49.452738 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 9 23:44:49.455320 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 9 23:44:49.457642 systemd[1]: modprobe@loop.service: Deactivated successfully. May 9 23:44:49.457768 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 9 23:44:49.461132 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 9 23:44:49.475324 augenrules[1376]: No rules May 9 23:44:49.477361 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 9 23:44:49.480045 systemd[1]: audit-rules.service: Deactivated successfully. May 9 23:44:49.480233 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 9 23:44:49.492261 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 9 23:44:49.492422 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
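audit-rules.service finishes with "No rules", i.e. augenrules found nothing to compile. If kernel audit rules were wanted on this image, they would normally be dropped into /etc/audit/rules.d and loaded via augenrules; a hedged sketch, assuming the audit userland is present and the rule file name is purely illustrative:

    # Hypothetical rule file: watch sshd_config for writes and attribute changes
    cat <<'EOF' >/etc/audit/rules.d/10-sshd.rules
    -w /etc/ssh/sshd_config -p wa -k sshd_config
    EOF
    # Rebuild /etc/audit/audit.rules from the rules.d fragments and load them into the kernel
    augenrules --load
    # Confirm the kernel now has the rule
    auditctl -l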
May 9 23:44:49.496843 systemd[1]: Finished ensure-sysext.service. May 9 23:44:49.503615 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 9 23:44:49.506001 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1348) May 9 23:44:49.514227 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 9 23:44:49.515268 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 9 23:44:49.518147 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 9 23:44:49.521239 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 9 23:44:49.523985 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 9 23:44:49.525110 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 9 23:44:49.530131 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 9 23:44:49.531988 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 9 23:44:49.533622 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 9 23:44:49.535432 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 9 23:44:49.535888 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 9 23:44:49.536048 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 9 23:44:49.539441 systemd[1]: modprobe@drm.service: Deactivated successfully. May 9 23:44:49.539593 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 9 23:44:49.541131 systemd[1]: modprobe@loop.service: Deactivated successfully. May 9 23:44:49.541259 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 9 23:44:49.552003 augenrules[1387]: /sbin/augenrules: No change May 9 23:44:49.556164 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 9 23:44:49.576439 augenrules[1418]: No rules May 9 23:44:49.577852 systemd[1]: audit-rules.service: Deactivated successfully. May 9 23:44:49.578077 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 9 23:44:49.586512 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 9 23:44:49.606303 systemd-resolved[1316]: Positive Trust Anchors: May 9 23:44:49.606649 systemd-resolved[1316]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 9 23:44:49.606683 systemd-resolved[1316]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 9 23:44:49.610535 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 9 23:44:49.622063 systemd-resolved[1316]: Defaulting to hostname 'linux'. May 9 23:44:49.624719 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 9 23:44:49.627539 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 9 23:44:49.632380 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 9 23:44:49.633075 systemd-networkd[1396]: lo: Link UP May 9 23:44:49.633087 systemd-networkd[1396]: lo: Gained carrier May 9 23:44:49.633866 systemd-networkd[1396]: Enumeration completed May 9 23:44:49.634831 systemd[1]: Started systemd-networkd.service - Network Configuration. May 9 23:44:49.636162 systemd-networkd[1396]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 23:44:49.636171 systemd-networkd[1396]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 9 23:44:49.636946 systemd-networkd[1396]: eth0: Link UP May 9 23:44:49.636965 systemd-networkd[1396]: eth0: Gained carrier May 9 23:44:49.636981 systemd-networkd[1396]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 23:44:49.637422 systemd[1]: Reached target network.target - Network. May 9 23:44:49.639132 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 9 23:44:49.640491 systemd[1]: Reached target time-set.target - System Time Set. May 9 23:44:49.655252 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 9 23:44:49.657739 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 9 23:44:49.658009 systemd-networkd[1396]: eth0: DHCPv4 address 10.0.0.40/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 9 23:44:49.660050 systemd-timesyncd[1403]: Network configuration changed, trying to establish connection. May 9 23:44:49.660610 systemd-timesyncd[1403]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 9 23:44:49.660658 systemd-timesyncd[1403]: Initial clock synchronization to Fri 2025-05-09 23:44:49.672741 UTC. May 9 23:44:49.666414 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 9 23:44:49.682130 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 9 23:44:49.698544 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 9 23:44:49.698717 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 9 23:44:49.730667 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. 
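eth0 is matched by the stock /usr/lib/systemd/network/zz-default.network and obtains 10.0.0.40/16 over DHCPv4, which is also where systemd-timesyncd learns its time server. A sketch of how that behaviour could be inspected, or pinned down with a local .network unit that sorts ahead of zz-default.network (the file name 10-eth0.network is illustrative, not taken from this log):

    # Inspect the live state systemd-networkd and systemd-resolved reported above
    networkctl status eth0
    resolvectl status
    # A minimal local unit; files in /etc sorting before zz-default.network match first
    cat <<'EOF' >/etc/systemd/network/10-eth0.network
    [Match]
    Name=eth0

    [Network]
    DHCP=yes
    EOF
    systemctl restart systemd-networkd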
May 9 23:44:49.732224 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 9 23:44:49.733342 systemd[1]: Reached target sysinit.target - System Initialization. May 9 23:44:49.734465 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 9 23:44:49.735682 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 9 23:44:49.737107 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 9 23:44:49.738193 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 9 23:44:49.739494 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 9 23:44:49.740682 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 9 23:44:49.740717 systemd[1]: Reached target paths.target - Path Units. May 9 23:44:49.741597 systemd[1]: Reached target timers.target - Timer Units. May 9 23:44:49.743306 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 9 23:44:49.745678 systemd[1]: Starting docker.socket - Docker Socket for the API... May 9 23:44:49.754879 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 9 23:44:49.757271 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 9 23:44:49.758891 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 9 23:44:49.760042 systemd[1]: Reached target sockets.target - Socket Units. May 9 23:44:49.761006 systemd[1]: Reached target basic.target - Basic System. May 9 23:44:49.762020 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 9 23:44:49.762058 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 9 23:44:49.763058 systemd[1]: Starting containerd.service - containerd container runtime... May 9 23:44:49.765113 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 9 23:44:49.768143 lvm[1443]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 9 23:44:49.769113 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 9 23:44:49.771108 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 9 23:44:49.772507 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 9 23:44:49.774194 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 9 23:44:49.779136 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 9 23:44:49.780028 jq[1446]: false May 9 23:44:49.781419 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 9 23:44:49.786460 systemd[1]: Starting systemd-logind.service - User Login Management... May 9 23:44:49.791888 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 9 23:44:49.792420 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 9 23:44:49.794219 systemd[1]: Starting update-engine.service - Update Engine... 
May 9 23:44:49.798782 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 9 23:44:49.801045 extend-filesystems[1447]: Found loop3 May 9 23:44:49.801045 extend-filesystems[1447]: Found loop4 May 9 23:44:49.801045 extend-filesystems[1447]: Found loop5 May 9 23:44:49.801045 extend-filesystems[1447]: Found vda May 9 23:44:49.801045 extend-filesystems[1447]: Found vda1 May 9 23:44:49.801045 extend-filesystems[1447]: Found vda2 May 9 23:44:49.801045 extend-filesystems[1447]: Found vda3 May 9 23:44:49.801045 extend-filesystems[1447]: Found usr May 9 23:44:49.801045 extend-filesystems[1447]: Found vda4 May 9 23:44:49.801045 extend-filesystems[1447]: Found vda6 May 9 23:44:49.801045 extend-filesystems[1447]: Found vda7 May 9 23:44:49.801045 extend-filesystems[1447]: Found vda9 May 9 23:44:49.801045 extend-filesystems[1447]: Checking size of /dev/vda9 May 9 23:44:49.801886 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 9 23:44:49.803735 dbus-daemon[1445]: [system] SELinux support is enabled May 9 23:44:49.810080 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 9 23:44:49.816715 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 9 23:44:49.816899 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 9 23:44:49.817196 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 9 23:44:49.817355 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 9 23:44:49.824586 jq[1456]: true May 9 23:44:49.825943 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 9 23:44:49.826010 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 9 23:44:49.827354 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 9 23:44:49.827383 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 9 23:44:49.836605 systemd[1]: motdgen.service: Deactivated successfully. May 9 23:44:49.836774 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 9 23:44:49.842733 (ntainerd)[1473]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 9 23:44:49.843860 extend-filesystems[1447]: Resized partition /dev/vda9 May 9 23:44:49.856212 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1348) May 9 23:44:49.856277 jq[1475]: true May 9 23:44:49.859156 systemd-logind[1452]: Watching system buttons on /dev/input/event0 (Power Button) May 9 23:44:49.859519 systemd-logind[1452]: New seat seat0. May 9 23:44:49.865227 extend-filesystems[1478]: resize2fs 1.47.1 (20-May-2024) May 9 23:44:49.874676 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 9 23:44:49.873687 systemd[1]: Started update-engine.service - Update Engine. 
May 9 23:44:49.874767 update_engine[1455]: I20250509 23:44:49.868889 1455 main.cc:92] Flatcar Update Engine starting May 9 23:44:49.874767 update_engine[1455]: I20250509 23:44:49.870828 1455 update_check_scheduler.cc:74] Next update check in 6m24s May 9 23:44:49.876237 systemd[1]: Started systemd-logind.service - User Login Management. May 9 23:44:49.890139 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 9 23:44:49.889237 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 9 23:44:49.903018 extend-filesystems[1478]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 9 23:44:49.903018 extend-filesystems[1478]: old_desc_blocks = 1, new_desc_blocks = 1 May 9 23:44:49.903018 extend-filesystems[1478]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 9 23:44:49.906859 extend-filesystems[1447]: Resized filesystem in /dev/vda9 May 9 23:44:49.906577 systemd[1]: extend-filesystems.service: Deactivated successfully. May 9 23:44:49.908073 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 9 23:44:49.936735 bash[1497]: Updated "/home/core/.ssh/authorized_keys" May 9 23:44:49.938330 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 9 23:44:49.940590 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 9 23:44:49.954460 locksmithd[1481]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 9 23:44:50.055394 containerd[1473]: time="2025-05-09T23:44:50.055247991Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 9 23:44:50.081522 containerd[1473]: time="2025-05-09T23:44:50.081475636Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 9 23:44:50.083097 containerd[1473]: time="2025-05-09T23:44:50.083060401Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 9 23:44:50.083127 containerd[1473]: time="2025-05-09T23:44:50.083096335Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 9 23:44:50.083127 containerd[1473]: time="2025-05-09T23:44:50.083114261Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 9 23:44:50.083360 containerd[1473]: time="2025-05-09T23:44:50.083318820Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 9 23:44:50.083403 containerd[1473]: time="2025-05-09T23:44:50.083369639Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 9 23:44:50.083450 containerd[1473]: time="2025-05-09T23:44:50.083431663Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 9 23:44:50.083450 containerd[1473]: time="2025-05-09T23:44:50.083447549Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 9 23:44:50.083631 containerd[1473]: time="2025-05-09T23:44:50.083609530Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
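extend-filesystems grows the mounted root ext4 filesystem on /dev/vda9 from 553472 to 1864699 4k blocks; Flatcar does this automatically on first boot so the image fills the virtual disk. The equivalent manual steps look roughly like this (device name taken from the log; only relevant if the automatic unit were absent):

    # Compare partition size against the current filesystem usage
    lsblk /dev/vda9
    df -h /
    # Online-grow the mounted ext4 filesystem to the size of its partition
    resize2fs /dev/vda9
    df -h /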
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 9 23:44:50.083631 containerd[1473]: time="2025-05-09T23:44:50.083629578Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 9 23:44:50.083678 containerd[1473]: time="2025-05-09T23:44:50.083644464Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 9 23:44:50.083678 containerd[1473]: time="2025-05-09T23:44:50.083654348Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 9 23:44:50.083739 containerd[1473]: time="2025-05-09T23:44:50.083722574Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 9 23:44:50.083941 containerd[1473]: time="2025-05-09T23:44:50.083921649Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 9 23:44:50.084057 containerd[1473]: time="2025-05-09T23:44:50.084037534Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 9 23:44:50.084083 containerd[1473]: time="2025-05-09T23:44:50.084055220Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 9 23:44:50.084145 containerd[1473]: time="2025-05-09T23:44:50.084130249Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 9 23:44:50.084190 containerd[1473]: time="2025-05-09T23:44:50.084176347Z" level=info msg="metadata content store policy set" policy=shared May 9 23:44:50.087516 containerd[1473]: time="2025-05-09T23:44:50.087485809Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 9 23:44:50.087566 containerd[1473]: time="2025-05-09T23:44:50.087551754Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 9 23:44:50.087606 containerd[1473]: time="2025-05-09T23:44:50.087570842Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 9 23:44:50.087606 containerd[1473]: time="2025-05-09T23:44:50.087587688Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 9 23:44:50.087643 containerd[1473]: time="2025-05-09T23:44:50.087634346Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 9 23:44:50.087793 containerd[1473]: time="2025-05-09T23:44:50.087772759Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 9 23:44:50.088082 containerd[1473]: time="2025-05-09T23:44:50.088063189Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 9 23:44:50.088215 containerd[1473]: time="2025-05-09T23:44:50.088197240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 May 9 23:44:50.088249 containerd[1473]: time="2025-05-09T23:44:50.088219049Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 9 23:44:50.088249 containerd[1473]: time="2025-05-09T23:44:50.088243698Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 9 23:44:50.088286 containerd[1473]: time="2025-05-09T23:44:50.088259184Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 9 23:44:50.088286 containerd[1473]: time="2025-05-09T23:44:50.088275991Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 9 23:44:50.088323 containerd[1473]: time="2025-05-09T23:44:50.088311964Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 9 23:44:50.088354 containerd[1473]: time="2025-05-09T23:44:50.088326690Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 9 23:44:50.088375 containerd[1473]: time="2025-05-09T23:44:50.088353620Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 9 23:44:50.088375 containerd[1473]: time="2025-05-09T23:44:50.088367505Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 9 23:44:50.088409 containerd[1473]: time="2025-05-09T23:44:50.088380951Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 9 23:44:50.088409 containerd[1473]: time="2025-05-09T23:44:50.088393635Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 9 23:44:50.088443 containerd[1473]: time="2025-05-09T23:44:50.088414683Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 9 23:44:50.088443 containerd[1473]: time="2025-05-09T23:44:50.088429169Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 9 23:44:50.088490 containerd[1473]: time="2025-05-09T23:44:50.088441734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 9 23:44:50.088490 containerd[1473]: time="2025-05-09T23:44:50.088455179Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 9 23:44:50.088490 containerd[1473]: time="2025-05-09T23:44:50.088467424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 9 23:44:50.088490 containerd[1473]: time="2025-05-09T23:44:50.088480869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 9 23:44:50.088564 containerd[1473]: time="2025-05-09T23:44:50.088492953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 9 23:44:50.088564 containerd[1473]: time="2025-05-09T23:44:50.088505838Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 9 23:44:50.088564 containerd[1473]: time="2025-05-09T23:44:50.088518883Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 May 9 23:44:50.088564 containerd[1473]: time="2025-05-09T23:44:50.088532889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 9 23:44:50.088564 containerd[1473]: time="2025-05-09T23:44:50.088554137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 9 23:44:50.088658 containerd[1473]: time="2025-05-09T23:44:50.088567822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 9 23:44:50.088658 containerd[1473]: time="2025-05-09T23:44:50.088580787Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 9 23:44:50.088658 containerd[1473]: time="2025-05-09T23:44:50.088595272Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 9 23:44:50.088658 containerd[1473]: time="2025-05-09T23:44:50.088621202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 9 23:44:50.088658 containerd[1473]: time="2025-05-09T23:44:50.088634967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 9 23:44:50.088658 containerd[1473]: time="2025-05-09T23:44:50.088646732Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 9 23:44:50.088846 containerd[1473]: time="2025-05-09T23:44:50.088829482Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 9 23:44:50.088874 containerd[1473]: time="2025-05-09T23:44:50.088852731Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 9 23:44:50.088874 containerd[1473]: time="2025-05-09T23:44:50.088865615Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 9 23:44:50.088913 containerd[1473]: time="2025-05-09T23:44:50.088879701Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 9 23:44:50.088913 containerd[1473]: time="2025-05-09T23:44:50.088890425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 9 23:44:50.088913 containerd[1473]: time="2025-05-09T23:44:50.088908592Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 9 23:44:50.088993 containerd[1473]: time="2025-05-09T23:44:50.088919636Z" level=info msg="NRI interface is disabled by configuration." May 9 23:44:50.088993 containerd[1473]: time="2025-05-09T23:44:50.088936402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 9 23:44:50.089408 containerd[1473]: time="2025-05-09T23:44:50.089348400Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 9 23:44:50.089527 containerd[1473]: time="2025-05-09T23:44:50.089414185Z" level=info msg="Connect containerd service" May 9 23:44:50.089527 containerd[1473]: time="2025-05-09T23:44:50.089458242Z" level=info msg="using legacy CRI server" May 9 23:44:50.089527 containerd[1473]: time="2025-05-09T23:44:50.089465484Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 9 23:44:50.091084 containerd[1473]: time="2025-05-09T23:44:50.091053450Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 9 23:44:50.091961 containerd[1473]: time="2025-05-09T23:44:50.091927623Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 9 23:44:50.092274 
containerd[1473]: time="2025-05-09T23:44:50.092240823Z" level=info msg="Start subscribing containerd event" May 9 23:44:50.092301 containerd[1473]: time="2025-05-09T23:44:50.092290922Z" level=info msg="Start recovering state" May 9 23:44:50.092382 containerd[1473]: time="2025-05-09T23:44:50.092366751Z" level=info msg="Start event monitor" May 9 23:44:50.092408 containerd[1473]: time="2025-05-09T23:44:50.092384518Z" level=info msg="Start snapshots syncer" May 9 23:44:50.092408 containerd[1473]: time="2025-05-09T23:44:50.092394562Z" level=info msg="Start cni network conf syncer for default" May 9 23:44:50.092408 containerd[1473]: time="2025-05-09T23:44:50.092401564Z" level=info msg="Start streaming server" May 9 23:44:50.092959 containerd[1473]: time="2025-05-09T23:44:50.092932767Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 9 23:44:50.093010 containerd[1473]: time="2025-05-09T23:44:50.092994791Z" level=info msg=serving... address=/run/containerd/containerd.sock May 9 23:44:50.093136 systemd[1]: Started containerd.service - containerd container runtime. May 9 23:44:50.095163 containerd[1473]: time="2025-05-09T23:44:50.094471074Z" level=info msg="containerd successfully booted in 0.041752s" May 9 23:44:50.530880 sshd_keygen[1457]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 9 23:44:50.549174 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 9 23:44:50.562251 systemd[1]: Starting issuegen.service - Generate /run/issue... May 9 23:44:50.567703 systemd[1]: issuegen.service: Deactivated successfully. May 9 23:44:50.567921 systemd[1]: Finished issuegen.service - Generate /run/issue. May 9 23:44:50.570865 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 9 23:44:50.585040 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 9 23:44:50.588045 systemd[1]: Started getty@tty1.service - Getty on tty1. May 9 23:44:50.590351 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 9 23:44:50.591687 systemd[1]: Reached target getty.target - Login Prompts. May 9 23:44:51.129156 systemd-networkd[1396]: eth0: Gained IPv6LL May 9 23:44:51.131689 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 9 23:44:51.133639 systemd[1]: Reached target network-online.target - Network is Online. May 9 23:44:51.151309 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 9 23:44:51.154064 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 23:44:51.156768 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 9 23:44:51.175063 systemd[1]: coreos-metadata.service: Deactivated successfully. May 9 23:44:51.175322 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 9 23:44:51.177907 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 9 23:44:51.179560 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 9 23:44:51.713480 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 23:44:51.715255 systemd[1]: Reached target multi-user.target - Multi-User System. May 9 23:44:51.716772 systemd[1]: Startup finished in 582ms (kernel) + 4.352s (initrd) + 3.684s (userspace) = 8.619s. 
May 9 23:44:51.717782 (kubelet)[1550]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 23:44:52.157937 kubelet[1550]: E0509 23:44:52.157822 1550 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 23:44:52.160397 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 23:44:52.160545 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 23:44:56.467672 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 9 23:44:56.468846 systemd[1]: Started sshd@0-10.0.0.40:22-10.0.0.1:38810.service - OpenSSH per-connection server daemon (10.0.0.1:38810). May 9 23:44:56.540742 sshd[1564]: Accepted publickey for core from 10.0.0.1 port 38810 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:44:56.542820 sshd-session[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:44:56.552400 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 9 23:44:56.567288 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 9 23:44:56.569229 systemd-logind[1452]: New session 1 of user core. May 9 23:44:56.577681 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 9 23:44:56.580097 systemd[1]: Starting user@500.service - User Manager for UID 500... May 9 23:44:56.587449 (systemd)[1568]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 9 23:44:56.680524 systemd[1568]: Queued start job for default target default.target. May 9 23:44:56.689003 systemd[1568]: Created slice app.slice - User Application Slice. May 9 23:44:56.689033 systemd[1568]: Reached target paths.target - Paths. May 9 23:44:56.689045 systemd[1568]: Reached target timers.target - Timers. May 9 23:44:56.691943 systemd[1568]: Starting dbus.socket - D-Bus User Message Bus Socket... May 9 23:44:56.705730 systemd[1568]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 9 23:44:56.705855 systemd[1568]: Reached target sockets.target - Sockets. May 9 23:44:56.705868 systemd[1568]: Reached target basic.target - Basic System. May 9 23:44:56.705907 systemd[1568]: Reached target default.target - Main User Target. May 9 23:44:56.705934 systemd[1568]: Startup finished in 112ms. May 9 23:44:56.709934 systemd[1]: Started user@500.service - User Manager for UID 500. May 9 23:44:56.712839 systemd[1]: Started session-1.scope - Session 1 of User core. May 9 23:44:56.772979 systemd[1]: Started sshd@1-10.0.0.40:22-10.0.0.1:38812.service - OpenSSH per-connection server daemon (10.0.0.1:38812). May 9 23:44:56.815220 sshd[1579]: Accepted publickey for core from 10.0.0.1 port 38812 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:44:56.816706 sshd-session[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:44:56.821348 systemd-logind[1452]: New session 2 of user core. May 9 23:44:56.833167 systemd[1]: Started session-2.scope - Session 2 of User core. 
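The first kubelet start aborts because /var/lib/kubelet/config.yaml does not exist yet; on a node bootstrapped in the kubeadm style that file only appears once the join/init step has run, so this early failure is expected. For orientation only, a minimal KubeletConfiguration of the kind that path usually holds (values illustrative, not taken from this host) looks like:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                      # consistent with SystemdCgroup=true in the runtime
    staticPodPath: /etc/kubernetes/manifests
    authentication:
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt

Eviction thresholds are not set here, so the defaults (memory.available<100Mi, nodefs.available<10%, imagefs.available<15%, ...) apply, which is what the container-manager NodeConfig dump further down reflects.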
May 9 23:44:56.886465 sshd[1581]: Connection closed by 10.0.0.1 port 38812 May 9 23:44:56.887157 sshd-session[1579]: pam_unix(sshd:session): session closed for user core May 9 23:44:56.899505 systemd[1]: sshd@1-10.0.0.40:22-10.0.0.1:38812.service: Deactivated successfully. May 9 23:44:56.902338 systemd[1]: session-2.scope: Deactivated successfully. May 9 23:44:56.903628 systemd-logind[1452]: Session 2 logged out. Waiting for processes to exit. May 9 23:44:56.910254 systemd[1]: Started sshd@2-10.0.0.40:22-10.0.0.1:38828.service - OpenSSH per-connection server daemon (10.0.0.1:38828). May 9 23:44:56.911069 systemd-logind[1452]: Removed session 2. May 9 23:44:56.947928 sshd[1586]: Accepted publickey for core from 10.0.0.1 port 38828 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:44:56.949530 sshd-session[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:44:56.953168 systemd-logind[1452]: New session 3 of user core. May 9 23:44:56.970139 systemd[1]: Started session-3.scope - Session 3 of User core. May 9 23:44:57.018988 sshd[1588]: Connection closed by 10.0.0.1 port 38828 May 9 23:44:57.018845 sshd-session[1586]: pam_unix(sshd:session): session closed for user core May 9 23:44:57.028085 systemd[1]: sshd@2-10.0.0.40:22-10.0.0.1:38828.service: Deactivated successfully. May 9 23:44:57.029633 systemd[1]: session-3.scope: Deactivated successfully. May 9 23:44:57.031472 systemd-logind[1452]: Session 3 logged out. Waiting for processes to exit. May 9 23:44:57.041272 systemd[1]: Started sshd@3-10.0.0.40:22-10.0.0.1:38834.service - OpenSSH per-connection server daemon (10.0.0.1:38834). May 9 23:44:57.042225 systemd-logind[1452]: Removed session 3. May 9 23:44:57.078196 sshd[1593]: Accepted publickey for core from 10.0.0.1 port 38834 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:44:57.079654 sshd-session[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:44:57.083672 systemd-logind[1452]: New session 4 of user core. May 9 23:44:57.098150 systemd[1]: Started session-4.scope - Session 4 of User core. May 9 23:44:57.151205 sshd[1595]: Connection closed by 10.0.0.1 port 38834 May 9 23:44:57.151670 sshd-session[1593]: pam_unix(sshd:session): session closed for user core May 9 23:44:57.160195 systemd[1]: sshd@3-10.0.0.40:22-10.0.0.1:38834.service: Deactivated successfully. May 9 23:44:57.161828 systemd[1]: session-4.scope: Deactivated successfully. May 9 23:44:57.163035 systemd-logind[1452]: Session 4 logged out. Waiting for processes to exit. May 9 23:44:57.166008 systemd[1]: Started sshd@4-10.0.0.40:22-10.0.0.1:38842.service - OpenSSH per-connection server daemon (10.0.0.1:38842). May 9 23:44:57.170517 systemd-logind[1452]: Removed session 4. May 9 23:44:57.205255 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 38842 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:44:57.206579 sshd-session[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:44:57.211381 systemd-logind[1452]: New session 5 of user core. May 9 23:44:57.219151 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 9 23:44:57.296174 sudo[1603]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 9 23:44:57.296480 sudo[1603]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 23:44:57.311910 sudo[1603]: pam_unix(sudo:session): session closed for user root May 9 23:44:57.316125 sshd[1602]: Connection closed by 10.0.0.1 port 38842 May 9 23:44:57.316572 sshd-session[1600]: pam_unix(sshd:session): session closed for user core May 9 23:44:57.325849 systemd[1]: sshd@4-10.0.0.40:22-10.0.0.1:38842.service: Deactivated successfully. May 9 23:44:57.327826 systemd[1]: session-5.scope: Deactivated successfully. May 9 23:44:57.329394 systemd-logind[1452]: Session 5 logged out. Waiting for processes to exit. May 9 23:44:57.341417 systemd[1]: Started sshd@5-10.0.0.40:22-10.0.0.1:38848.service - OpenSSH per-connection server daemon (10.0.0.1:38848). May 9 23:44:57.342362 systemd-logind[1452]: Removed session 5. May 9 23:44:57.378706 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 38848 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:44:57.380048 sshd-session[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:44:57.384540 systemd-logind[1452]: New session 6 of user core. May 9 23:44:57.390117 systemd[1]: Started session-6.scope - Session 6 of User core. May 9 23:44:57.442578 sudo[1612]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 9 23:44:57.442870 sudo[1612]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 23:44:57.446146 sudo[1612]: pam_unix(sudo:session): session closed for user root May 9 23:44:57.451985 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 9 23:44:57.452275 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 23:44:57.474296 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 9 23:44:57.502121 augenrules[1634]: No rules May 9 23:44:57.503640 systemd[1]: audit-rules.service: Deactivated successfully. May 9 23:44:57.503815 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 9 23:44:57.505085 sudo[1611]: pam_unix(sudo:session): session closed for user root May 9 23:44:57.506595 sshd[1610]: Connection closed by 10.0.0.1 port 38848 May 9 23:44:57.507178 sshd-session[1608]: pam_unix(sshd:session): session closed for user core May 9 23:44:57.519663 systemd[1]: sshd@5-10.0.0.40:22-10.0.0.1:38848.service: Deactivated successfully. May 9 23:44:57.521022 systemd[1]: session-6.scope: Deactivated successfully. May 9 23:44:57.522294 systemd-logind[1452]: Session 6 logged out. Waiting for processes to exit. May 9 23:44:57.523403 systemd[1]: Started sshd@6-10.0.0.40:22-10.0.0.1:38854.service - OpenSSH per-connection server daemon (10.0.0.1:38854). May 9 23:44:57.524183 systemd-logind[1452]: Removed session 6. May 9 23:44:57.573412 sshd[1642]: Accepted publickey for core from 10.0.0.1 port 38854 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:44:57.574894 sshd-session[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:44:57.579345 systemd-logind[1452]: New session 7 of user core. May 9 23:44:57.586158 systemd[1]: Started session-7.scope - Session 7 of User core. 
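In this session the provisioning user removes the shipped SELinux and default audit rule files and restarts audit-rules.service; augenrules then finds an empty /etc/audit/rules.d and loads nothing, which is why the unit reports "No rules". A hypothetical drop-in of the kind augenrules would otherwise pick up (file name and rule are examples, not from this system) would be:

    # /etc/audit/rules.d/50-example.rules  (hypothetical)
    -w /etc/ssh/sshd_config -p wa -k sshd_config   # watch writes/attribute changes to sshd config
    # augenrules concatenates rules.d/*.rules into /etc/audit/audit.rules and loads the result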
May 9 23:44:57.637687 sudo[1645]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 9 23:44:57.638379 sudo[1645]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 23:44:57.655689 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 9 23:44:57.673489 systemd[1]: coreos-metadata.service: Deactivated successfully. May 9 23:44:57.674349 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 9 23:44:58.121694 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 23:44:58.137229 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 23:44:58.159943 systemd[1]: Reloading requested from client PID 1687 ('systemctl') (unit session-7.scope)... May 9 23:44:58.159974 systemd[1]: Reloading... May 9 23:44:58.232004 zram_generator::config[1728]: No configuration found. May 9 23:44:58.414545 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 23:44:58.467452 systemd[1]: Reloading finished in 307 ms. May 9 23:44:58.508094 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 9 23:44:58.508170 systemd[1]: kubelet.service: Failed with result 'signal'. May 9 23:44:58.508379 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 23:44:58.510526 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 23:44:58.610511 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 23:44:58.615171 (kubelet)[1771]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 9 23:44:58.649569 kubelet[1771]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 23:44:58.649569 kubelet[1771]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 9 23:44:58.649569 kubelet[1771]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
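The restarted kubelet picks up its flags through environment variables: KUBELET_EXTRA_ARGS is referenced but unset, and the deprecation warnings note that --container-runtime-endpoint, --pod-infra-container-image and --volume-plugin-dir should migrate into the config file. Assuming the usual kubeadm-style drop-in layout (the actual unit files on this Flatcar host are not shown in the log), the wiring looks roughly like:

    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf  (typical layout, assumed)
    [Service]
    EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env   # defines KUBELET_KUBEADM_ARGS
    EnvironmentFile=-/etc/default/kubelet                 # may define KUBELET_EXTRA_ARGS
    ExecStart=
    ExecStart=/usr/bin/kubelet --config=/var/lib/kubelet/config.yaml $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS

    # On recent kubelets the runtime endpoint can instead live in config.yaml:
    #   containerRuntimeEndpoint: unix:///run/containerd/containerd.sock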
May 9 23:44:58.649900 kubelet[1771]: I0509 23:44:58.649654 1771 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 9 23:44:59.197942 kubelet[1771]: I0509 23:44:59.197892 1771 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 9 23:44:59.197942 kubelet[1771]: I0509 23:44:59.197926 1771 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 9 23:44:59.198225 kubelet[1771]: I0509 23:44:59.198197 1771 server.go:954] "Client rotation is on, will bootstrap in background" May 9 23:44:59.248625 kubelet[1771]: I0509 23:44:59.248585 1771 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 9 23:44:59.257847 kubelet[1771]: E0509 23:44:59.257788 1771 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 9 23:44:59.257847 kubelet[1771]: I0509 23:44:59.257825 1771 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 9 23:44:59.260445 kubelet[1771]: I0509 23:44:59.260416 1771 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 9 23:44:59.260637 kubelet[1771]: I0509 23:44:59.260611 1771 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 9 23:44:59.260799 kubelet[1771]: I0509 23:44:59.260640 1771 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.40","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 9 23:44:59.260942 kubelet[1771]: I0509 23:44:59.260868 1771 topology_manager.go:138] "Creating topology manager with none policy" May 9 23:44:59.260942 kubelet[1771]: I0509 23:44:59.260877 1771 container_manager_linux.go:304] "Creating device plugin manager" May 9 
23:44:59.261102 kubelet[1771]: I0509 23:44:59.261086 1771 state_mem.go:36] "Initialized new in-memory state store" May 9 23:44:59.265994 kubelet[1771]: I0509 23:44:59.265764 1771 kubelet.go:446] "Attempting to sync node with API server" May 9 23:44:59.265994 kubelet[1771]: I0509 23:44:59.265794 1771 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 9 23:44:59.265994 kubelet[1771]: I0509 23:44:59.265821 1771 kubelet.go:352] "Adding apiserver pod source" May 9 23:44:59.265994 kubelet[1771]: I0509 23:44:59.265833 1771 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 9 23:44:59.266353 kubelet[1771]: E0509 23:44:59.266253 1771 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:44:59.266353 kubelet[1771]: E0509 23:44:59.266312 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:44:59.268452 kubelet[1771]: I0509 23:44:59.268432 1771 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 9 23:44:59.269228 kubelet[1771]: I0509 23:44:59.269144 1771 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 9 23:44:59.269906 kubelet[1771]: W0509 23:44:59.269879 1771 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 9 23:44:59.270813 kubelet[1771]: I0509 23:44:59.270779 1771 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 9 23:44:59.270813 kubelet[1771]: I0509 23:44:59.270821 1771 server.go:1287] "Started kubelet" May 9 23:44:59.271641 kubelet[1771]: I0509 23:44:59.271074 1771 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 9 23:44:59.271641 kubelet[1771]: I0509 23:44:59.271156 1771 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 9 23:44:59.271641 kubelet[1771]: I0509 23:44:59.271531 1771 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 9 23:44:59.273027 kubelet[1771]: I0509 23:44:59.272692 1771 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 9 23:44:59.273027 kubelet[1771]: I0509 23:44:59.272695 1771 server.go:490] "Adding debug handlers to kubelet server" May 9 23:44:59.273640 kubelet[1771]: I0509 23:44:59.273598 1771 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 9 23:44:59.276139 kubelet[1771]: E0509 23:44:59.275678 1771 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.40\" not found" May 9 23:44:59.276139 kubelet[1771]: I0509 23:44:59.275725 1771 volume_manager.go:297] "Starting Kubelet Volume Manager" May 9 23:44:59.276139 kubelet[1771]: I0509 23:44:59.275799 1771 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 9 23:44:59.276139 kubelet[1771]: I0509 23:44:59.275843 1771 reconciler.go:26] "Reconciler: start to sync state" May 9 23:44:59.277266 kubelet[1771]: I0509 23:44:59.277193 1771 factory.go:221] Registration of the systemd container factory successfully May 9 23:44:59.277624 kubelet[1771]: I0509 23:44:59.277405 1771 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix 
/var/run/crio/crio.sock: connect: no such file or directory May 9 23:44:59.278232 kubelet[1771]: E0509 23:44:59.278197 1771 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 9 23:44:59.280588 kubelet[1771]: I0509 23:44:59.279322 1771 factory.go:221] Registration of the containerd container factory successfully May 9 23:44:59.281297 kubelet[1771]: W0509 23:44:59.281269 1771 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.40" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope May 9 23:44:59.281336 kubelet[1771]: E0509 23:44:59.281321 1771 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.40\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" May 9 23:44:59.281703 kubelet[1771]: E0509 23:44:59.281351 1771 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.40.183e008a41811a24 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.40,UID:10.0.0.40,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.40,},FirstTimestamp:2025-05-09 23:44:59.27079786 +0000 UTC m=+0.652319906,LastTimestamp:2025-05-09 23:44:59.27079786 +0000 UTC m=+0.652319906,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.40,}" May 9 23:44:59.282108 kubelet[1771]: E0509 23:44:59.282044 1771 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.40\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" May 9 23:44:59.284808 kubelet[1771]: W0509 23:44:59.284769 1771 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope May 9 23:44:59.284880 kubelet[1771]: E0509 23:44:59.284814 1771 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" May 9 23:44:59.284906 kubelet[1771]: W0509 23:44:59.284884 1771 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope May 9 23:44:59.284906 kubelet[1771]: E0509 23:44:59.284898 1771 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" 
logger="UnhandledError" May 9 23:44:59.287480 kubelet[1771]: E0509 23:44:59.287373 1771 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.40.183e008a41f19595 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.40,UID:10.0.0.40,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.40,},FirstTimestamp:2025-05-09 23:44:59.278169493 +0000 UTC m=+0.659691619,LastTimestamp:2025-05-09 23:44:59.278169493 +0000 UTC m=+0.659691619,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.40,}" May 9 23:44:59.294663 kubelet[1771]: I0509 23:44:59.294623 1771 cpu_manager.go:221] "Starting CPU manager" policy="none" May 9 23:44:59.294663 kubelet[1771]: I0509 23:44:59.294645 1771 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 9 23:44:59.294663 kubelet[1771]: I0509 23:44:59.294665 1771 state_mem.go:36] "Initialized new in-memory state store" May 9 23:44:59.296599 kubelet[1771]: E0509 23:44:59.295139 1771 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.40.183e008a42dca725 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.40,UID:10.0.0.40,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.40 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.40,},FirstTimestamp:2025-05-09 23:44:59.293574949 +0000 UTC m=+0.675096955,LastTimestamp:2025-05-09 23:44:59.293574949 +0000 UTC m=+0.675096955,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.40,}" May 9 23:44:59.366206 kubelet[1771]: I0509 23:44:59.366082 1771 policy_none.go:49] "None policy: Start" May 9 23:44:59.366206 kubelet[1771]: I0509 23:44:59.366121 1771 memory_manager.go:186] "Starting memorymanager" policy="None" May 9 23:44:59.366206 kubelet[1771]: I0509 23:44:59.366135 1771 state_mem.go:35] "Initializing new in-memory state store" May 9 23:44:59.373717 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 9 23:44:59.376560 kubelet[1771]: E0509 23:44:59.376520 1771 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.40\" not found" May 9 23:44:59.386375 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 9 23:44:59.390942 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 9 23:44:59.391613 kubelet[1771]: I0509 23:44:59.391487 1771 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 9 23:44:59.392622 kubelet[1771]: I0509 23:44:59.392437 1771 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 9 23:44:59.392622 kubelet[1771]: I0509 23:44:59.392464 1771 status_manager.go:227] "Starting to sync pod status with apiserver" May 9 23:44:59.392622 kubelet[1771]: I0509 23:44:59.392484 1771 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 9 23:44:59.392622 kubelet[1771]: I0509 23:44:59.392491 1771 kubelet.go:2388] "Starting kubelet main sync loop" May 9 23:44:59.392622 kubelet[1771]: E0509 23:44:59.392590 1771 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 9 23:44:59.396814 kubelet[1771]: I0509 23:44:59.396769 1771 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 9 23:44:59.397226 kubelet[1771]: I0509 23:44:59.397201 1771 eviction_manager.go:189] "Eviction manager: starting control loop" May 9 23:44:59.397226 kubelet[1771]: I0509 23:44:59.397222 1771 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 9 23:44:59.397661 kubelet[1771]: I0509 23:44:59.397624 1771 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 9 23:44:59.399421 kubelet[1771]: E0509 23:44:59.399388 1771 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 9 23:44:59.399486 kubelet[1771]: E0509 23:44:59.399434 1771 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.40\" not found" May 9 23:44:59.486899 kubelet[1771]: E0509 23:44:59.486773 1771 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.40\" not found" node="10.0.0.40" May 9 23:44:59.498793 kubelet[1771]: I0509 23:44:59.498748 1771 kubelet_node_status.go:76] "Attempting to register node" node="10.0.0.40" May 9 23:44:59.502629 kubelet[1771]: I0509 23:44:59.502513 1771 kubelet_node_status.go:79] "Successfully registered node" node="10.0.0.40" May 9 23:44:59.502629 kubelet[1771]: E0509 23:44:59.502543 1771 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"10.0.0.40\": node \"10.0.0.40\" not found" May 9 23:44:59.508477 kubelet[1771]: E0509 23:44:59.508452 1771 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.40\" not found" May 9 23:44:59.609604 kubelet[1771]: E0509 23:44:59.609548 1771 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.40\" not found" May 9 23:44:59.638772 sudo[1645]: pam_unix(sudo:session): session closed for user root May 9 23:44:59.639996 sshd[1644]: Connection closed by 10.0.0.1 port 38854 May 9 23:44:59.640471 sshd-session[1642]: pam_unix(sshd:session): session closed for user core May 9 23:44:59.643719 systemd-logind[1452]: Session 7 logged out. Waiting for processes to exit. May 9 23:44:59.644079 systemd[1]: sshd@6-10.0.0.40:22-10.0.0.1:38854.service: Deactivated successfully. May 9 23:44:59.646003 systemd[1]: session-7.scope: Deactivated successfully. May 9 23:44:59.647732 systemd-logind[1452]: Removed session 7. 
May 9 23:44:59.709902 kubelet[1771]: E0509 23:44:59.709860 1771 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.40\" not found" May 9 23:44:59.810886 kubelet[1771]: E0509 23:44:59.810780 1771 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.40\" not found" May 9 23:44:59.911340 kubelet[1771]: E0509 23:44:59.911311 1771 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.40\" not found" May 9 23:45:00.011879 kubelet[1771]: E0509 23:45:00.011836 1771 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.40\" not found" May 9 23:45:00.112444 kubelet[1771]: E0509 23:45:00.112360 1771 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.40\" not found" May 9 23:45:00.200040 kubelet[1771]: I0509 23:45:00.199973 1771 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" May 9 23:45:00.200179 kubelet[1771]: W0509 23:45:00.200130 1771 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 9 23:45:00.213132 kubelet[1771]: E0509 23:45:00.213099 1771 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.40\" not found" May 9 23:45:00.266409 kubelet[1771]: E0509 23:45:00.266386 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:00.313895 kubelet[1771]: E0509 23:45:00.313858 1771 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.40\" not found" May 9 23:45:00.414830 kubelet[1771]: E0509 23:45:00.414717 1771 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.40\" not found" May 9 23:45:00.515311 kubelet[1771]: E0509 23:45:00.515265 1771 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.40\" not found" May 9 23:45:00.615915 kubelet[1771]: E0509 23:45:00.615870 1771 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.40\" not found" May 9 23:45:00.716827 kubelet[1771]: I0509 23:45:00.716799 1771 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" May 9 23:45:00.717209 containerd[1473]: time="2025-05-09T23:45:00.717170677Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 9 23:45:00.717555 kubelet[1771]: I0509 23:45:00.717357 1771 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" May 9 23:45:01.266935 kubelet[1771]: E0509 23:45:01.266888 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:01.270197 kubelet[1771]: I0509 23:45:01.270168 1771 apiserver.go:52] "Watching apiserver" May 9 23:45:01.280093 kubelet[1771]: I0509 23:45:01.276450 1771 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 9 23:45:01.285300 systemd[1]: Created slice kubepods-burstable-pod34c139b6_c37a_481d_8db7_c787120a4cdf.slice - libcontainer container kubepods-burstable-pod34c139b6_c37a_481d_8db7_c787120a4cdf.slice. 
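At this point the kubelet has received PodCIDR 192.168.1.0/24 and pushes it to the runtime, but containerd still sees no file in /etc/cni/net.d, so it logs that it will wait for another component (here, the cilium pods scheduled below) to drop a config. Purely as an illustration of the file format it is waiting for (cilium later writes its own, differently named config under /etc/cni/net.d), a minimal conflist would look like:

    {
      "cniVersion": "0.4.0",
      "name": "example-net",
      "plugins": [
        { "type": "bridge", "bridge": "cni0", "isGateway": true, "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "192.168.1.0/24" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }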
May 9 23:45:01.287292 kubelet[1771]: I0509 23:45:01.287265 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/34c139b6-c37a-481d-8db7-c787120a4cdf-clustermesh-secrets\") pod \"cilium-75wlb\" (UID: \"34c139b6-c37a-481d-8db7-c787120a4cdf\") " pod="kube-system/cilium-75wlb" May 9 23:45:01.287510 kubelet[1771]: I0509 23:45:01.287303 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/34c139b6-c37a-481d-8db7-c787120a4cdf-host-proc-sys-net\") pod \"cilium-75wlb\" (UID: \"34c139b6-c37a-481d-8db7-c787120a4cdf\") " pod="kube-system/cilium-75wlb" May 9 23:45:01.287510 kubelet[1771]: I0509 23:45:01.287334 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/34c139b6-c37a-481d-8db7-c787120a4cdf-hubble-tls\") pod \"cilium-75wlb\" (UID: \"34c139b6-c37a-481d-8db7-c787120a4cdf\") " pod="kube-system/cilium-75wlb" May 9 23:45:01.287510 kubelet[1771]: I0509 23:45:01.287349 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/95703302-49a3-408c-9d0f-f15c8244c5e6-xtables-lock\") pod \"kube-proxy-drx4l\" (UID: \"95703302-49a3-408c-9d0f-f15c8244c5e6\") " pod="kube-system/kube-proxy-drx4l" May 9 23:45:01.287510 kubelet[1771]: I0509 23:45:01.287387 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/34c139b6-c37a-481d-8db7-c787120a4cdf-hostproc\") pod \"cilium-75wlb\" (UID: \"34c139b6-c37a-481d-8db7-c787120a4cdf\") " pod="kube-system/cilium-75wlb" May 9 23:45:01.287510 kubelet[1771]: I0509 23:45:01.287418 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/34c139b6-c37a-481d-8db7-c787120a4cdf-cni-path\") pod \"cilium-75wlb\" (UID: \"34c139b6-c37a-481d-8db7-c787120a4cdf\") " pod="kube-system/cilium-75wlb" May 9 23:45:01.287510 kubelet[1771]: I0509 23:45:01.287456 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/34c139b6-c37a-481d-8db7-c787120a4cdf-etc-cni-netd\") pod \"cilium-75wlb\" (UID: \"34c139b6-c37a-481d-8db7-c787120a4cdf\") " pod="kube-system/cilium-75wlb" May 9 23:45:01.287642 kubelet[1771]: I0509 23:45:01.287477 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/34c139b6-c37a-481d-8db7-c787120a4cdf-xtables-lock\") pod \"cilium-75wlb\" (UID: \"34c139b6-c37a-481d-8db7-c787120a4cdf\") " pod="kube-system/cilium-75wlb" May 9 23:45:01.287642 kubelet[1771]: I0509 23:45:01.287499 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/95703302-49a3-408c-9d0f-f15c8244c5e6-lib-modules\") pod \"kube-proxy-drx4l\" (UID: \"95703302-49a3-408c-9d0f-f15c8244c5e6\") " pod="kube-system/kube-proxy-drx4l" May 9 23:45:01.287642 kubelet[1771]: I0509 23:45:01.287514 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn942\" (UniqueName: 
\"kubernetes.io/projected/95703302-49a3-408c-9d0f-f15c8244c5e6-kube-api-access-zn942\") pod \"kube-proxy-drx4l\" (UID: \"95703302-49a3-408c-9d0f-f15c8244c5e6\") " pod="kube-system/kube-proxy-drx4l" May 9 23:45:01.287642 kubelet[1771]: I0509 23:45:01.287532 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/34c139b6-c37a-481d-8db7-c787120a4cdf-cilium-run\") pod \"cilium-75wlb\" (UID: \"34c139b6-c37a-481d-8db7-c787120a4cdf\") " pod="kube-system/cilium-75wlb" May 9 23:45:01.287642 kubelet[1771]: I0509 23:45:01.287546 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/34c139b6-c37a-481d-8db7-c787120a4cdf-cilium-cgroup\") pod \"cilium-75wlb\" (UID: \"34c139b6-c37a-481d-8db7-c787120a4cdf\") " pod="kube-system/cilium-75wlb" May 9 23:45:01.287642 kubelet[1771]: I0509 23:45:01.287583 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/95703302-49a3-408c-9d0f-f15c8244c5e6-kube-proxy\") pod \"kube-proxy-drx4l\" (UID: \"95703302-49a3-408c-9d0f-f15c8244c5e6\") " pod="kube-system/kube-proxy-drx4l" May 9 23:45:01.287768 kubelet[1771]: I0509 23:45:01.287598 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/34c139b6-c37a-481d-8db7-c787120a4cdf-cilium-config-path\") pod \"cilium-75wlb\" (UID: \"34c139b6-c37a-481d-8db7-c787120a4cdf\") " pod="kube-system/cilium-75wlb" May 9 23:45:01.287768 kubelet[1771]: I0509 23:45:01.287613 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd2tm\" (UniqueName: \"kubernetes.io/projected/34c139b6-c37a-481d-8db7-c787120a4cdf-kube-api-access-vd2tm\") pod \"cilium-75wlb\" (UID: \"34c139b6-c37a-481d-8db7-c787120a4cdf\") " pod="kube-system/cilium-75wlb" May 9 23:45:01.287768 kubelet[1771]: I0509 23:45:01.287628 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/34c139b6-c37a-481d-8db7-c787120a4cdf-bpf-maps\") pod \"cilium-75wlb\" (UID: \"34c139b6-c37a-481d-8db7-c787120a4cdf\") " pod="kube-system/cilium-75wlb" May 9 23:45:01.287768 kubelet[1771]: I0509 23:45:01.287642 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34c139b6-c37a-481d-8db7-c787120a4cdf-lib-modules\") pod \"cilium-75wlb\" (UID: \"34c139b6-c37a-481d-8db7-c787120a4cdf\") " pod="kube-system/cilium-75wlb" May 9 23:45:01.287768 kubelet[1771]: I0509 23:45:01.287657 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/34c139b6-c37a-481d-8db7-c787120a4cdf-host-proc-sys-kernel\") pod \"cilium-75wlb\" (UID: \"34c139b6-c37a-481d-8db7-c787120a4cdf\") " pod="kube-system/cilium-75wlb" May 9 23:45:01.314329 systemd[1]: Created slice kubepods-besteffort-pod95703302_49a3_408c_9d0f_f15c8244c5e6.slice - libcontainer container kubepods-besteffort-pod95703302_49a3_408c_9d0f_f15c8244c5e6.slice. 
May 9 23:45:01.613092 kubelet[1771]: E0509 23:45:01.612973 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:45:01.614617 containerd[1473]: time="2025-05-09T23:45:01.614413629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-75wlb,Uid:34c139b6-c37a-481d-8db7-c787120a4cdf,Namespace:kube-system,Attempt:0,}" May 9 23:45:01.625911 kubelet[1771]: E0509 23:45:01.625410 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:45:01.626253 containerd[1473]: time="2025-05-09T23:45:01.626206562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-drx4l,Uid:95703302-49a3-408c-9d0f-f15c8244c5e6,Namespace:kube-system,Attempt:0,}" May 9 23:45:02.124909 containerd[1473]: time="2025-05-09T23:45:02.124860134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 23:45:02.126130 containerd[1473]: time="2025-05-09T23:45:02.125987067Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" May 9 23:45:02.126865 containerd[1473]: time="2025-05-09T23:45:02.126771112Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 23:45:02.127738 containerd[1473]: time="2025-05-09T23:45:02.127709716Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 23:45:02.127973 containerd[1473]: time="2025-05-09T23:45:02.127922572Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 9 23:45:02.130210 containerd[1473]: time="2025-05-09T23:45:02.130149552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 23:45:02.135209 containerd[1473]: time="2025-05-09T23:45:02.135164099Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 520.649724ms" May 9 23:45:02.136793 containerd[1473]: time="2025-05-09T23:45:02.136657529Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 510.364264ms" May 9 23:45:02.229218 containerd[1473]: time="2025-05-09T23:45:02.229064654Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:45:02.229218 containerd[1473]: time="2025-05-09T23:45:02.229144115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:45:02.229593 containerd[1473]: time="2025-05-09T23:45:02.229538057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:45:02.229759 containerd[1473]: time="2025-05-09T23:45:02.229727147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:45:02.229822 containerd[1473]: time="2025-05-09T23:45:02.229764036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:45:02.229909 containerd[1473]: time="2025-05-09T23:45:02.229810088Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:45:02.229909 containerd[1473]: time="2025-05-09T23:45:02.229827293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:45:02.229909 containerd[1473]: time="2025-05-09T23:45:02.229892750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:45:02.267866 kubelet[1771]: E0509 23:45:02.267155 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:02.313190 systemd[1]: Started cri-containerd-0927f341c21c03edaed7db6357fd5ed3274960a6db24f9c9aff7daeadca05a1d.scope - libcontainer container 0927f341c21c03edaed7db6357fd5ed3274960a6db24f9c9aff7daeadca05a1d. May 9 23:45:02.314736 systemd[1]: Started cri-containerd-4b7aad288c67dff02c3a442cbd65a2193da3fa12f3a88542c6bcc0077a119ba6.scope - libcontainer container 4b7aad288c67dff02c3a442cbd65a2193da3fa12f3a88542c6bcc0077a119ba6. May 9 23:45:02.335261 containerd[1473]: time="2025-05-09T23:45:02.335218402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-drx4l,Uid:95703302-49a3-408c-9d0f-f15c8244c5e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"0927f341c21c03edaed7db6357fd5ed3274960a6db24f9c9aff7daeadca05a1d\"" May 9 23:45:02.336587 kubelet[1771]: E0509 23:45:02.336351 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:45:02.338024 containerd[1473]: time="2025-05-09T23:45:02.337995846Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 9 23:45:02.338459 containerd[1473]: time="2025-05-09T23:45:02.338423278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-75wlb,Uid:34c139b6-c37a-481d-8db7-c787120a4cdf,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b7aad288c67dff02c3a442cbd65a2193da3fa12f3a88542c6bcc0077a119ba6\"" May 9 23:45:02.339449 kubelet[1771]: E0509 23:45:02.339422 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:45:02.394905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3827756107.mount: Deactivated successfully. 
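Both pod sandboxes are now running under the runc v2 shim (the cri-containerd-*.scope units above). Assuming crictl is installed and pointed at the containerd socket seen earlier in this log, the same state could be inspected with:

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images registry.k8s.io/pause:3.8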
May 9 23:45:03.268203 kubelet[1771]: E0509 23:45:03.268160 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:03.276560 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3496056757.mount: Deactivated successfully. May 9 23:45:03.695945 containerd[1473]: time="2025-05-09T23:45:03.695872551Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:45:03.697402 containerd[1473]: time="2025-05-09T23:45:03.697344522Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=27370353" May 9 23:45:03.698330 containerd[1473]: time="2025-05-09T23:45:03.698299003Z" level=info msg="ImageCreate event name:\"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:45:03.700214 containerd[1473]: time="2025-05-09T23:45:03.700176157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:45:03.700874 containerd[1473]: time="2025-05-09T23:45:03.700839845Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"27369370\" in 1.362592013s" May 9 23:45:03.700903 containerd[1473]: time="2025-05-09T23:45:03.700872653Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\"" May 9 23:45:03.702538 containerd[1473]: time="2025-05-09T23:45:03.702510027Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 9 23:45:03.703652 containerd[1473]: time="2025-05-09T23:45:03.703531525Z" level=info msg="CreateContainer within sandbox \"0927f341c21c03edaed7db6357fd5ed3274960a6db24f9c9aff7daeadca05a1d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 9 23:45:03.718018 containerd[1473]: time="2025-05-09T23:45:03.717970410Z" level=info msg="CreateContainer within sandbox \"0927f341c21c03edaed7db6357fd5ed3274960a6db24f9c9aff7daeadca05a1d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e3a2fe0742f504a88d5d3176a15feebd70c36bfdcc479ebeb02d4b519c185fe3\"" May 9 23:45:03.718672 containerd[1473]: time="2025-05-09T23:45:03.718644341Z" level=info msg="StartContainer for \"e3a2fe0742f504a88d5d3176a15feebd70c36bfdcc479ebeb02d4b519c185fe3\"" May 9 23:45:03.743220 systemd[1]: Started cri-containerd-e3a2fe0742f504a88d5d3176a15feebd70c36bfdcc479ebeb02d4b519c185fe3.scope - libcontainer container e3a2fe0742f504a88d5d3176a15feebd70c36bfdcc479ebeb02d4b519c185fe3. 
May 9 23:45:03.770715 containerd[1473]: time="2025-05-09T23:45:03.770657194Z" level=info msg="StartContainer for \"e3a2fe0742f504a88d5d3176a15feebd70c36bfdcc479ebeb02d4b519c185fe3\" returns successfully" May 9 23:45:04.269162 kubelet[1771]: E0509 23:45:04.269110 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:04.404244 kubelet[1771]: E0509 23:45:04.404205 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:45:04.413713 kubelet[1771]: I0509 23:45:04.413639 1771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-drx4l" podStartSLOduration=4.049167811 podStartE2EDuration="5.413623583s" podCreationTimestamp="2025-05-09 23:44:59 +0000 UTC" firstStartedPulling="2025-05-09 23:45:02.33724269 +0000 UTC m=+3.718764696" lastFinishedPulling="2025-05-09 23:45:03.701698462 +0000 UTC m=+5.083220468" observedRunningTime="2025-05-09 23:45:04.413210842 +0000 UTC m=+5.794732848" watchObservedRunningTime="2025-05-09 23:45:04.413623583 +0000 UTC m=+5.795145589" May 9 23:45:05.269334 kubelet[1771]: E0509 23:45:05.269274 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:05.406273 kubelet[1771]: E0509 23:45:05.406031 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:45:06.269929 kubelet[1771]: E0509 23:45:06.269874 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:06.821454 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount752166368.mount: Deactivated successfully. 
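The pod_startup_latency_tracker line for kube-proxy-drx4l can be read directly from the timestamps it reports: the end-to-end duration is observedRunningTime minus podCreationTimestamp (≈ 5.41 s), and the SLO duration excludes the image pull, which the lines above timed at about 1.36 s:

    pull = 23:45:03.701698462 − 23:45:02.337242690 = 1.364455772 s
    slo  = e2e − pull = 5.413623583 − 1.364455772  = 4.049167811 s   (matches podStartSLOduration)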
May 9 23:45:07.270709 kubelet[1771]: E0509 23:45:07.270667 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:08.072401 containerd[1473]: time="2025-05-09T23:45:08.072321511Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:45:08.073592 containerd[1473]: time="2025-05-09T23:45:08.073547015Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" May 9 23:45:08.074652 containerd[1473]: time="2025-05-09T23:45:08.074629928Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:45:08.076312 containerd[1473]: time="2025-05-09T23:45:08.076275483Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 4.373731208s" May 9 23:45:08.076365 containerd[1473]: time="2025-05-09T23:45:08.076315612Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 9 23:45:08.078328 containerd[1473]: time="2025-05-09T23:45:08.078295558Z" level=info msg="CreateContainer within sandbox \"4b7aad288c67dff02c3a442cbd65a2193da3fa12f3a88542c6bcc0077a119ba6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 9 23:45:08.089327 containerd[1473]: time="2025-05-09T23:45:08.089284366Z" level=info msg="CreateContainer within sandbox \"4b7aad288c67dff02c3a442cbd65a2193da3fa12f3a88542c6bcc0077a119ba6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7a8c0203b195d9974ecad49f5e1bc086e0ed35572a374b04bf0e2631070470fb\"" May 9 23:45:08.089785 containerd[1473]: time="2025-05-09T23:45:08.089763029Z" level=info msg="StartContainer for \"7a8c0203b195d9974ecad49f5e1bc086e0ed35572a374b04bf0e2631070470fb\"" May 9 23:45:08.119139 systemd[1]: Started cri-containerd-7a8c0203b195d9974ecad49f5e1bc086e0ed35572a374b04bf0e2631070470fb.scope - libcontainer container 7a8c0203b195d9974ecad49f5e1bc086e0ed35572a374b04bf0e2631070470fb. May 9 23:45:08.140613 containerd[1473]: time="2025-05-09T23:45:08.140516363Z" level=info msg="StartContainer for \"7a8c0203b195d9974ecad49f5e1bc086e0ed35572a374b04bf0e2631070470fb\" returns successfully" May 9 23:45:08.188099 systemd[1]: cri-containerd-7a8c0203b195d9974ecad49f5e1bc086e0ed35572a374b04bf0e2631070470fb.scope: Deactivated successfully. 
May 9 23:45:08.271774 kubelet[1771]: E0509 23:45:08.271723 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:08.307319 containerd[1473]: time="2025-05-09T23:45:08.307250165Z" level=info msg="shim disconnected" id=7a8c0203b195d9974ecad49f5e1bc086e0ed35572a374b04bf0e2631070470fb namespace=k8s.io May 9 23:45:08.307646 containerd[1473]: time="2025-05-09T23:45:08.307460691Z" level=warning msg="cleaning up after shim disconnected" id=7a8c0203b195d9974ecad49f5e1bc086e0ed35572a374b04bf0e2631070470fb namespace=k8s.io May 9 23:45:08.307646 containerd[1473]: time="2025-05-09T23:45:08.307476574Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 23:45:08.412195 kubelet[1771]: E0509 23:45:08.412082 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:45:08.414887 containerd[1473]: time="2025-05-09T23:45:08.414846267Z" level=info msg="CreateContainer within sandbox \"4b7aad288c67dff02c3a442cbd65a2193da3fa12f3a88542c6bcc0077a119ba6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 9 23:45:08.427752 containerd[1473]: time="2025-05-09T23:45:08.427699396Z" level=info msg="CreateContainer within sandbox \"4b7aad288c67dff02c3a442cbd65a2193da3fa12f3a88542c6bcc0077a119ba6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1d32d98413a38203725a5f046cc128c5322228e65ee725e89467fcf7a25b9350\"" May 9 23:45:08.428521 containerd[1473]: time="2025-05-09T23:45:08.428477523Z" level=info msg="StartContainer for \"1d32d98413a38203725a5f046cc128c5322228e65ee725e89467fcf7a25b9350\"" May 9 23:45:08.455142 systemd[1]: Started cri-containerd-1d32d98413a38203725a5f046cc128c5322228e65ee725e89467fcf7a25b9350.scope - libcontainer container 1d32d98413a38203725a5f046cc128c5322228e65ee725e89467fcf7a25b9350. May 9 23:45:08.476043 containerd[1473]: time="2025-05-09T23:45:08.475995881Z" level=info msg="StartContainer for \"1d32d98413a38203725a5f046cc128c5322228e65ee725e89467fcf7a25b9350\" returns successfully" May 9 23:45:08.489052 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 9 23:45:08.489285 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 9 23:45:08.489351 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 9 23:45:08.498308 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 9 23:45:08.498514 systemd[1]: cri-containerd-1d32d98413a38203725a5f046cc128c5322228e65ee725e89467fcf7a25b9350.scope: Deactivated successfully. May 9 23:45:08.508733 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 9 23:45:08.526557 containerd[1473]: time="2025-05-09T23:45:08.526493921Z" level=info msg="shim disconnected" id=1d32d98413a38203725a5f046cc128c5322228e65ee725e89467fcf7a25b9350 namespace=k8s.io May 9 23:45:08.526942 containerd[1473]: time="2025-05-09T23:45:08.526771060Z" level=warning msg="cleaning up after shim disconnected" id=1d32d98413a38203725a5f046cc128c5322228e65ee725e89467fcf7a25b9350 namespace=k8s.io May 9 23:45:08.526942 containerd[1473]: time="2025-05-09T23:45:08.526788384Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 23:45:09.085094 systemd[1]: run-containerd-runc-k8s.io-7a8c0203b195d9974ecad49f5e1bc086e0ed35572a374b04bf0e2631070470fb-runc.7YEGsi.mount: Deactivated successfully. 
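The recurring dns.go "Nameserver limits exceeded" error above means the host resolv.conf lists more nameservers than kubelet will pass through to pods; kubelet keeps only the first three (the classic glibc resolver limit), which is why the applied line shows exactly 1.1.1.1 1.0.0.1 8.8.8.8. A behavioral sketch, with the fourth server invented purely for illustration:

# hypothetical host resolv.conf ordering; only the first three appear in the log
nameservers = ["1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"]
MAX_NAMESERVERS = 3                     # limit kubelet enforces when building pod resolv.conf
applied = nameservers[:MAX_NAMESERVERS]
if len(nameservers) > MAX_NAMESERVERS:
    print("Nameserver limits were exceeded, some nameservers have been omitted, "
          "the applied nameserver line is: " + " ".join(applied))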
May 9 23:45:09.085177 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a8c0203b195d9974ecad49f5e1bc086e0ed35572a374b04bf0e2631070470fb-rootfs.mount: Deactivated successfully. May 9 23:45:09.271888 kubelet[1771]: E0509 23:45:09.271828 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:09.416666 kubelet[1771]: E0509 23:45:09.416320 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:45:09.418220 containerd[1473]: time="2025-05-09T23:45:09.418182889Z" level=info msg="CreateContainer within sandbox \"4b7aad288c67dff02c3a442cbd65a2193da3fa12f3a88542c6bcc0077a119ba6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 9 23:45:09.438473 containerd[1473]: time="2025-05-09T23:45:09.438425193Z" level=info msg="CreateContainer within sandbox \"4b7aad288c67dff02c3a442cbd65a2193da3fa12f3a88542c6bcc0077a119ba6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"817c133fdd5f3add324e348e76ace4f7fb90bd9c91ded2531a8dbecebdf2fe62\"" May 9 23:45:09.439253 containerd[1473]: time="2025-05-09T23:45:09.439227441Z" level=info msg="StartContainer for \"817c133fdd5f3add324e348e76ace4f7fb90bd9c91ded2531a8dbecebdf2fe62\"" May 9 23:45:09.469177 systemd[1]: Started cri-containerd-817c133fdd5f3add324e348e76ace4f7fb90bd9c91ded2531a8dbecebdf2fe62.scope - libcontainer container 817c133fdd5f3add324e348e76ace4f7fb90bd9c91ded2531a8dbecebdf2fe62. May 9 23:45:09.495992 containerd[1473]: time="2025-05-09T23:45:09.495938997Z" level=info msg="StartContainer for \"817c133fdd5f3add324e348e76ace4f7fb90bd9c91ded2531a8dbecebdf2fe62\" returns successfully" May 9 23:45:09.512118 systemd[1]: cri-containerd-817c133fdd5f3add324e348e76ace4f7fb90bd9c91ded2531a8dbecebdf2fe62.scope: Deactivated successfully. May 9 23:45:09.534564 containerd[1473]: time="2025-05-09T23:45:09.534468959Z" level=info msg="shim disconnected" id=817c133fdd5f3add324e348e76ace4f7fb90bd9c91ded2531a8dbecebdf2fe62 namespace=k8s.io May 9 23:45:09.534564 containerd[1473]: time="2025-05-09T23:45:09.534526011Z" level=warning msg="cleaning up after shim disconnected" id=817c133fdd5f3add324e348e76ace4f7fb90bd9c91ded2531a8dbecebdf2fe62 namespace=k8s.io May 9 23:45:09.534564 containerd[1473]: time="2025-05-09T23:45:09.534534693Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 23:45:10.084649 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-817c133fdd5f3add324e348e76ace4f7fb90bd9c91ded2531a8dbecebdf2fe62-rootfs.mount: Deactivated successfully. May 9 23:45:10.272727 kubelet[1771]: E0509 23:45:10.272672 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:10.421976 kubelet[1771]: E0509 23:45:10.421670 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:45:10.423665 containerd[1473]: time="2025-05-09T23:45:10.423628213Z" level=info msg="CreateContainer within sandbox \"4b7aad288c67dff02c3a442cbd65a2193da3fa12f3a88542c6bcc0077a119ba6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 9 23:45:10.439507 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3848263284.mount: Deactivated successfully. 
May 9 23:45:10.444248 containerd[1473]: time="2025-05-09T23:45:10.444204174Z" level=info msg="CreateContainer within sandbox \"4b7aad288c67dff02c3a442cbd65a2193da3fa12f3a88542c6bcc0077a119ba6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"66ec40fa52044cc3f140b16becbb90acb3335c0c1a1103d0f40b0aa693b7fa1e\"" May 9 23:45:10.444907 containerd[1473]: time="2025-05-09T23:45:10.444839902Z" level=info msg="StartContainer for \"66ec40fa52044cc3f140b16becbb90acb3335c0c1a1103d0f40b0aa693b7fa1e\"" May 9 23:45:10.471130 systemd[1]: Started cri-containerd-66ec40fa52044cc3f140b16becbb90acb3335c0c1a1103d0f40b0aa693b7fa1e.scope - libcontainer container 66ec40fa52044cc3f140b16becbb90acb3335c0c1a1103d0f40b0aa693b7fa1e. May 9 23:45:10.491571 systemd[1]: cri-containerd-66ec40fa52044cc3f140b16becbb90acb3335c0c1a1103d0f40b0aa693b7fa1e.scope: Deactivated successfully. May 9 23:45:10.494197 containerd[1473]: time="2025-05-09T23:45:10.494102783Z" level=info msg="StartContainer for \"66ec40fa52044cc3f140b16becbb90acb3335c0c1a1103d0f40b0aa693b7fa1e\" returns successfully" May 9 23:45:10.516622 containerd[1473]: time="2025-05-09T23:45:10.516563244Z" level=info msg="shim disconnected" id=66ec40fa52044cc3f140b16becbb90acb3335c0c1a1103d0f40b0aa693b7fa1e namespace=k8s.io May 9 23:45:10.517135 containerd[1473]: time="2025-05-09T23:45:10.516921076Z" level=warning msg="cleaning up after shim disconnected" id=66ec40fa52044cc3f140b16becbb90acb3335c0c1a1103d0f40b0aa693b7fa1e namespace=k8s.io May 9 23:45:10.517135 containerd[1473]: time="2025-05-09T23:45:10.516982289Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 23:45:11.084702 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-66ec40fa52044cc3f140b16becbb90acb3335c0c1a1103d0f40b0aa693b7fa1e-rootfs.mount: Deactivated successfully. May 9 23:45:11.273331 kubelet[1771]: E0509 23:45:11.273267 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:11.425944 kubelet[1771]: E0509 23:45:11.425845 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:45:11.427637 containerd[1473]: time="2025-05-09T23:45:11.427598703Z" level=info msg="CreateContainer within sandbox \"4b7aad288c67dff02c3a442cbd65a2193da3fa12f3a88542c6bcc0077a119ba6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 9 23:45:11.443337 containerd[1473]: time="2025-05-09T23:45:11.443268693Z" level=info msg="CreateContainer within sandbox \"4b7aad288c67dff02c3a442cbd65a2193da3fa12f3a88542c6bcc0077a119ba6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9a34ed53ccbf8c7c3b962df2539b96f5df499c50ed323326828de16eb064a5c8\"" May 9 23:45:11.443833 containerd[1473]: time="2025-05-09T23:45:11.443789755Z" level=info msg="StartContainer for \"9a34ed53ccbf8c7c3b962df2539b96f5df499c50ed323326828de16eb064a5c8\"" May 9 23:45:11.468138 systemd[1]: Started cri-containerd-9a34ed53ccbf8c7c3b962df2539b96f5df499c50ed323326828de16eb064a5c8.scope - libcontainer container 9a34ed53ccbf8c7c3b962df2539b96f5df499c50ed323326828de16eb064a5c8. 
May 9 23:45:11.489664 containerd[1473]: time="2025-05-09T23:45:11.489587966Z" level=info msg="StartContainer for \"9a34ed53ccbf8c7c3b962df2539b96f5df499c50ed323326828de16eb064a5c8\" returns successfully" May 9 23:45:11.618363 kubelet[1771]: I0509 23:45:11.618143 1771 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 9 23:45:11.989040 kernel: Initializing XFRM netlink socket May 9 23:45:12.273530 kubelet[1771]: E0509 23:45:12.273396 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:12.429622 kubelet[1771]: E0509 23:45:12.429595 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:45:12.450181 kubelet[1771]: I0509 23:45:12.450135 1771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-75wlb" podStartSLOduration=7.712921724 podStartE2EDuration="13.450117082s" podCreationTimestamp="2025-05-09 23:44:59 +0000 UTC" firstStartedPulling="2025-05-09 23:45:02.339880017 +0000 UTC m=+3.721401983" lastFinishedPulling="2025-05-09 23:45:08.077075335 +0000 UTC m=+9.458597341" observedRunningTime="2025-05-09 23:45:12.44794831 +0000 UTC m=+13.829470316" watchObservedRunningTime="2025-05-09 23:45:12.450117082 +0000 UTC m=+13.831639088" May 9 23:45:13.274402 kubelet[1771]: E0509 23:45:13.274355 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:13.434019 kubelet[1771]: E0509 23:45:13.432740 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:45:13.609413 systemd-networkd[1396]: cilium_host: Link UP May 9 23:45:13.609533 systemd-networkd[1396]: cilium_net: Link UP May 9 23:45:13.609536 systemd-networkd[1396]: cilium_net: Gained carrier May 9 23:45:13.609661 systemd-networkd[1396]: cilium_host: Gained carrier May 9 23:45:13.610451 systemd-networkd[1396]: cilium_host: Gained IPv6LL May 9 23:45:13.690567 systemd-networkd[1396]: cilium_vxlan: Link UP May 9 23:45:13.690576 systemd-networkd[1396]: cilium_vxlan: Gained carrier May 9 23:45:13.986988 kernel: NET: Registered PF_ALG protocol family May 9 23:45:14.105396 systemd-networkd[1396]: cilium_net: Gained IPv6LL May 9 23:45:14.274617 kubelet[1771]: E0509 23:45:14.274507 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:14.434970 kubelet[1771]: E0509 23:45:14.434914 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:45:14.564498 systemd-networkd[1396]: lxc_health: Link UP May 9 23:45:14.576455 systemd-networkd[1396]: lxc_health: Gained carrier May 9 23:45:14.874213 systemd-networkd[1396]: cilium_vxlan: Gained IPv6LL May 9 23:45:15.275654 kubelet[1771]: E0509 23:45:15.275608 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:15.615094 kubelet[1771]: E0509 23:45:15.614430 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:45:15.813655 
systemd[1]: Created slice kubepods-besteffort-podd7eafcce_6dcf_4cb3_b94f_47ff1bcb236e.slice - libcontainer container kubepods-besteffort-podd7eafcce_6dcf_4cb3_b94f_47ff1bcb236e.slice. May 9 23:45:15.833193 systemd-networkd[1396]: lxc_health: Gained IPv6LL May 9 23:45:15.877360 kubelet[1771]: I0509 23:45:15.877237 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g98gh\" (UniqueName: \"kubernetes.io/projected/d7eafcce-6dcf-4cb3-b94f-47ff1bcb236e-kube-api-access-g98gh\") pod \"nginx-deployment-7fcdb87857-qgv29\" (UID: \"d7eafcce-6dcf-4cb3-b94f-47ff1bcb236e\") " pod="default/nginx-deployment-7fcdb87857-qgv29" May 9 23:45:16.117790 containerd[1473]: time="2025-05-09T23:45:16.117737644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-qgv29,Uid:d7eafcce-6dcf-4cb3-b94f-47ff1bcb236e,Namespace:default,Attempt:0,}" May 9 23:45:16.185709 systemd-networkd[1396]: lxcdb4d3a17e7ca: Link UP May 9 23:45:16.197603 kernel: eth0: renamed from tmp17592 May 9 23:45:16.204199 systemd-networkd[1396]: lxcdb4d3a17e7ca: Gained carrier May 9 23:45:16.276340 kubelet[1771]: E0509 23:45:16.276286 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:17.277178 kubelet[1771]: E0509 23:45:17.277133 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:17.819096 systemd-networkd[1396]: lxcdb4d3a17e7ca: Gained IPv6LL May 9 23:45:18.278226 kubelet[1771]: E0509 23:45:18.278164 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:19.262191 containerd[1473]: time="2025-05-09T23:45:19.261970553Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:45:19.262191 containerd[1473]: time="2025-05-09T23:45:19.262047285Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:45:19.262191 containerd[1473]: time="2025-05-09T23:45:19.262058446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:45:19.262191 containerd[1473]: time="2025-05-09T23:45:19.262143619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:45:19.266300 kubelet[1771]: E0509 23:45:19.266174 1771 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:19.278899 kubelet[1771]: E0509 23:45:19.278862 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:19.282152 systemd[1]: Started cri-containerd-175920efaa11c2a936e84ce7c785a4da3c48700aa861f65dc1a93204f92226a4.scope - libcontainer container 175920efaa11c2a936e84ce7c785a4da3c48700aa861f65dc1a93204f92226a4. 
May 9 23:45:19.295445 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 9 23:45:19.313707 containerd[1473]: time="2025-05-09T23:45:19.313639044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-qgv29,Uid:d7eafcce-6dcf-4cb3-b94f-47ff1bcb236e,Namespace:default,Attempt:0,} returns sandbox id \"175920efaa11c2a936e84ce7c785a4da3c48700aa861f65dc1a93204f92226a4\"" May 9 23:45:19.314807 containerd[1473]: time="2025-05-09T23:45:19.314727809Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 9 23:45:20.279728 kubelet[1771]: E0509 23:45:20.279523 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:21.280417 kubelet[1771]: E0509 23:45:21.280378 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:21.506498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount682147690.mount: Deactivated successfully. May 9 23:45:22.251890 containerd[1473]: time="2025-05-09T23:45:22.251833783Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:45:22.252381 containerd[1473]: time="2025-05-09T23:45:22.252267203Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=69948859" May 9 23:45:22.253061 containerd[1473]: time="2025-05-09T23:45:22.253029788Z" level=info msg="ImageCreate event name:\"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:45:22.255933 containerd[1473]: time="2025-05-09T23:45:22.255891863Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:45:22.257451 containerd[1473]: time="2025-05-09T23:45:22.257409873Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\", size \"69948737\" in 2.942647979s" May 9 23:45:22.257500 containerd[1473]: time="2025-05-09T23:45:22.257450279Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\"" May 9 23:45:22.259499 containerd[1473]: time="2025-05-09T23:45:22.259464517Z" level=info msg="CreateContainer within sandbox \"175920efaa11c2a936e84ce7c785a4da3c48700aa861f65dc1a93204f92226a4\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" May 9 23:45:22.273320 containerd[1473]: time="2025-05-09T23:45:22.273221737Z" level=info msg="CreateContainer within sandbox \"175920efaa11c2a936e84ce7c785a4da3c48700aa861f65dc1a93204f92226a4\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"916387da297b76b6d9a136bfd0a163d6e047897238d1fd4ad83b7f5782fecc1b\"" May 9 23:45:22.273863 containerd[1473]: time="2025-05-09T23:45:22.273798417Z" level=info msg="StartContainer for \"916387da297b76b6d9a136bfd0a163d6e047897238d1fd4ad83b7f5782fecc1b\"" May 9 23:45:22.281472 kubelet[1771]: E0509 23:45:22.281418 1771 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:22.308219 systemd[1]: Started cri-containerd-916387da297b76b6d9a136bfd0a163d6e047897238d1fd4ad83b7f5782fecc1b.scope - libcontainer container 916387da297b76b6d9a136bfd0a163d6e047897238d1fd4ad83b7f5782fecc1b. May 9 23:45:22.338005 containerd[1473]: time="2025-05-09T23:45:22.337937678Z" level=info msg="StartContainer for \"916387da297b76b6d9a136bfd0a163d6e047897238d1fd4ad83b7f5782fecc1b\" returns successfully" May 9 23:45:22.459120 kubelet[1771]: I0509 23:45:22.459056 1771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-qgv29" podStartSLOduration=4.515288552 podStartE2EDuration="7.459038287s" podCreationTimestamp="2025-05-09 23:45:15 +0000 UTC" firstStartedPulling="2025-05-09 23:45:19.314461169 +0000 UTC m=+20.695983175" lastFinishedPulling="2025-05-09 23:45:22.258210904 +0000 UTC m=+23.639732910" observedRunningTime="2025-05-09 23:45:22.458854622 +0000 UTC m=+23.840376628" watchObservedRunningTime="2025-05-09 23:45:22.459038287 +0000 UTC m=+23.840560333" May 9 23:45:23.281810 kubelet[1771]: E0509 23:45:23.281764 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:24.282347 kubelet[1771]: E0509 23:45:24.282303 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:24.678397 kubelet[1771]: I0509 23:45:24.678134 1771 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 9 23:45:24.678627 kubelet[1771]: E0509 23:45:24.678599 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:45:25.283017 kubelet[1771]: E0509 23:45:25.282945 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:25.454479 kubelet[1771]: E0509 23:45:25.454438 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:45:26.283483 kubelet[1771]: E0509 23:45:26.283442 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:27.283973 kubelet[1771]: E0509 23:45:27.283892 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:27.691488 systemd[1]: Created slice kubepods-besteffort-pod30d01a68_a0c2_4956_a85e_fc8f45f2627e.slice - libcontainer container kubepods-besteffort-pod30d01a68_a0c2_4956_a85e_fc8f45f2627e.slice. 
May 9 23:45:27.737629 kubelet[1771]: I0509 23:45:27.737575 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57zjr\" (UniqueName: \"kubernetes.io/projected/30d01a68-a0c2-4956-a85e-fc8f45f2627e-kube-api-access-57zjr\") pod \"nfs-server-provisioner-0\" (UID: \"30d01a68-a0c2-4956-a85e-fc8f45f2627e\") " pod="default/nfs-server-provisioner-0" May 9 23:45:27.737629 kubelet[1771]: I0509 23:45:27.737619 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/30d01a68-a0c2-4956-a85e-fc8f45f2627e-data\") pod \"nfs-server-provisioner-0\" (UID: \"30d01a68-a0c2-4956-a85e-fc8f45f2627e\") " pod="default/nfs-server-provisioner-0" May 9 23:45:27.995612 containerd[1473]: time="2025-05-09T23:45:27.995557950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:30d01a68-a0c2-4956-a85e-fc8f45f2627e,Namespace:default,Attempt:0,}" May 9 23:45:28.018865 systemd-networkd[1396]: lxcd67e61475a46: Link UP May 9 23:45:28.035997 kernel: eth0: renamed from tmp5f04a May 9 23:45:28.042560 systemd-networkd[1396]: lxcd67e61475a46: Gained carrier May 9 23:45:28.229846 containerd[1473]: time="2025-05-09T23:45:28.229746912Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:45:28.229846 containerd[1473]: time="2025-05-09T23:45:28.229816000Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:45:28.229846 containerd[1473]: time="2025-05-09T23:45:28.229841842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:45:28.230084 containerd[1473]: time="2025-05-09T23:45:28.229931173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:45:28.242027 systemd[1]: run-containerd-runc-k8s.io-5f04a5a0da91d840c2cbbe429ee493e0306ae3d9ebc05f3f6481335588255907-runc.Zh4pzG.mount: Deactivated successfully. May 9 23:45:28.250122 systemd[1]: Started cri-containerd-5f04a5a0da91d840c2cbbe429ee493e0306ae3d9ebc05f3f6481335588255907.scope - libcontainer container 5f04a5a0da91d840c2cbbe429ee493e0306ae3d9ebc05f3f6481335588255907. 
May 9 23:45:28.261309 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 9 23:45:28.277791 containerd[1473]: time="2025-05-09T23:45:28.277751233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:30d01a68-a0c2-4956-a85e-fc8f45f2627e,Namespace:default,Attempt:0,} returns sandbox id \"5f04a5a0da91d840c2cbbe429ee493e0306ae3d9ebc05f3f6481335588255907\"" May 9 23:45:28.279660 containerd[1473]: time="2025-05-09T23:45:28.279621487Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" May 9 23:45:28.284872 kubelet[1771]: E0509 23:45:28.284832 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:29.285032 kubelet[1771]: E0509 23:45:29.284983 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:29.913113 systemd-networkd[1396]: lxcd67e61475a46: Gained IPv6LL May 9 23:45:30.057841 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3480493493.mount: Deactivated successfully. May 9 23:45:30.286210 kubelet[1771]: E0509 23:45:30.286171 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:31.286883 kubelet[1771]: E0509 23:45:31.286827 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:31.467969 containerd[1473]: time="2025-05-09T23:45:31.467838824Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373625" May 9 23:45:31.472654 containerd[1473]: time="2025-05-09T23:45:31.472594678Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 3.192932307s" May 9 23:45:31.472654 containerd[1473]: time="2025-05-09T23:45:31.472648804Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" May 9 23:45:31.474857 containerd[1473]: time="2025-05-09T23:45:31.474796307Z" level=info msg="CreateContainer within sandbox \"5f04a5a0da91d840c2cbbe429ee493e0306ae3d9ebc05f3f6481335588255907\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" May 9 23:45:31.482206 containerd[1473]: time="2025-05-09T23:45:31.482139949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:45:31.483247 containerd[1473]: time="2025-05-09T23:45:31.483209100Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:45:31.484091 containerd[1473]: time="2025-05-09T23:45:31.484053228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 
23:45:31.493339 containerd[1473]: time="2025-05-09T23:45:31.493239741Z" level=info msg="CreateContainer within sandbox \"5f04a5a0da91d840c2cbbe429ee493e0306ae3d9ebc05f3f6481335588255907\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"4c8bb0e92c82d3c02f5bfbdd280df36d5e111dbcb70c81635f121cac83688651\"" May 9 23:45:31.494027 containerd[1473]: time="2025-05-09T23:45:31.493778997Z" level=info msg="StartContainer for \"4c8bb0e92c82d3c02f5bfbdd280df36d5e111dbcb70c81635f121cac83688651\"" May 9 23:45:31.575212 systemd[1]: Started cri-containerd-4c8bb0e92c82d3c02f5bfbdd280df36d5e111dbcb70c81635f121cac83688651.scope - libcontainer container 4c8bb0e92c82d3c02f5bfbdd280df36d5e111dbcb70c81635f121cac83688651. May 9 23:45:31.662170 containerd[1473]: time="2025-05-09T23:45:31.662115273Z" level=info msg="StartContainer for \"4c8bb0e92c82d3c02f5bfbdd280df36d5e111dbcb70c81635f121cac83688651\" returns successfully" May 9 23:45:32.287165 kubelet[1771]: E0509 23:45:32.287101 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:32.481779 kubelet[1771]: I0509 23:45:32.481717 1771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.287357944 podStartE2EDuration="5.481700762s" podCreationTimestamp="2025-05-09 23:45:27 +0000 UTC" firstStartedPulling="2025-05-09 23:45:28.279191358 +0000 UTC m=+29.660713324" lastFinishedPulling="2025-05-09 23:45:31.473534176 +0000 UTC m=+32.855056142" observedRunningTime="2025-05-09 23:45:32.481676799 +0000 UTC m=+33.863198805" watchObservedRunningTime="2025-05-09 23:45:32.481700762 +0000 UTC m=+33.863222768" May 9 23:45:33.287483 kubelet[1771]: E0509 23:45:33.287420 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:34.288230 kubelet[1771]: E0509 23:45:34.288175 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:35.098624 update_engine[1455]: I20250509 23:45:35.098004 1455 update_attempter.cc:509] Updating boot flags... 
May 9 23:45:35.165013 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3169) May 9 23:45:35.201868 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3167) May 9 23:45:35.225999 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3167) May 9 23:45:35.289105 kubelet[1771]: E0509 23:45:35.289062 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:36.290499 kubelet[1771]: E0509 23:45:36.290428 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:37.291266 kubelet[1771]: E0509 23:45:37.291215 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:38.291376 kubelet[1771]: E0509 23:45:38.291329 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:39.266461 kubelet[1771]: E0509 23:45:39.266416 1771 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:39.291869 kubelet[1771]: E0509 23:45:39.291827 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:40.292856 kubelet[1771]: E0509 23:45:40.292797 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:41.293915 kubelet[1771]: E0509 23:45:41.293857 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:41.333274 systemd[1]: Created slice kubepods-besteffort-podbb4ca5e3_0099_432c_9646_62d239acee59.slice - libcontainer container kubepods-besteffort-podbb4ca5e3_0099_432c_9646_62d239acee59.slice. May 9 23:45:41.517547 kubelet[1771]: I0509 23:45:41.517508 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c069106d-d03d-48f8-b9c9-e650b23ac91b\" (UniqueName: \"kubernetes.io/nfs/bb4ca5e3-0099-432c-9646-62d239acee59-pvc-c069106d-d03d-48f8-b9c9-e650b23ac91b\") pod \"test-pod-1\" (UID: \"bb4ca5e3-0099-432c-9646-62d239acee59\") " pod="default/test-pod-1" May 9 23:45:41.517775 kubelet[1771]: I0509 23:45:41.517741 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5b47\" (UniqueName: \"kubernetes.io/projected/bb4ca5e3-0099-432c-9646-62d239acee59-kube-api-access-b5b47\") pod \"test-pod-1\" (UID: \"bb4ca5e3-0099-432c-9646-62d239acee59\") " pod="default/test-pod-1" May 9 23:45:41.638051 kernel: FS-Cache: Loaded May 9 23:45:41.665097 kernel: RPC: Registered named UNIX socket transport module. May 9 23:45:41.665250 kernel: RPC: Registered udp transport module. May 9 23:45:41.665270 kernel: RPC: Registered tcp transport module. May 9 23:45:41.665294 kernel: RPC: Registered tcp-with-tls transport module. May 9 23:45:41.666737 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
May 9 23:45:41.843426 kernel: NFS: Registering the id_resolver key type May 9 23:45:41.843532 kernel: Key type id_resolver registered May 9 23:45:41.844072 kernel: Key type id_legacy registered May 9 23:45:41.866190 nfsidmap[3195]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' May 9 23:45:41.872994 nfsidmap[3198]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' May 9 23:45:41.936289 containerd[1473]: time="2025-05-09T23:45:41.936173369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:bb4ca5e3-0099-432c-9646-62d239acee59,Namespace:default,Attempt:0,}" May 9 23:45:41.960093 systemd-networkd[1396]: lxc165faf80a103: Link UP May 9 23:45:41.966083 kernel: eth0: renamed from tmp4ec27 May 9 23:45:41.971687 systemd-networkd[1396]: lxc165faf80a103: Gained carrier May 9 23:45:42.111384 containerd[1473]: time="2025-05-09T23:45:42.110725302Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:45:42.111384 containerd[1473]: time="2025-05-09T23:45:42.110796787Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:45:42.111384 containerd[1473]: time="2025-05-09T23:45:42.110811668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:45:42.111384 containerd[1473]: time="2025-05-09T23:45:42.110893874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:45:42.129143 systemd[1]: Started cri-containerd-4ec27d28eaf5e83f20c99f87f078f9f04288961fb6bc2164b42f245b512825ca.scope - libcontainer container 4ec27d28eaf5e83f20c99f87f078f9f04288961fb6bc2164b42f245b512825ca. 
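The nfsidmap messages above come from NFSv4 id mapping: owner names arrive on the wire as user@domain, and the client only maps them when the domain part matches its configured local domain ("localdomain" here); otherwise the owner usually falls back to the anonymous user. A rough behavioral sketch of that check (the real logic lives in libnfsidmap's nss plugin, not Python):

def map_owner(principal: str, local_domain: str):
    # NFSv4 owners look like "user@domain"; only a matching domain can be mapped
    user, _, domain = principal.partition("@")
    if domain.lower() != local_domain.lower():
        print(f"nss_getpwnam: name '{principal}' does not map into domain '{local_domain}'")
        return None        # caller typically substitutes the anonymous (nobody) uid
    return user

map_owner("root@nfs-server-provisioner.default.svc.cluster.local", "localdomain")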
May 9 23:45:42.139807 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 9 23:45:42.159901 containerd[1473]: time="2025-05-09T23:45:42.159817656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:bb4ca5e3-0099-432c-9646-62d239acee59,Namespace:default,Attempt:0,} returns sandbox id \"4ec27d28eaf5e83f20c99f87f078f9f04288961fb6bc2164b42f245b512825ca\"" May 9 23:45:42.161134 containerd[1473]: time="2025-05-09T23:45:42.161017384Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 9 23:45:42.294754 kubelet[1771]: E0509 23:45:42.294703 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:42.360237 containerd[1473]: time="2025-05-09T23:45:42.360159004Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:45:42.360836 containerd[1473]: time="2025-05-09T23:45:42.360781050Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" May 9 23:45:42.363995 containerd[1473]: time="2025-05-09T23:45:42.363946042Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\", size \"69948737\" in 202.893775ms" May 9 23:45:42.364031 containerd[1473]: time="2025-05-09T23:45:42.363999966Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\"" May 9 23:45:42.366412 containerd[1473]: time="2025-05-09T23:45:42.366257491Z" level=info msg="CreateContainer within sandbox \"4ec27d28eaf5e83f20c99f87f078f9f04288961fb6bc2164b42f245b512825ca\" for container &ContainerMetadata{Name:test,Attempt:0,}" May 9 23:45:42.378847 containerd[1473]: time="2025-05-09T23:45:42.378775567Z" level=info msg="CreateContainer within sandbox \"4ec27d28eaf5e83f20c99f87f078f9f04288961fb6bc2164b42f245b512825ca\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"f6ca2ab961e8318d118b84cf960c5c12212a7be521716fa62e8bb6092adf878d\"" May 9 23:45:42.379433 containerd[1473]: time="2025-05-09T23:45:42.379397853Z" level=info msg="StartContainer for \"f6ca2ab961e8318d118b84cf960c5c12212a7be521716fa62e8bb6092adf878d\"" May 9 23:45:42.420188 systemd[1]: Started cri-containerd-f6ca2ab961e8318d118b84cf960c5c12212a7be521716fa62e8bb6092adf878d.scope - libcontainer container f6ca2ab961e8318d118b84cf960c5c12212a7be521716fa62e8bb6092adf878d. 
May 9 23:45:42.446594 containerd[1473]: time="2025-05-09T23:45:42.446533448Z" level=info msg="StartContainer for \"f6ca2ab961e8318d118b84cf960c5c12212a7be521716fa62e8bb6092adf878d\" returns successfully" May 9 23:45:42.512489 kubelet[1771]: I0509 23:45:42.512424 1771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=15.308417896 podStartE2EDuration="15.512404871s" podCreationTimestamp="2025-05-09 23:45:27 +0000 UTC" firstStartedPulling="2025-05-09 23:45:42.16068324 +0000 UTC m=+43.542205246" lastFinishedPulling="2025-05-09 23:45:42.364670215 +0000 UTC m=+43.746192221" observedRunningTime="2025-05-09 23:45:42.51211225 +0000 UTC m=+43.893634256" watchObservedRunningTime="2025-05-09 23:45:42.512404871 +0000 UTC m=+43.893926877" May 9 23:45:43.295067 kubelet[1771]: E0509 23:45:43.295008 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:43.673174 systemd-networkd[1396]: lxc165faf80a103: Gained IPv6LL May 9 23:45:44.295498 kubelet[1771]: E0509 23:45:44.295443 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:45.296283 kubelet[1771]: E0509 23:45:45.296234 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:45.700094 containerd[1473]: time="2025-05-09T23:45:45.700040039Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 9 23:45:45.705636 containerd[1473]: time="2025-05-09T23:45:45.705588408Z" level=info msg="StopContainer for \"9a34ed53ccbf8c7c3b962df2539b96f5df499c50ed323326828de16eb064a5c8\" with timeout 2 (s)" May 9 23:45:45.705923 containerd[1473]: time="2025-05-09T23:45:45.705899189Z" level=info msg="Stop container \"9a34ed53ccbf8c7c3b962df2539b96f5df499c50ed323326828de16eb064a5c8\" with signal terminated" May 9 23:45:45.710936 systemd-networkd[1396]: lxc_health: Link DOWN May 9 23:45:45.710943 systemd-networkd[1396]: lxc_health: Lost carrier May 9 23:45:45.739145 systemd[1]: cri-containerd-9a34ed53ccbf8c7c3b962df2539b96f5df499c50ed323326828de16eb064a5c8.scope: Deactivated successfully. May 9 23:45:45.739560 systemd[1]: cri-containerd-9a34ed53ccbf8c7c3b962df2539b96f5df499c50ed323326828de16eb064a5c8.scope: Consumed 6.645s CPU time. May 9 23:45:45.757133 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a34ed53ccbf8c7c3b962df2539b96f5df499c50ed323326828de16eb064a5c8-rootfs.mount: Deactivated successfully. 
May 9 23:45:45.768323 containerd[1473]: time="2025-05-09T23:45:45.768190335Z" level=info msg="shim disconnected" id=9a34ed53ccbf8c7c3b962df2539b96f5df499c50ed323326828de16eb064a5c8 namespace=k8s.io May 9 23:45:45.768323 containerd[1473]: time="2025-05-09T23:45:45.768254459Z" level=warning msg="cleaning up after shim disconnected" id=9a34ed53ccbf8c7c3b962df2539b96f5df499c50ed323326828de16eb064a5c8 namespace=k8s.io May 9 23:45:45.768323 containerd[1473]: time="2025-05-09T23:45:45.768263700Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 23:45:45.781163 containerd[1473]: time="2025-05-09T23:45:45.781036150Z" level=info msg="StopContainer for \"9a34ed53ccbf8c7c3b962df2539b96f5df499c50ed323326828de16eb064a5c8\" returns successfully" May 9 23:45:45.781984 containerd[1473]: time="2025-05-09T23:45:45.781739157Z" level=info msg="StopPodSandbox for \"4b7aad288c67dff02c3a442cbd65a2193da3fa12f3a88542c6bcc0077a119ba6\"" May 9 23:45:45.781984 containerd[1473]: time="2025-05-09T23:45:45.781777279Z" level=info msg="Container to stop \"7a8c0203b195d9974ecad49f5e1bc086e0ed35572a374b04bf0e2631070470fb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 9 23:45:45.781984 containerd[1473]: time="2025-05-09T23:45:45.781787160Z" level=info msg="Container to stop \"1d32d98413a38203725a5f046cc128c5322228e65ee725e89467fcf7a25b9350\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 9 23:45:45.781984 containerd[1473]: time="2025-05-09T23:45:45.781795401Z" level=info msg="Container to stop \"817c133fdd5f3add324e348e76ace4f7fb90bd9c91ded2531a8dbecebdf2fe62\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 9 23:45:45.781984 containerd[1473]: time="2025-05-09T23:45:45.781803761Z" level=info msg="Container to stop \"9a34ed53ccbf8c7c3b962df2539b96f5df499c50ed323326828de16eb064a5c8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 9 23:45:45.781984 containerd[1473]: time="2025-05-09T23:45:45.781811682Z" level=info msg="Container to stop \"66ec40fa52044cc3f140b16becbb90acb3335c0c1a1103d0f40b0aa693b7fa1e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 9 23:45:45.783279 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4b7aad288c67dff02c3a442cbd65a2193da3fa12f3a88542c6bcc0077a119ba6-shm.mount: Deactivated successfully. May 9 23:45:45.788189 systemd[1]: cri-containerd-4b7aad288c67dff02c3a442cbd65a2193da3fa12f3a88542c6bcc0077a119ba6.scope: Deactivated successfully. May 9 23:45:45.811166 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b7aad288c67dff02c3a442cbd65a2193da3fa12f3a88542c6bcc0077a119ba6-rootfs.mount: Deactivated successfully. 
May 9 23:45:45.818010 containerd[1473]: time="2025-05-09T23:45:45.817886123Z" level=info msg="shim disconnected" id=4b7aad288c67dff02c3a442cbd65a2193da3fa12f3a88542c6bcc0077a119ba6 namespace=k8s.io May 9 23:45:45.818010 containerd[1473]: time="2025-05-09T23:45:45.818005331Z" level=warning msg="cleaning up after shim disconnected" id=4b7aad288c67dff02c3a442cbd65a2193da3fa12f3a88542c6bcc0077a119ba6 namespace=k8s.io May 9 23:45:45.818010 containerd[1473]: time="2025-05-09T23:45:45.818014291Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 23:45:45.828687 containerd[1473]: time="2025-05-09T23:45:45.828632198Z" level=info msg="TearDown network for sandbox \"4b7aad288c67dff02c3a442cbd65a2193da3fa12f3a88542c6bcc0077a119ba6\" successfully" May 9 23:45:45.828687 containerd[1473]: time="2025-05-09T23:45:45.828670281Z" level=info msg="StopPodSandbox for \"4b7aad288c67dff02c3a442cbd65a2193da3fa12f3a88542c6bcc0077a119ba6\" returns successfully" May 9 23:45:45.944045 kubelet[1771]: I0509 23:45:45.943556 1771 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vd2tm\" (UniqueName: \"kubernetes.io/projected/34c139b6-c37a-481d-8db7-c787120a4cdf-kube-api-access-vd2tm\") pod \"34c139b6-c37a-481d-8db7-c787120a4cdf\" (UID: \"34c139b6-c37a-481d-8db7-c787120a4cdf\") " May 9 23:45:45.944045 kubelet[1771]: I0509 23:45:45.943600 1771 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/34c139b6-c37a-481d-8db7-c787120a4cdf-host-proc-sys-kernel\") pod \"34c139b6-c37a-481d-8db7-c787120a4cdf\" (UID: \"34c139b6-c37a-481d-8db7-c787120a4cdf\") " May 9 23:45:45.944045 kubelet[1771]: I0509 23:45:45.943621 1771 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/34c139b6-c37a-481d-8db7-c787120a4cdf-cni-path\") pod \"34c139b6-c37a-481d-8db7-c787120a4cdf\" (UID: \"34c139b6-c37a-481d-8db7-c787120a4cdf\") " May 9 23:45:45.944045 kubelet[1771]: I0509 23:45:45.943650 1771 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/34c139b6-c37a-481d-8db7-c787120a4cdf-cilium-config-path\") pod \"34c139b6-c37a-481d-8db7-c787120a4cdf\" (UID: \"34c139b6-c37a-481d-8db7-c787120a4cdf\") " May 9 23:45:45.944045 kubelet[1771]: I0509 23:45:45.943665 1771 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/34c139b6-c37a-481d-8db7-c787120a4cdf-cilium-cgroup\") pod \"34c139b6-c37a-481d-8db7-c787120a4cdf\" (UID: \"34c139b6-c37a-481d-8db7-c787120a4cdf\") " May 9 23:45:45.944045 kubelet[1771]: I0509 23:45:45.943680 1771 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/34c139b6-c37a-481d-8db7-c787120a4cdf-hostproc\") pod \"34c139b6-c37a-481d-8db7-c787120a4cdf\" (UID: \"34c139b6-c37a-481d-8db7-c787120a4cdf\") " May 9 23:45:45.944329 kubelet[1771]: I0509 23:45:45.943696 1771 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/34c139b6-c37a-481d-8db7-c787120a4cdf-cilium-run\") pod \"34c139b6-c37a-481d-8db7-c787120a4cdf\" (UID: \"34c139b6-c37a-481d-8db7-c787120a4cdf\") " May 9 23:45:45.944329 kubelet[1771]: I0509 23:45:45.943711 1771 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/34c139b6-c37a-481d-8db7-c787120a4cdf-xtables-lock\") pod \"34c139b6-c37a-481d-8db7-c787120a4cdf\" (UID: \"34c139b6-c37a-481d-8db7-c787120a4cdf\") " May 9 23:45:45.944329 kubelet[1771]: I0509 23:45:45.943727 1771 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/34c139b6-c37a-481d-8db7-c787120a4cdf-etc-cni-netd\") pod \"34c139b6-c37a-481d-8db7-c787120a4cdf\" (UID: \"34c139b6-c37a-481d-8db7-c787120a4cdf\") " May 9 23:45:45.944329 kubelet[1771]: I0509 23:45:45.943746 1771 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/34c139b6-c37a-481d-8db7-c787120a4cdf-clustermesh-secrets\") pod \"34c139b6-c37a-481d-8db7-c787120a4cdf\" (UID: \"34c139b6-c37a-481d-8db7-c787120a4cdf\") " May 9 23:45:45.944329 kubelet[1771]: I0509 23:45:45.943762 1771 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/34c139b6-c37a-481d-8db7-c787120a4cdf-hubble-tls\") pod \"34c139b6-c37a-481d-8db7-c787120a4cdf\" (UID: \"34c139b6-c37a-481d-8db7-c787120a4cdf\") " May 9 23:45:45.944329 kubelet[1771]: I0509 23:45:45.943778 1771 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/34c139b6-c37a-481d-8db7-c787120a4cdf-host-proc-sys-net\") pod \"34c139b6-c37a-481d-8db7-c787120a4cdf\" (UID: \"34c139b6-c37a-481d-8db7-c787120a4cdf\") " May 9 23:45:45.944465 kubelet[1771]: I0509 23:45:45.943792 1771 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/34c139b6-c37a-481d-8db7-c787120a4cdf-bpf-maps\") pod \"34c139b6-c37a-481d-8db7-c787120a4cdf\" (UID: \"34c139b6-c37a-481d-8db7-c787120a4cdf\") " May 9 23:45:45.944465 kubelet[1771]: I0509 23:45:45.943813 1771 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34c139b6-c37a-481d-8db7-c787120a4cdf-lib-modules\") pod \"34c139b6-c37a-481d-8db7-c787120a4cdf\" (UID: \"34c139b6-c37a-481d-8db7-c787120a4cdf\") " May 9 23:45:45.944465 kubelet[1771]: I0509 23:45:45.943865 1771 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34c139b6-c37a-481d-8db7-c787120a4cdf-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "34c139b6-c37a-481d-8db7-c787120a4cdf" (UID: "34c139b6-c37a-481d-8db7-c787120a4cdf"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 9 23:45:45.944465 kubelet[1771]: I0509 23:45:45.943910 1771 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34c139b6-c37a-481d-8db7-c787120a4cdf-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "34c139b6-c37a-481d-8db7-c787120a4cdf" (UID: "34c139b6-c37a-481d-8db7-c787120a4cdf"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 9 23:45:45.944465 kubelet[1771]: I0509 23:45:45.943929 1771 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34c139b6-c37a-481d-8db7-c787120a4cdf-cni-path" (OuterVolumeSpecName: "cni-path") pod "34c139b6-c37a-481d-8db7-c787120a4cdf" (UID: "34c139b6-c37a-481d-8db7-c787120a4cdf"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 9 23:45:45.944583 kubelet[1771]: I0509 23:45:45.944313 1771 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34c139b6-c37a-481d-8db7-c787120a4cdf-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "34c139b6-c37a-481d-8db7-c787120a4cdf" (UID: "34c139b6-c37a-481d-8db7-c787120a4cdf"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 9 23:45:45.944583 kubelet[1771]: I0509 23:45:45.944348 1771 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34c139b6-c37a-481d-8db7-c787120a4cdf-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "34c139b6-c37a-481d-8db7-c787120a4cdf" (UID: "34c139b6-c37a-481d-8db7-c787120a4cdf"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 9 23:45:45.944583 kubelet[1771]: I0509 23:45:45.944366 1771 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34c139b6-c37a-481d-8db7-c787120a4cdf-hostproc" (OuterVolumeSpecName: "hostproc") pod "34c139b6-c37a-481d-8db7-c787120a4cdf" (UID: "34c139b6-c37a-481d-8db7-c787120a4cdf"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 9 23:45:45.944583 kubelet[1771]: I0509 23:45:45.944383 1771 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34c139b6-c37a-481d-8db7-c787120a4cdf-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "34c139b6-c37a-481d-8db7-c787120a4cdf" (UID: "34c139b6-c37a-481d-8db7-c787120a4cdf"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 9 23:45:45.944583 kubelet[1771]: I0509 23:45:45.944399 1771 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34c139b6-c37a-481d-8db7-c787120a4cdf-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "34c139b6-c37a-481d-8db7-c787120a4cdf" (UID: "34c139b6-c37a-481d-8db7-c787120a4cdf"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 9 23:45:45.945073 kubelet[1771]: I0509 23:45:45.944747 1771 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34c139b6-c37a-481d-8db7-c787120a4cdf-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "34c139b6-c37a-481d-8db7-c787120a4cdf" (UID: "34c139b6-c37a-481d-8db7-c787120a4cdf"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 9 23:45:45.945073 kubelet[1771]: I0509 23:45:45.944807 1771 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34c139b6-c37a-481d-8db7-c787120a4cdf-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "34c139b6-c37a-481d-8db7-c787120a4cdf" (UID: "34c139b6-c37a-481d-8db7-c787120a4cdf"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 9 23:45:45.945809 kubelet[1771]: I0509 23:45:45.945771 1771 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34c139b6-c37a-481d-8db7-c787120a4cdf-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "34c139b6-c37a-481d-8db7-c787120a4cdf" (UID: "34c139b6-c37a-481d-8db7-c787120a4cdf"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 9 23:45:45.946613 kubelet[1771]: I0509 23:45:45.946545 1771 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34c139b6-c37a-481d-8db7-c787120a4cdf-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "34c139b6-c37a-481d-8db7-c787120a4cdf" (UID: "34c139b6-c37a-481d-8db7-c787120a4cdf"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 9 23:45:45.947425 kubelet[1771]: I0509 23:45:45.947377 1771 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34c139b6-c37a-481d-8db7-c787120a4cdf-kube-api-access-vd2tm" (OuterVolumeSpecName: "kube-api-access-vd2tm") pod "34c139b6-c37a-481d-8db7-c787120a4cdf" (UID: "34c139b6-c37a-481d-8db7-c787120a4cdf"). InnerVolumeSpecName "kube-api-access-vd2tm". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 9 23:45:45.947444 systemd[1]: var-lib-kubelet-pods-34c139b6\x2dc37a\x2d481d\x2d8db7\x2dc787120a4cdf-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 9 23:45:45.948533 kubelet[1771]: I0509 23:45:45.948460 1771 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34c139b6-c37a-481d-8db7-c787120a4cdf-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "34c139b6-c37a-481d-8db7-c787120a4cdf" (UID: "34c139b6-c37a-481d-8db7-c787120a4cdf"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 9 23:45:46.044829 kubelet[1771]: I0509 23:45:46.044598 1771 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/34c139b6-c37a-481d-8db7-c787120a4cdf-hubble-tls\") on node \"10.0.0.40\" DevicePath \"\"" May 9 23:45:46.044829 kubelet[1771]: I0509 23:45:46.044630 1771 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/34c139b6-c37a-481d-8db7-c787120a4cdf-host-proc-sys-net\") on node \"10.0.0.40\" DevicePath \"\"" May 9 23:45:46.044829 kubelet[1771]: I0509 23:45:46.044641 1771 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/34c139b6-c37a-481d-8db7-c787120a4cdf-hostproc\") on node \"10.0.0.40\" DevicePath \"\"" May 9 23:45:46.044829 kubelet[1771]: I0509 23:45:46.044650 1771 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/34c139b6-c37a-481d-8db7-c787120a4cdf-cilium-run\") on node \"10.0.0.40\" DevicePath \"\"" May 9 23:45:46.044829 kubelet[1771]: I0509 23:45:46.044659 1771 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/34c139b6-c37a-481d-8db7-c787120a4cdf-xtables-lock\") on node \"10.0.0.40\" DevicePath \"\"" May 9 23:45:46.044829 kubelet[1771]: I0509 23:45:46.044667 1771 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/34c139b6-c37a-481d-8db7-c787120a4cdf-etc-cni-netd\") on node \"10.0.0.40\" DevicePath \"\"" May 9 23:45:46.044829 kubelet[1771]: I0509 23:45:46.044675 1771 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/34c139b6-c37a-481d-8db7-c787120a4cdf-clustermesh-secrets\") on node \"10.0.0.40\" DevicePath \"\"" May 9 23:45:46.044829 kubelet[1771]: I0509 23:45:46.044682 1771 reconciler_common.go:299] "Volume detached for 
volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/34c139b6-c37a-481d-8db7-c787120a4cdf-bpf-maps\") on node \"10.0.0.40\" DevicePath \"\"" May 9 23:45:46.045113 kubelet[1771]: I0509 23:45:46.044690 1771 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34c139b6-c37a-481d-8db7-c787120a4cdf-lib-modules\") on node \"10.0.0.40\" DevicePath \"\"" May 9 23:45:46.045113 kubelet[1771]: I0509 23:45:46.044698 1771 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/34c139b6-c37a-481d-8db7-c787120a4cdf-cilium-config-path\") on node \"10.0.0.40\" DevicePath \"\"" May 9 23:45:46.045113 kubelet[1771]: I0509 23:45:46.044706 1771 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vd2tm\" (UniqueName: \"kubernetes.io/projected/34c139b6-c37a-481d-8db7-c787120a4cdf-kube-api-access-vd2tm\") on node \"10.0.0.40\" DevicePath \"\"" May 9 23:45:46.045113 kubelet[1771]: I0509 23:45:46.044713 1771 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/34c139b6-c37a-481d-8db7-c787120a4cdf-host-proc-sys-kernel\") on node \"10.0.0.40\" DevicePath \"\"" May 9 23:45:46.045113 kubelet[1771]: I0509 23:45:46.044721 1771 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/34c139b6-c37a-481d-8db7-c787120a4cdf-cni-path\") on node \"10.0.0.40\" DevicePath \"\"" May 9 23:45:46.045113 kubelet[1771]: I0509 23:45:46.044729 1771 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/34c139b6-c37a-481d-8db7-c787120a4cdf-cilium-cgroup\") on node \"10.0.0.40\" DevicePath \"\"" May 9 23:45:46.296663 kubelet[1771]: E0509 23:45:46.296541 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:46.514874 kubelet[1771]: I0509 23:45:46.514839 1771 scope.go:117] "RemoveContainer" containerID="9a34ed53ccbf8c7c3b962df2539b96f5df499c50ed323326828de16eb064a5c8" May 9 23:45:46.517306 containerd[1473]: time="2025-05-09T23:45:46.517052309Z" level=info msg="RemoveContainer for \"9a34ed53ccbf8c7c3b962df2539b96f5df499c50ed323326828de16eb064a5c8\"" May 9 23:45:46.519557 systemd[1]: Removed slice kubepods-burstable-pod34c139b6_c37a_481d_8db7_c787120a4cdf.slice - libcontainer container kubepods-burstable-pod34c139b6_c37a_481d_8db7_c787120a4cdf.slice. May 9 23:45:46.519643 systemd[1]: kubepods-burstable-pod34c139b6_c37a_481d_8db7_c787120a4cdf.slice: Consumed 6.780s CPU time. 
May 9 23:45:46.522487 containerd[1473]: time="2025-05-09T23:45:46.522352211Z" level=info msg="RemoveContainer for \"9a34ed53ccbf8c7c3b962df2539b96f5df499c50ed323326828de16eb064a5c8\" returns successfully" May 9 23:45:46.522738 kubelet[1771]: I0509 23:45:46.522707 1771 scope.go:117] "RemoveContainer" containerID="66ec40fa52044cc3f140b16becbb90acb3335c0c1a1103d0f40b0aa693b7fa1e" May 9 23:45:46.524066 containerd[1473]: time="2025-05-09T23:45:46.523948674Z" level=info msg="RemoveContainer for \"66ec40fa52044cc3f140b16becbb90acb3335c0c1a1103d0f40b0aa693b7fa1e\"" May 9 23:45:46.526946 containerd[1473]: time="2025-05-09T23:45:46.526907825Z" level=info msg="RemoveContainer for \"66ec40fa52044cc3f140b16becbb90acb3335c0c1a1103d0f40b0aa693b7fa1e\" returns successfully" May 9 23:45:46.527171 kubelet[1771]: I0509 23:45:46.527144 1771 scope.go:117] "RemoveContainer" containerID="817c133fdd5f3add324e348e76ace4f7fb90bd9c91ded2531a8dbecebdf2fe62" May 9 23:45:46.530308 containerd[1473]: time="2025-05-09T23:45:46.530202877Z" level=info msg="RemoveContainer for \"817c133fdd5f3add324e348e76ace4f7fb90bd9c91ded2531a8dbecebdf2fe62\"" May 9 23:45:46.532763 containerd[1473]: time="2025-05-09T23:45:46.532719759Z" level=info msg="RemoveContainer for \"817c133fdd5f3add324e348e76ace4f7fb90bd9c91ded2531a8dbecebdf2fe62\" returns successfully" May 9 23:45:46.533183 kubelet[1771]: I0509 23:45:46.533092 1771 scope.go:117] "RemoveContainer" containerID="1d32d98413a38203725a5f046cc128c5322228e65ee725e89467fcf7a25b9350" May 9 23:45:46.537736 containerd[1473]: time="2025-05-09T23:45:46.537281573Z" level=info msg="RemoveContainer for \"1d32d98413a38203725a5f046cc128c5322228e65ee725e89467fcf7a25b9350\"" May 9 23:45:46.540403 containerd[1473]: time="2025-05-09T23:45:46.540363492Z" level=info msg="RemoveContainer for \"1d32d98413a38203725a5f046cc128c5322228e65ee725e89467fcf7a25b9350\" returns successfully" May 9 23:45:46.540727 kubelet[1771]: I0509 23:45:46.540695 1771 scope.go:117] "RemoveContainer" containerID="7a8c0203b195d9974ecad49f5e1bc086e0ed35572a374b04bf0e2631070470fb" May 9 23:45:46.541848 containerd[1473]: time="2025-05-09T23:45:46.541814386Z" level=info msg="RemoveContainer for \"7a8c0203b195d9974ecad49f5e1bc086e0ed35572a374b04bf0e2631070470fb\"" May 9 23:45:46.544016 containerd[1473]: time="2025-05-09T23:45:46.543978805Z" level=info msg="RemoveContainer for \"7a8c0203b195d9974ecad49f5e1bc086e0ed35572a374b04bf0e2631070470fb\" returns successfully" May 9 23:45:46.544210 kubelet[1771]: I0509 23:45:46.544175 1771 scope.go:117] "RemoveContainer" containerID="9a34ed53ccbf8c7c3b962df2539b96f5df499c50ed323326828de16eb064a5c8" May 9 23:45:46.544532 containerd[1473]: time="2025-05-09T23:45:46.544490038Z" level=error msg="ContainerStatus for \"9a34ed53ccbf8c7c3b962df2539b96f5df499c50ed323326828de16eb064a5c8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9a34ed53ccbf8c7c3b962df2539b96f5df499c50ed323326828de16eb064a5c8\": not found" May 9 23:45:46.544841 kubelet[1771]: E0509 23:45:46.544667 1771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9a34ed53ccbf8c7c3b962df2539b96f5df499c50ed323326828de16eb064a5c8\": not found" containerID="9a34ed53ccbf8c7c3b962df2539b96f5df499c50ed323326828de16eb064a5c8" May 9 23:45:46.544841 kubelet[1771]: I0509 23:45:46.544705 1771 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"9a34ed53ccbf8c7c3b962df2539b96f5df499c50ed323326828de16eb064a5c8"} err="failed to get container status \"9a34ed53ccbf8c7c3b962df2539b96f5df499c50ed323326828de16eb064a5c8\": rpc error: code = NotFound desc = an error occurred when try to find container \"9a34ed53ccbf8c7c3b962df2539b96f5df499c50ed323326828de16eb064a5c8\": not found" May 9 23:45:46.544841 kubelet[1771]: I0509 23:45:46.544759 1771 scope.go:117] "RemoveContainer" containerID="66ec40fa52044cc3f140b16becbb90acb3335c0c1a1103d0f40b0aa693b7fa1e" May 9 23:45:46.544985 containerd[1473]: time="2025-05-09T23:45:46.544939227Z" level=error msg="ContainerStatus for \"66ec40fa52044cc3f140b16becbb90acb3335c0c1a1103d0f40b0aa693b7fa1e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"66ec40fa52044cc3f140b16becbb90acb3335c0c1a1103d0f40b0aa693b7fa1e\": not found" May 9 23:45:46.545147 kubelet[1771]: E0509 23:45:46.545094 1771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"66ec40fa52044cc3f140b16becbb90acb3335c0c1a1103d0f40b0aa693b7fa1e\": not found" containerID="66ec40fa52044cc3f140b16becbb90acb3335c0c1a1103d0f40b0aa693b7fa1e" May 9 23:45:46.545147 kubelet[1771]: I0509 23:45:46.545136 1771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"66ec40fa52044cc3f140b16becbb90acb3335c0c1a1103d0f40b0aa693b7fa1e"} err="failed to get container status \"66ec40fa52044cc3f140b16becbb90acb3335c0c1a1103d0f40b0aa693b7fa1e\": rpc error: code = NotFound desc = an error occurred when try to find container \"66ec40fa52044cc3f140b16becbb90acb3335c0c1a1103d0f40b0aa693b7fa1e\": not found" May 9 23:45:46.545217 kubelet[1771]: I0509 23:45:46.545153 1771 scope.go:117] "RemoveContainer" containerID="817c133fdd5f3add324e348e76ace4f7fb90bd9c91ded2531a8dbecebdf2fe62" May 9 23:45:46.545544 containerd[1473]: time="2025-05-09T23:45:46.545518025Z" level=error msg="ContainerStatus for \"817c133fdd5f3add324e348e76ace4f7fb90bd9c91ded2531a8dbecebdf2fe62\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"817c133fdd5f3add324e348e76ace4f7fb90bd9c91ded2531a8dbecebdf2fe62\": not found" May 9 23:45:46.545759 kubelet[1771]: E0509 23:45:46.545645 1771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"817c133fdd5f3add324e348e76ace4f7fb90bd9c91ded2531a8dbecebdf2fe62\": not found" containerID="817c133fdd5f3add324e348e76ace4f7fb90bd9c91ded2531a8dbecebdf2fe62" May 9 23:45:46.545759 kubelet[1771]: I0509 23:45:46.545673 1771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"817c133fdd5f3add324e348e76ace4f7fb90bd9c91ded2531a8dbecebdf2fe62"} err="failed to get container status \"817c133fdd5f3add324e348e76ace4f7fb90bd9c91ded2531a8dbecebdf2fe62\": rpc error: code = NotFound desc = an error occurred when try to find container \"817c133fdd5f3add324e348e76ace4f7fb90bd9c91ded2531a8dbecebdf2fe62\": not found" May 9 23:45:46.545759 kubelet[1771]: I0509 23:45:46.545694 1771 scope.go:117] "RemoveContainer" containerID="1d32d98413a38203725a5f046cc128c5322228e65ee725e89467fcf7a25b9350" May 9 23:45:46.545879 containerd[1473]: time="2025-05-09T23:45:46.545852086Z" level=error msg="ContainerStatus for \"1d32d98413a38203725a5f046cc128c5322228e65ee725e89467fcf7a25b9350\" failed" error="rpc error: 
code = NotFound desc = an error occurred when try to find container \"1d32d98413a38203725a5f046cc128c5322228e65ee725e89467fcf7a25b9350\": not found" May 9 23:45:46.545996 kubelet[1771]: E0509 23:45:46.545971 1771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1d32d98413a38203725a5f046cc128c5322228e65ee725e89467fcf7a25b9350\": not found" containerID="1d32d98413a38203725a5f046cc128c5322228e65ee725e89467fcf7a25b9350" May 9 23:45:46.546039 kubelet[1771]: I0509 23:45:46.546001 1771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1d32d98413a38203725a5f046cc128c5322228e65ee725e89467fcf7a25b9350"} err="failed to get container status \"1d32d98413a38203725a5f046cc128c5322228e65ee725e89467fcf7a25b9350\": rpc error: code = NotFound desc = an error occurred when try to find container \"1d32d98413a38203725a5f046cc128c5322228e65ee725e89467fcf7a25b9350\": not found" May 9 23:45:46.546039 kubelet[1771]: I0509 23:45:46.546020 1771 scope.go:117] "RemoveContainer" containerID="7a8c0203b195d9974ecad49f5e1bc086e0ed35572a374b04bf0e2631070470fb" May 9 23:45:46.546285 containerd[1473]: time="2025-05-09T23:45:46.546247192Z" level=error msg="ContainerStatus for \"7a8c0203b195d9974ecad49f5e1bc086e0ed35572a374b04bf0e2631070470fb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7a8c0203b195d9974ecad49f5e1bc086e0ed35572a374b04bf0e2631070470fb\": not found" May 9 23:45:46.546377 kubelet[1771]: E0509 23:45:46.546364 1771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7a8c0203b195d9974ecad49f5e1bc086e0ed35572a374b04bf0e2631070470fb\": not found" containerID="7a8c0203b195d9974ecad49f5e1bc086e0ed35572a374b04bf0e2631070470fb" May 9 23:45:46.546413 kubelet[1771]: I0509 23:45:46.546383 1771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7a8c0203b195d9974ecad49f5e1bc086e0ed35572a374b04bf0e2631070470fb"} err="failed to get container status \"7a8c0203b195d9974ecad49f5e1bc086e0ed35572a374b04bf0e2631070470fb\": rpc error: code = NotFound desc = an error occurred when try to find container \"7a8c0203b195d9974ecad49f5e1bc086e0ed35572a374b04bf0e2631070470fb\": not found" May 9 23:45:46.686106 systemd[1]: var-lib-kubelet-pods-34c139b6\x2dc37a\x2d481d\x2d8db7\x2dc787120a4cdf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvd2tm.mount: Deactivated successfully. May 9 23:45:46.686215 systemd[1]: var-lib-kubelet-pods-34c139b6\x2dc37a\x2d481d\x2d8db7\x2dc787120a4cdf-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
May 9 23:45:47.297370 kubelet[1771]: E0509 23:45:47.297320 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:47.395722 kubelet[1771]: I0509 23:45:47.395674 1771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34c139b6-c37a-481d-8db7-c787120a4cdf" path="/var/lib/kubelet/pods/34c139b6-c37a-481d-8db7-c787120a4cdf/volumes" May 9 23:45:48.297885 kubelet[1771]: E0509 23:45:48.297831 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:48.339134 kubelet[1771]: I0509 23:45:48.339095 1771 memory_manager.go:355] "RemoveStaleState removing state" podUID="34c139b6-c37a-481d-8db7-c787120a4cdf" containerName="cilium-agent" May 9 23:45:48.345824 systemd[1]: Created slice kubepods-besteffort-pod8d2c7867_30d6_41f9_9a10_23624a3d5433.slice - libcontainer container kubepods-besteffort-pod8d2c7867_30d6_41f9_9a10_23624a3d5433.slice. May 9 23:45:48.367970 kubelet[1771]: W0509 23:45:48.367926 1771 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:10.0.0.40" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.40' and this object May 9 23:45:48.368072 kubelet[1771]: E0509 23:45:48.367988 1771 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:10.0.0.40\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '10.0.0.40' and this object" logger="UnhandledError" May 9 23:45:48.368072 kubelet[1771]: I0509 23:45:48.368039 1771 status_manager.go:890] "Failed to get status for pod" podUID="a8618fb0-a10e-4148-87f9-30d5f7f02665" pod="kube-system/cilium-5stwh" err="pods \"cilium-5stwh\" is forbidden: User \"system:node:10.0.0.40\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '10.0.0.40' and this object" May 9 23:45:48.368072 kubelet[1771]: W0509 23:45:48.368055 1771 reflector.go:569] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:10.0.0.40" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.40' and this object May 9 23:45:48.368236 kubelet[1771]: E0509 23:45:48.368089 1771 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:10.0.0.40\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '10.0.0.40' and this object" logger="UnhandledError" May 9 23:45:48.373822 systemd[1]: Created slice kubepods-burstable-poda8618fb0_a10e_4148_87f9_30d5f7f02665.slice - libcontainer container kubepods-burstable-poda8618fb0_a10e_4148_87f9_30d5f7f02665.slice. 
May 9 23:45:48.457183 kubelet[1771]: I0509 23:45:48.457140 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhmmn\" (UniqueName: \"kubernetes.io/projected/8d2c7867-30d6-41f9-9a10-23624a3d5433-kube-api-access-zhmmn\") pod \"cilium-operator-6c4d7847fc-qtqh6\" (UID: \"8d2c7867-30d6-41f9-9a10-23624a3d5433\") " pod="kube-system/cilium-operator-6c4d7847fc-qtqh6" May 9 23:45:48.457183 kubelet[1771]: I0509 23:45:48.457186 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8d2c7867-30d6-41f9-9a10-23624a3d5433-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-qtqh6\" (UID: \"8d2c7867-30d6-41f9-9a10-23624a3d5433\") " pod="kube-system/cilium-operator-6c4d7847fc-qtqh6" May 9 23:45:48.557822 kubelet[1771]: I0509 23:45:48.557695 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a8618fb0-a10e-4148-87f9-30d5f7f02665-hostproc\") pod \"cilium-5stwh\" (UID: \"a8618fb0-a10e-4148-87f9-30d5f7f02665\") " pod="kube-system/cilium-5stwh" May 9 23:45:48.557822 kubelet[1771]: I0509 23:45:48.557743 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a8618fb0-a10e-4148-87f9-30d5f7f02665-cni-path\") pod \"cilium-5stwh\" (UID: \"a8618fb0-a10e-4148-87f9-30d5f7f02665\") " pod="kube-system/cilium-5stwh" May 9 23:45:48.557822 kubelet[1771]: I0509 23:45:48.557762 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a8618fb0-a10e-4148-87f9-30d5f7f02665-etc-cni-netd\") pod \"cilium-5stwh\" (UID: \"a8618fb0-a10e-4148-87f9-30d5f7f02665\") " pod="kube-system/cilium-5stwh" May 9 23:45:48.557822 kubelet[1771]: I0509 23:45:48.557783 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a8618fb0-a10e-4148-87f9-30d5f7f02665-cilium-config-path\") pod \"cilium-5stwh\" (UID: \"a8618fb0-a10e-4148-87f9-30d5f7f02665\") " pod="kube-system/cilium-5stwh" May 9 23:45:48.557822 kubelet[1771]: I0509 23:45:48.557798 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a8618fb0-a10e-4148-87f9-30d5f7f02665-hubble-tls\") pod \"cilium-5stwh\" (UID: \"a8618fb0-a10e-4148-87f9-30d5f7f02665\") " pod="kube-system/cilium-5stwh" May 9 23:45:48.557822 kubelet[1771]: I0509 23:45:48.557813 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kls9m\" (UniqueName: \"kubernetes.io/projected/a8618fb0-a10e-4148-87f9-30d5f7f02665-kube-api-access-kls9m\") pod \"cilium-5stwh\" (UID: \"a8618fb0-a10e-4148-87f9-30d5f7f02665\") " pod="kube-system/cilium-5stwh" May 9 23:45:48.558074 kubelet[1771]: I0509 23:45:48.557843 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a8618fb0-a10e-4148-87f9-30d5f7f02665-bpf-maps\") pod \"cilium-5stwh\" (UID: \"a8618fb0-a10e-4148-87f9-30d5f7f02665\") " pod="kube-system/cilium-5stwh" May 9 23:45:48.558074 kubelet[1771]: I0509 23:45:48.557883 1771 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a8618fb0-a10e-4148-87f9-30d5f7f02665-cilium-cgroup\") pod \"cilium-5stwh\" (UID: \"a8618fb0-a10e-4148-87f9-30d5f7f02665\") " pod="kube-system/cilium-5stwh" May 9 23:45:48.558074 kubelet[1771]: I0509 23:45:48.557996 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a8618fb0-a10e-4148-87f9-30d5f7f02665-lib-modules\") pod \"cilium-5stwh\" (UID: \"a8618fb0-a10e-4148-87f9-30d5f7f02665\") " pod="kube-system/cilium-5stwh" May 9 23:45:48.558074 kubelet[1771]: I0509 23:45:48.558014 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a8618fb0-a10e-4148-87f9-30d5f7f02665-host-proc-sys-net\") pod \"cilium-5stwh\" (UID: \"a8618fb0-a10e-4148-87f9-30d5f7f02665\") " pod="kube-system/cilium-5stwh" May 9 23:45:48.558074 kubelet[1771]: I0509 23:45:48.558061 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a8618fb0-a10e-4148-87f9-30d5f7f02665-cilium-run\") pod \"cilium-5stwh\" (UID: \"a8618fb0-a10e-4148-87f9-30d5f7f02665\") " pod="kube-system/cilium-5stwh" May 9 23:45:48.558185 kubelet[1771]: I0509 23:45:48.558083 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a8618fb0-a10e-4148-87f9-30d5f7f02665-xtables-lock\") pod \"cilium-5stwh\" (UID: \"a8618fb0-a10e-4148-87f9-30d5f7f02665\") " pod="kube-system/cilium-5stwh" May 9 23:45:48.558185 kubelet[1771]: I0509 23:45:48.558104 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a8618fb0-a10e-4148-87f9-30d5f7f02665-clustermesh-secrets\") pod \"cilium-5stwh\" (UID: \"a8618fb0-a10e-4148-87f9-30d5f7f02665\") " pod="kube-system/cilium-5stwh" May 9 23:45:48.558185 kubelet[1771]: I0509 23:45:48.558119 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a8618fb0-a10e-4148-87f9-30d5f7f02665-cilium-ipsec-secrets\") pod \"cilium-5stwh\" (UID: \"a8618fb0-a10e-4148-87f9-30d5f7f02665\") " pod="kube-system/cilium-5stwh" May 9 23:45:48.558185 kubelet[1771]: I0509 23:45:48.558134 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a8618fb0-a10e-4148-87f9-30d5f7f02665-host-proc-sys-kernel\") pod \"cilium-5stwh\" (UID: \"a8618fb0-a10e-4148-87f9-30d5f7f02665\") " pod="kube-system/cilium-5stwh" May 9 23:45:48.647755 kubelet[1771]: E0509 23:45:48.647692 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:45:48.648293 containerd[1473]: time="2025-05-09T23:45:48.648258526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-qtqh6,Uid:8d2c7867-30d6-41f9-9a10-23624a3d5433,Namespace:kube-system,Attempt:0,}" May 9 23:45:48.667231 containerd[1473]: time="2025-05-09T23:45:48.666906094Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:45:48.667231 containerd[1473]: time="2025-05-09T23:45:48.666978338Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:45:48.667231 containerd[1473]: time="2025-05-09T23:45:48.666995859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:45:48.667231 containerd[1473]: time="2025-05-09T23:45:48.667173110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:45:48.687185 systemd[1]: Started cri-containerd-4b129fb95a4b0565050bee30458d75064ea38f74e808be3a26d7e9c47987e4a5.scope - libcontainer container 4b129fb95a4b0565050bee30458d75064ea38f74e808be3a26d7e9c47987e4a5. May 9 23:45:48.713497 containerd[1473]: time="2025-05-09T23:45:48.713436110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-qtqh6,Uid:8d2c7867-30d6-41f9-9a10-23624a3d5433,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b129fb95a4b0565050bee30458d75064ea38f74e808be3a26d7e9c47987e4a5\"" May 9 23:45:48.714215 kubelet[1771]: E0509 23:45:48.714187 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:45:48.715258 containerd[1473]: time="2025-05-09T23:45:48.715231339Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 9 23:45:49.284917 kubelet[1771]: E0509 23:45:49.284540 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:45:49.285340 containerd[1473]: time="2025-05-09T23:45:49.285055335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5stwh,Uid:a8618fb0-a10e-4148-87f9-30d5f7f02665,Namespace:kube-system,Attempt:0,}" May 9 23:45:49.298856 kubelet[1771]: E0509 23:45:49.298736 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:49.304769 containerd[1473]: time="2025-05-09T23:45:49.304666645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:45:49.304769 containerd[1473]: time="2025-05-09T23:45:49.304733689Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:45:49.304769 containerd[1473]: time="2025-05-09T23:45:49.304745290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:45:49.305057 containerd[1473]: time="2025-05-09T23:45:49.304830334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:45:49.327166 systemd[1]: Started cri-containerd-654e01e6dc66cc4fcd21c9458cc7f452b0ceea026e16867abfb2f09faf6e45a5.scope - libcontainer container 654e01e6dc66cc4fcd21c9458cc7f452b0ceea026e16867abfb2f09faf6e45a5. 
May 9 23:45:49.349141 containerd[1473]: time="2025-05-09T23:45:49.349079129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5stwh,Uid:a8618fb0-a10e-4148-87f9-30d5f7f02665,Namespace:kube-system,Attempt:0,} returns sandbox id \"654e01e6dc66cc4fcd21c9458cc7f452b0ceea026e16867abfb2f09faf6e45a5\"" May 9 23:45:49.349801 kubelet[1771]: E0509 23:45:49.349774 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:45:49.351773 containerd[1473]: time="2025-05-09T23:45:49.351699522Z" level=info msg="CreateContainer within sandbox \"654e01e6dc66cc4fcd21c9458cc7f452b0ceea026e16867abfb2f09faf6e45a5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 9 23:45:49.374083 containerd[1473]: time="2025-05-09T23:45:49.374013110Z" level=info msg="CreateContainer within sandbox \"654e01e6dc66cc4fcd21c9458cc7f452b0ceea026e16867abfb2f09faf6e45a5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cd5215dff36e690a7f0b199d9ff44b71029a76461f22b59472db2b404802d86a\"" May 9 23:45:49.374797 containerd[1473]: time="2025-05-09T23:45:49.374759634Z" level=info msg="StartContainer for \"cd5215dff36e690a7f0b199d9ff44b71029a76461f22b59472db2b404802d86a\"" May 9 23:45:49.401281 systemd[1]: Started cri-containerd-cd5215dff36e690a7f0b199d9ff44b71029a76461f22b59472db2b404802d86a.scope - libcontainer container cd5215dff36e690a7f0b199d9ff44b71029a76461f22b59472db2b404802d86a. May 9 23:45:49.410038 kubelet[1771]: E0509 23:45:49.409986 1771 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 9 23:45:49.439574 containerd[1473]: time="2025-05-09T23:45:49.437750887Z" level=info msg="StartContainer for \"cd5215dff36e690a7f0b199d9ff44b71029a76461f22b59472db2b404802d86a\" returns successfully" May 9 23:45:49.513133 systemd[1]: cri-containerd-cd5215dff36e690a7f0b199d9ff44b71029a76461f22b59472db2b404802d86a.scope: Deactivated successfully. 
May 9 23:45:49.522927 kubelet[1771]: E0509 23:45:49.522765 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:45:49.548809 containerd[1473]: time="2025-05-09T23:45:49.548666710Z" level=info msg="shim disconnected" id=cd5215dff36e690a7f0b199d9ff44b71029a76461f22b59472db2b404802d86a namespace=k8s.io May 9 23:45:49.548809 containerd[1473]: time="2025-05-09T23:45:49.548725393Z" level=warning msg="cleaning up after shim disconnected" id=cd5215dff36e690a7f0b199d9ff44b71029a76461f22b59472db2b404802d86a namespace=k8s.io May 9 23:45:49.548809 containerd[1473]: time="2025-05-09T23:45:49.548735474Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 23:45:49.790309 containerd[1473]: time="2025-05-09T23:45:49.790140426Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:45:49.791214 containerd[1473]: time="2025-05-09T23:45:49.791165126Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" May 9 23:45:49.796650 containerd[1473]: time="2025-05-09T23:45:49.796590484Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:45:49.798402 containerd[1473]: time="2025-05-09T23:45:49.798346427Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.08296832s" May 9 23:45:49.798402 containerd[1473]: time="2025-05-09T23:45:49.798386710Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 9 23:45:49.800663 containerd[1473]: time="2025-05-09T23:45:49.800481432Z" level=info msg="CreateContainer within sandbox \"4b129fb95a4b0565050bee30458d75064ea38f74e808be3a26d7e9c47987e4a5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 9 23:45:49.812788 containerd[1473]: time="2025-05-09T23:45:49.812694868Z" level=info msg="CreateContainer within sandbox \"4b129fb95a4b0565050bee30458d75064ea38f74e808be3a26d7e9c47987e4a5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"53c8de5c8045a3671df80749905a574ce83646ba34adacbfbee6e021d757ca1a\"" May 9 23:45:49.813574 containerd[1473]: time="2025-05-09T23:45:49.813327305Z" level=info msg="StartContainer for \"53c8de5c8045a3671df80749905a574ce83646ba34adacbfbee6e021d757ca1a\"" May 9 23:45:49.841186 systemd[1]: Started cri-containerd-53c8de5c8045a3671df80749905a574ce83646ba34adacbfbee6e021d757ca1a.scope - libcontainer container 53c8de5c8045a3671df80749905a574ce83646ba34adacbfbee6e021d757ca1a. 
May 9 23:45:49.896681 kubelet[1771]: I0509 23:45:49.896614 1771 setters.go:602] "Node became not ready" node="10.0.0.40" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-09T23:45:49Z","lastTransitionTime":"2025-05-09T23:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 9 23:45:49.949733 containerd[1473]: time="2025-05-09T23:45:49.949668418Z" level=info msg="StartContainer for \"53c8de5c8045a3671df80749905a574ce83646ba34adacbfbee6e021d757ca1a\" returns successfully" May 9 23:45:50.299068 kubelet[1771]: E0509 23:45:50.299007 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:50.527012 kubelet[1771]: E0509 23:45:50.526910 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:45:50.528369 kubelet[1771]: E0509 23:45:50.528337 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:45:50.529363 containerd[1473]: time="2025-05-09T23:45:50.529326193Z" level=info msg="CreateContainer within sandbox \"654e01e6dc66cc4fcd21c9458cc7f452b0ceea026e16867abfb2f09faf6e45a5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 9 23:45:50.607739 kubelet[1771]: I0509 23:45:50.607585 1771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-qtqh6" podStartSLOduration=1.5233874059999999 podStartE2EDuration="2.607566917s" podCreationTimestamp="2025-05-09 23:45:48 +0000 UTC" firstStartedPulling="2025-05-09 23:45:48.714973003 +0000 UTC m=+50.096495009" lastFinishedPulling="2025-05-09 23:45:49.799152514 +0000 UTC m=+51.180674520" observedRunningTime="2025-05-09 23:45:50.607208937 +0000 UTC m=+51.988730943" watchObservedRunningTime="2025-05-09 23:45:50.607566917 +0000 UTC m=+51.989088923" May 9 23:45:50.611659 containerd[1473]: time="2025-05-09T23:45:50.611564344Z" level=info msg="CreateContainer within sandbox \"654e01e6dc66cc4fcd21c9458cc7f452b0ceea026e16867abfb2f09faf6e45a5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f1f2f0c344ed9dcdd59c64a8b09635cf6272dcbfba66726553d33b8a69826a98\"" May 9 23:45:50.611646 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount382080996.mount: Deactivated successfully. May 9 23:45:50.612694 containerd[1473]: time="2025-05-09T23:45:50.612444954Z" level=info msg="StartContainer for \"f1f2f0c344ed9dcdd59c64a8b09635cf6272dcbfba66726553d33b8a69826a98\"" May 9 23:45:50.653198 systemd[1]: Started cri-containerd-f1f2f0c344ed9dcdd59c64a8b09635cf6272dcbfba66726553d33b8a69826a98.scope - libcontainer container f1f2f0c344ed9dcdd59c64a8b09635cf6272dcbfba66726553d33b8a69826a98. May 9 23:45:50.682487 containerd[1473]: time="2025-05-09T23:45:50.682429729Z" level=info msg="StartContainer for \"f1f2f0c344ed9dcdd59c64a8b09635cf6272dcbfba66726553d33b8a69826a98\" returns successfully" May 9 23:45:50.702853 systemd[1]: cri-containerd-f1f2f0c344ed9dcdd59c64a8b09635cf6272dcbfba66726553d33b8a69826a98.scope: Deactivated successfully. 
May 9 23:45:50.725385 containerd[1473]: time="2025-05-09T23:45:50.725311404Z" level=info msg="shim disconnected" id=f1f2f0c344ed9dcdd59c64a8b09635cf6272dcbfba66726553d33b8a69826a98 namespace=k8s.io May 9 23:45:50.725385 containerd[1473]: time="2025-05-09T23:45:50.725377328Z" level=warning msg="cleaning up after shim disconnected" id=f1f2f0c344ed9dcdd59c64a8b09635cf6272dcbfba66726553d33b8a69826a98 namespace=k8s.io May 9 23:45:50.725385 containerd[1473]: time="2025-05-09T23:45:50.725386408Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 23:45:51.299966 kubelet[1771]: E0509 23:45:51.299916 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:51.532436 kubelet[1771]: E0509 23:45:51.532063 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:45:51.532436 kubelet[1771]: E0509 23:45:51.532216 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:45:51.535077 containerd[1473]: time="2025-05-09T23:45:51.534932175Z" level=info msg="CreateContainer within sandbox \"654e01e6dc66cc4fcd21c9458cc7f452b0ceea026e16867abfb2f09faf6e45a5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 9 23:45:51.568625 containerd[1473]: time="2025-05-09T23:45:51.568498822Z" level=info msg="CreateContainer within sandbox \"654e01e6dc66cc4fcd21c9458cc7f452b0ceea026e16867abfb2f09faf6e45a5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cb25dead5ffda1faf8c95121d9ab9f5439036991db31f3ad87b28ee63993aa0e\"" May 9 23:45:51.570400 containerd[1473]: time="2025-05-09T23:45:51.569104695Z" level=info msg="StartContainer for \"cb25dead5ffda1faf8c95121d9ab9f5439036991db31f3ad87b28ee63993aa0e\"" May 9 23:45:51.569708 systemd[1]: run-containerd-runc-k8s.io-f1f2f0c344ed9dcdd59c64a8b09635cf6272dcbfba66726553d33b8a69826a98-runc.LbdEya.mount: Deactivated successfully. May 9 23:45:51.569802 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f1f2f0c344ed9dcdd59c64a8b09635cf6272dcbfba66726553d33b8a69826a98-rootfs.mount: Deactivated successfully. May 9 23:45:51.606161 systemd[1]: Started cri-containerd-cb25dead5ffda1faf8c95121d9ab9f5439036991db31f3ad87b28ee63993aa0e.scope - libcontainer container cb25dead5ffda1faf8c95121d9ab9f5439036991db31f3ad87b28ee63993aa0e. May 9 23:45:51.630895 systemd[1]: cri-containerd-cb25dead5ffda1faf8c95121d9ab9f5439036991db31f3ad87b28ee63993aa0e.scope: Deactivated successfully. May 9 23:45:51.632565 containerd[1473]: time="2025-05-09T23:45:51.632517624Z" level=info msg="StartContainer for \"cb25dead5ffda1faf8c95121d9ab9f5439036991db31f3ad87b28ee63993aa0e\" returns successfully" May 9 23:45:51.648131 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb25dead5ffda1faf8c95121d9ab9f5439036991db31f3ad87b28ee63993aa0e-rootfs.mount: Deactivated successfully. 
May 9 23:45:51.653232 containerd[1473]: time="2025-05-09T23:45:51.652944748Z" level=info msg="shim disconnected" id=cb25dead5ffda1faf8c95121d9ab9f5439036991db31f3ad87b28ee63993aa0e namespace=k8s.io May 9 23:45:51.653232 containerd[1473]: time="2025-05-09T23:45:51.653107077Z" level=warning msg="cleaning up after shim disconnected" id=cb25dead5ffda1faf8c95121d9ab9f5439036991db31f3ad87b28ee63993aa0e namespace=k8s.io May 9 23:45:51.653232 containerd[1473]: time="2025-05-09T23:45:51.653115277Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 23:45:52.300406 kubelet[1771]: E0509 23:45:52.300360 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:52.535860 kubelet[1771]: E0509 23:45:52.535741 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:45:52.537524 containerd[1473]: time="2025-05-09T23:45:52.537486631Z" level=info msg="CreateContainer within sandbox \"654e01e6dc66cc4fcd21c9458cc7f452b0ceea026e16867abfb2f09faf6e45a5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 9 23:45:52.552698 containerd[1473]: time="2025-05-09T23:45:52.552599117Z" level=info msg="CreateContainer within sandbox \"654e01e6dc66cc4fcd21c9458cc7f452b0ceea026e16867abfb2f09faf6e45a5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"755fca9ef3ac76631123d5a3c205af1f48f25f7bbcc38e548bc9d7083ce77da3\"" May 9 23:45:52.553504 containerd[1473]: time="2025-05-09T23:45:52.553435721Z" level=info msg="StartContainer for \"755fca9ef3ac76631123d5a3c205af1f48f25f7bbcc38e548bc9d7083ce77da3\"" May 9 23:45:52.584167 systemd[1]: Started cri-containerd-755fca9ef3ac76631123d5a3c205af1f48f25f7bbcc38e548bc9d7083ce77da3.scope - libcontainer container 755fca9ef3ac76631123d5a3c205af1f48f25f7bbcc38e548bc9d7083ce77da3. May 9 23:45:52.604213 systemd[1]: cri-containerd-755fca9ef3ac76631123d5a3c205af1f48f25f7bbcc38e548bc9d7083ce77da3.scope: Deactivated successfully. May 9 23:45:52.606417 containerd[1473]: time="2025-05-09T23:45:52.606320380Z" level=info msg="StartContainer for \"755fca9ef3ac76631123d5a3c205af1f48f25f7bbcc38e548bc9d7083ce77da3\" returns successfully" May 9 23:45:52.623227 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-755fca9ef3ac76631123d5a3c205af1f48f25f7bbcc38e548bc9d7083ce77da3-rootfs.mount: Deactivated successfully. 
May 9 23:45:52.630440 containerd[1473]: time="2025-05-09T23:45:52.630268536Z" level=info msg="shim disconnected" id=755fca9ef3ac76631123d5a3c205af1f48f25f7bbcc38e548bc9d7083ce77da3 namespace=k8s.io May 9 23:45:52.630440 containerd[1473]: time="2025-05-09T23:45:52.630324499Z" level=warning msg="cleaning up after shim disconnected" id=755fca9ef3ac76631123d5a3c205af1f48f25f7bbcc38e548bc9d7083ce77da3 namespace=k8s.io May 9 23:45:52.630440 containerd[1473]: time="2025-05-09T23:45:52.630334700Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 23:45:53.300712 kubelet[1771]: E0509 23:45:53.300665 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:53.542077 kubelet[1771]: E0509 23:45:53.542046 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:45:53.546843 containerd[1473]: time="2025-05-09T23:45:53.546717655Z" level=info msg="CreateContainer within sandbox \"654e01e6dc66cc4fcd21c9458cc7f452b0ceea026e16867abfb2f09faf6e45a5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 9 23:45:53.561769 containerd[1473]: time="2025-05-09T23:45:53.561649186Z" level=info msg="CreateContainer within sandbox \"654e01e6dc66cc4fcd21c9458cc7f452b0ceea026e16867abfb2f09faf6e45a5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"21d74b40a0ec75e076c07014f819d33133fba6142ec361f2ad84c25f62e99bc1\"" May 9 23:45:53.562460 containerd[1473]: time="2025-05-09T23:45:53.562436827Z" level=info msg="StartContainer for \"21d74b40a0ec75e076c07014f819d33133fba6142ec361f2ad84c25f62e99bc1\"" May 9 23:45:53.586152 systemd[1]: Started cri-containerd-21d74b40a0ec75e076c07014f819d33133fba6142ec361f2ad84c25f62e99bc1.scope - libcontainer container 21d74b40a0ec75e076c07014f819d33133fba6142ec361f2ad84c25f62e99bc1. 
May 9 23:45:53.612486 containerd[1473]: time="2025-05-09T23:45:53.612440649Z" level=info msg="StartContainer for \"21d74b40a0ec75e076c07014f819d33133fba6142ec361f2ad84c25f62e99bc1\" returns successfully" May 9 23:45:53.868992 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) May 9 23:45:54.301801 kubelet[1771]: E0509 23:45:54.301747 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:54.546618 kubelet[1771]: E0509 23:45:54.546318 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:45:54.565113 kubelet[1771]: I0509 23:45:54.564608 1771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5stwh" podStartSLOduration=6.564590085 podStartE2EDuration="6.564590085s" podCreationTimestamp="2025-05-09 23:45:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 23:45:54.563826407 +0000 UTC m=+55.945348413" watchObservedRunningTime="2025-05-09 23:45:54.564590085 +0000 UTC m=+55.946112091" May 9 23:45:55.302316 kubelet[1771]: E0509 23:45:55.302260 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:55.548494 kubelet[1771]: E0509 23:45:55.548444 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:45:56.302617 kubelet[1771]: E0509 23:45:56.302552 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:56.756716 systemd-networkd[1396]: lxc_health: Link UP May 9 23:45:56.767158 systemd-networkd[1396]: lxc_health: Gained carrier May 9 23:45:57.136657 systemd[1]: run-containerd-runc-k8s.io-21d74b40a0ec75e076c07014f819d33133fba6142ec361f2ad84c25f62e99bc1-runc.sKoCtv.mount: Deactivated successfully. 
May 9 23:45:57.287380 kubelet[1771]: E0509 23:45:57.287196 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:45:57.303531 kubelet[1771]: E0509 23:45:57.303481 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:57.552934 kubelet[1771]: E0509 23:45:57.552897 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:45:58.304064 kubelet[1771]: E0509 23:45:58.304013 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:58.521136 systemd-networkd[1396]: lxc_health: Gained IPv6LL May 9 23:45:58.554446 kubelet[1771]: E0509 23:45:58.554113 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:45:59.266852 kubelet[1771]: E0509 23:45:59.266790 1771 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:45:59.285987 containerd[1473]: time="2025-05-09T23:45:59.285916927Z" level=info msg="StopPodSandbox for \"4b7aad288c67dff02c3a442cbd65a2193da3fa12f3a88542c6bcc0077a119ba6\"" May 9 23:45:59.286377 containerd[1473]: time="2025-05-09T23:45:59.286018132Z" level=info msg="TearDown network for sandbox \"4b7aad288c67dff02c3a442cbd65a2193da3fa12f3a88542c6bcc0077a119ba6\" successfully" May 9 23:45:59.286377 containerd[1473]: time="2025-05-09T23:45:59.286030212Z" level=info msg="StopPodSandbox for \"4b7aad288c67dff02c3a442cbd65a2193da3fa12f3a88542c6bcc0077a119ba6\" returns successfully" May 9 23:45:59.286489 containerd[1473]: time="2025-05-09T23:45:59.286377347Z" level=info msg="RemovePodSandbox for \"4b7aad288c67dff02c3a442cbd65a2193da3fa12f3a88542c6bcc0077a119ba6\"" May 9 23:45:59.286489 containerd[1473]: time="2025-05-09T23:45:59.286403868Z" level=info msg="Forcibly stopping sandbox \"4b7aad288c67dff02c3a442cbd65a2193da3fa12f3a88542c6bcc0077a119ba6\"" May 9 23:45:59.286489 containerd[1473]: time="2025-05-09T23:45:59.286451750Z" level=info msg="TearDown network for sandbox \"4b7aad288c67dff02c3a442cbd65a2193da3fa12f3a88542c6bcc0077a119ba6\" successfully" May 9 23:45:59.295751 containerd[1473]: time="2025-05-09T23:45:59.295680464Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4b7aad288c67dff02c3a442cbd65a2193da3fa12f3a88542c6bcc0077a119ba6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 9 23:45:59.295902 containerd[1473]: time="2025-05-09T23:45:59.295772028Z" level=info msg="RemovePodSandbox \"4b7aad288c67dff02c3a442cbd65a2193da3fa12f3a88542c6bcc0077a119ba6\" returns successfully" May 9 23:45:59.304505 kubelet[1771]: E0509 23:45:59.304406 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:46:00.305274 kubelet[1771]: E0509 23:46:00.305232 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:46:01.306389 kubelet[1771]: E0509 23:46:01.306347 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:46:02.307403 kubelet[1771]: E0509 23:46:02.307356 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:46:03.307501 kubelet[1771]: E0509 23:46:03.307457 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:46:04.308302 kubelet[1771]: E0509 23:46:04.308256 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 23:46:05.309464 kubelet[1771]: E0509 23:46:05.309401 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"