May 13 23:40:37.905374 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] May 13 23:40:37.905396 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue May 13 22:16:18 -00 2025 May 13 23:40:37.905406 kernel: KASLR enabled May 13 23:40:37.905412 kernel: efi: EFI v2.7 by EDK II May 13 23:40:37.905417 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb4ff018 ACPI 2.0=0xd93ef018 RNG=0xd93efa18 MEMRESERVE=0xd91e1f18 May 13 23:40:37.905423 kernel: random: crng init done May 13 23:40:37.905430 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7 May 13 23:40:37.905436 kernel: secureboot: Secure boot enabled May 13 23:40:37.905442 kernel: ACPI: Early table checksum verification disabled May 13 23:40:37.905448 kernel: ACPI: RSDP 0x00000000D93EF018 000024 (v02 BOCHS ) May 13 23:40:37.905455 kernel: ACPI: XSDT 0x00000000D93EFF18 000064 (v01 BOCHS BXPC 00000001 01000013) May 13 23:40:37.905461 kernel: ACPI: FACP 0x00000000D93EFB18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:40:37.905467 kernel: ACPI: DSDT 0x00000000D93ED018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:40:37.905473 kernel: ACPI: APIC 0x00000000D93EFC98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:40:37.905480 kernel: ACPI: PPTT 0x00000000D93EF098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:40:37.905487 kernel: ACPI: GTDT 0x00000000D93EF818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:40:37.905493 kernel: ACPI: MCFG 0x00000000D93EFA98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:40:37.905500 kernel: ACPI: SPCR 0x00000000D93EF918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:40:37.905509 kernel: ACPI: DBG2 0x00000000D93EF998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:40:37.905516 kernel: ACPI: IORT 0x00000000D93EF198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:40:37.905522 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 May 13 23:40:37.905528 kernel: NUMA: Failed to initialise from firmware May 13 23:40:37.905534 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] May 13 23:40:37.905540 kernel: NUMA: NODE_DATA [mem 0xdc729800-0xdc72efff] May 13 23:40:37.905548 kernel: Zone ranges: May 13 23:40:37.905556 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] May 13 23:40:37.905562 kernel: DMA32 empty May 13 23:40:37.905568 kernel: Normal empty May 13 23:40:37.905574 kernel: Movable zone start for each node May 13 23:40:37.905579 kernel: Early memory node ranges May 13 23:40:37.905586 kernel: node 0: [mem 0x0000000040000000-0x00000000d93effff] May 13 23:40:37.905592 kernel: node 0: [mem 0x00000000d93f0000-0x00000000d972ffff] May 13 23:40:37.905598 kernel: node 0: [mem 0x00000000d9730000-0x00000000dcbfffff] May 13 23:40:37.905604 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff] May 13 23:40:37.905610 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] May 13 23:40:37.905616 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] May 13 23:40:37.905622 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges May 13 23:40:37.905644 kernel: psci: probing for conduit method from ACPI. May 13 23:40:37.905650 kernel: psci: PSCIv1.1 detected in firmware. 
May 13 23:40:37.905656 kernel: psci: Using standard PSCI v0.2 function IDs May 13 23:40:37.905666 kernel: psci: Trusted OS migration not required May 13 23:40:37.905672 kernel: psci: SMC Calling Convention v1.1 May 13 23:40:37.905678 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) May 13 23:40:37.905685 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 May 13 23:40:37.905693 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 May 13 23:40:37.905700 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 May 13 23:40:37.905707 kernel: Detected PIPT I-cache on CPU0 May 13 23:40:37.905713 kernel: CPU features: detected: GIC system register CPU interface May 13 23:40:37.905720 kernel: CPU features: detected: Hardware dirty bit management May 13 23:40:37.905726 kernel: CPU features: detected: Spectre-v4 May 13 23:40:37.905732 kernel: CPU features: detected: Spectre-BHB May 13 23:40:37.905738 kernel: CPU features: kernel page table isolation forced ON by KASLR May 13 23:40:37.905745 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 13 23:40:37.905752 kernel: CPU features: detected: ARM erratum 1418040 May 13 23:40:37.905760 kernel: CPU features: detected: SSBS not fully self-synchronizing May 13 23:40:37.905766 kernel: alternatives: applying boot alternatives May 13 23:40:37.905774 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=3174b2682629aa8ad4069807ed6fd62c10f62266ee1e150a1104f2a2fb6489b5 May 13 23:40:37.905780 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 13 23:40:37.905787 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 13 23:40:37.905793 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 13 23:40:37.905799 kernel: Fallback order for Node 0: 0 May 13 23:40:37.905806 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 May 13 23:40:37.905812 kernel: Policy zone: DMA May 13 23:40:37.905819 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 13 23:40:37.905827 kernel: software IO TLB: area num 4. May 13 23:40:37.905834 kernel: software IO TLB: mapped [mem 0x00000000d2800000-0x00000000d6800000] (64MB) May 13 23:40:37.905841 kernel: Memory: 2385752K/2572288K available (10368K kernel code, 2186K rwdata, 8100K rodata, 38464K init, 897K bss, 186536K reserved, 0K cma-reserved) May 13 23:40:37.905848 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 13 23:40:37.905854 kernel: rcu: Preemptible hierarchical RCU implementation. May 13 23:40:37.905861 kernel: rcu: RCU event tracing is enabled. May 13 23:40:37.905868 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 13 23:40:37.905874 kernel: Trampoline variant of Tasks RCU enabled. May 13 23:40:37.905880 kernel: Tracing variant of Tasks RCU enabled. May 13 23:40:37.905887 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 13 23:40:37.905893 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 13 23:40:37.905900 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 13 23:40:37.905915 kernel: GICv3: 256 SPIs implemented May 13 23:40:37.905922 kernel: GICv3: 0 Extended SPIs implemented May 13 23:40:37.905939 kernel: Root IRQ handler: gic_handle_irq May 13 23:40:37.905946 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI May 13 23:40:37.905952 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 May 13 23:40:37.905959 kernel: ITS [mem 0x08080000-0x0809ffff] May 13 23:40:37.905965 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) May 13 23:40:37.905972 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) May 13 23:40:37.905978 kernel: GICv3: using LPI property table @0x00000000400f0000 May 13 23:40:37.905985 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 May 13 23:40:37.905991 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 13 23:40:37.906000 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:40:37.906006 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 13 23:40:37.906013 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 13 23:40:37.906019 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 13 23:40:37.906025 kernel: arm-pv: using stolen time PV May 13 23:40:37.906032 kernel: Console: colour dummy device 80x25 May 13 23:40:37.906039 kernel: ACPI: Core revision 20230628 May 13 23:40:37.906046 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 13 23:40:37.906052 kernel: pid_max: default: 32768 minimum: 301 May 13 23:40:37.906059 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 13 23:40:37.906067 kernel: landlock: Up and running. May 13 23:40:37.906073 kernel: SELinux: Initializing. May 13 23:40:37.906079 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 13 23:40:37.906086 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 13 23:40:37.906093 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3) May 13 23:40:37.906100 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 13 23:40:37.906106 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 13 23:40:37.906113 kernel: rcu: Hierarchical SRCU implementation. May 13 23:40:37.906120 kernel: rcu: Max phase no-delay instances is 400. May 13 23:40:37.906128 kernel: Platform MSI: ITS@0x8080000 domain created May 13 23:40:37.906134 kernel: PCI/MSI: ITS@0x8080000 domain created May 13 23:40:37.906141 kernel: Remapping and enabling EFI services. May 13 23:40:37.906147 kernel: smp: Bringing up secondary CPUs ... 
May 13 23:40:37.906154 kernel: Detected PIPT I-cache on CPU1 May 13 23:40:37.906160 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 May 13 23:40:37.906167 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 May 13 23:40:37.906174 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:40:37.906180 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 13 23:40:37.906187 kernel: Detected PIPT I-cache on CPU2 May 13 23:40:37.906195 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 May 13 23:40:37.906202 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 May 13 23:40:37.906214 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:40:37.906222 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] May 13 23:40:37.906229 kernel: Detected PIPT I-cache on CPU3 May 13 23:40:37.906236 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 May 13 23:40:37.906243 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 May 13 23:40:37.906250 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:40:37.906256 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] May 13 23:40:37.906263 kernel: smp: Brought up 1 node, 4 CPUs May 13 23:40:37.906270 kernel: SMP: Total of 4 processors activated. May 13 23:40:37.906278 kernel: CPU features: detected: 32-bit EL0 Support May 13 23:40:37.906285 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 13 23:40:37.906295 kernel: CPU features: detected: Common not Private translations May 13 23:40:37.906302 kernel: CPU features: detected: CRC32 instructions May 13 23:40:37.906311 kernel: CPU features: detected: Enhanced Virtualization Traps May 13 23:40:37.906318 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 13 23:40:37.906332 kernel: CPU features: detected: LSE atomic instructions May 13 23:40:37.906339 kernel: CPU features: detected: Privileged Access Never May 13 23:40:37.906346 kernel: CPU features: detected: RAS Extension Support May 13 23:40:37.906353 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 13 23:40:37.906360 kernel: CPU: All CPU(s) started at EL1 May 13 23:40:37.906367 kernel: alternatives: applying system-wide alternatives May 13 23:40:37.906374 kernel: devtmpfs: initialized May 13 23:40:37.906381 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 13 23:40:37.906388 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 13 23:40:37.906396 kernel: pinctrl core: initialized pinctrl subsystem May 13 23:40:37.906403 kernel: SMBIOS 3.0.0 present. 
May 13 23:40:37.906410 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 May 13 23:40:37.906417 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 13 23:40:37.906424 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 13 23:40:37.906431 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 13 23:40:37.906438 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 13 23:40:37.906445 kernel: audit: initializing netlink subsys (disabled) May 13 23:40:37.906452 kernel: audit: type=2000 audit(0.022:1): state=initialized audit_enabled=0 res=1 May 13 23:40:37.906460 kernel: thermal_sys: Registered thermal governor 'step_wise' May 13 23:40:37.906467 kernel: cpuidle: using governor menu May 13 23:40:37.906474 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. May 13 23:40:37.906481 kernel: ASID allocator initialised with 32768 entries May 13 23:40:37.906488 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 13 23:40:37.906495 kernel: Serial: AMBA PL011 UART driver May 13 23:40:37.906502 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL May 13 23:40:37.906508 kernel: Modules: 0 pages in range for non-PLT usage May 13 23:40:37.906515 kernel: Modules: 509232 pages in range for PLT usage May 13 23:40:37.906524 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 13 23:40:37.906531 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page May 13 23:40:37.906538 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages May 13 23:40:37.906545 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page May 13 23:40:37.906552 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 13 23:40:37.906560 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page May 13 23:40:37.906568 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages May 13 23:40:37.906575 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page May 13 23:40:37.906581 kernel: ACPI: Added _OSI(Module Device) May 13 23:40:37.906590 kernel: ACPI: Added _OSI(Processor Device) May 13 23:40:37.906597 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 13 23:40:37.906603 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 13 23:40:37.906610 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 13 23:40:37.906617 kernel: ACPI: Interpreter enabled May 13 23:40:37.906624 kernel: ACPI: Using GIC for interrupt routing May 13 23:40:37.906631 kernel: ACPI: MCFG table detected, 1 entries May 13 23:40:37.906637 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA May 13 23:40:37.906644 kernel: printk: console [ttyAMA0] enabled May 13 23:40:37.906653 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 13 23:40:37.906795 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 13 23:40:37.906870 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] May 13 23:40:37.906977 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] May 13 23:40:37.907047 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 May 13 23:40:37.907114 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] May 13 23:40:37.907124 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] May 13 23:40:37.907135 
kernel: PCI host bridge to bus 0000:00 May 13 23:40:37.907208 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] May 13 23:40:37.907268 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] May 13 23:40:37.907328 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] May 13 23:40:37.907387 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 13 23:40:37.907475 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 May 13 23:40:37.907552 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 May 13 23:40:37.907627 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] May 13 23:40:37.907702 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] May 13 23:40:37.907775 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] May 13 23:40:37.907852 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] May 13 23:40:37.907956 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] May 13 23:40:37.908025 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] May 13 23:40:37.908133 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] May 13 23:40:37.908194 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] May 13 23:40:37.908253 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] May 13 23:40:37.908262 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 May 13 23:40:37.908269 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 May 13 23:40:37.908276 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 May 13 23:40:37.908283 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 May 13 23:40:37.908290 kernel: iommu: Default domain type: Translated May 13 23:40:37.908300 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 13 23:40:37.908307 kernel: efivars: Registered efivars operations May 13 23:40:37.908314 kernel: vgaarb: loaded May 13 23:40:37.908321 kernel: clocksource: Switched to clocksource arch_sys_counter May 13 23:40:37.908328 kernel: VFS: Disk quotas dquot_6.6.0 May 13 23:40:37.908335 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 13 23:40:37.908342 kernel: pnp: PnP ACPI init May 13 23:40:37.908413 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved May 13 23:40:37.908423 kernel: pnp: PnP ACPI: found 1 devices May 13 23:40:37.908432 kernel: NET: Registered PF_INET protocol family May 13 23:40:37.908439 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 13 23:40:37.908446 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 13 23:40:37.908453 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 13 23:40:37.908460 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 13 23:40:37.908467 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 13 23:40:37.908474 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 13 23:40:37.908481 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 13 23:40:37.908488 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 13 23:40:37.908497 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 13 23:40:37.908504 kernel: PCI: CLS 0 bytes, default 64 May 13 23:40:37.908510 kernel: kvm [1]: HYP mode not available 
May 13 23:40:37.908517 kernel: Initialise system trusted keyrings May 13 23:40:37.908524 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 13 23:40:37.908531 kernel: Key type asymmetric registered May 13 23:40:37.908538 kernel: Asymmetric key parser 'x509' registered May 13 23:40:37.908545 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 13 23:40:37.908552 kernel: io scheduler mq-deadline registered May 13 23:40:37.908561 kernel: io scheduler kyber registered May 13 23:40:37.908568 kernel: io scheduler bfq registered May 13 23:40:37.908575 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 13 23:40:37.908583 kernel: ACPI: button: Power Button [PWRB] May 13 23:40:37.908590 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 13 23:40:37.908656 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) May 13 23:40:37.908666 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 13 23:40:37.908672 kernel: thunder_xcv, ver 1.0 May 13 23:40:37.908679 kernel: thunder_bgx, ver 1.0 May 13 23:40:37.908688 kernel: nicpf, ver 1.0 May 13 23:40:37.908695 kernel: nicvf, ver 1.0 May 13 23:40:37.908768 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 13 23:40:37.908830 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-13T23:40:37 UTC (1747179637) May 13 23:40:37.908840 kernel: hid: raw HID events driver (C) Jiri Kosina May 13 23:40:37.908847 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 13 23:40:37.908854 kernel: watchdog: Delayed init of the lockup detector failed: -19 May 13 23:40:37.908861 kernel: watchdog: Hard watchdog permanently disabled May 13 23:40:37.908870 kernel: NET: Registered PF_INET6 protocol family May 13 23:40:37.908877 kernel: Segment Routing with IPv6 May 13 23:40:37.908883 kernel: In-situ OAM (IOAM) with IPv6 May 13 23:40:37.908890 kernel: NET: Registered PF_PACKET protocol family May 13 23:40:37.908897 kernel: Key type dns_resolver registered May 13 23:40:37.908911 kernel: registered taskstats version 1 May 13 23:40:37.908918 kernel: Loading compiled-in X.509 certificates May 13 23:40:37.908925 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 568a15bbab977599d8f910f319ba50c03c8a57bd' May 13 23:40:37.908941 kernel: Key type .fscrypt registered May 13 23:40:37.908950 kernel: Key type fscrypt-provisioning registered May 13 23:40:37.908957 kernel: ima: No TPM chip found, activating TPM-bypass! May 13 23:40:37.908964 kernel: ima: Allocated hash algorithm: sha1 May 13 23:40:37.908971 kernel: ima: No architecture policies found May 13 23:40:37.908978 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 13 23:40:37.908984 kernel: clk: Disabling unused clocks May 13 23:40:37.908991 kernel: Freeing unused kernel memory: 38464K May 13 23:40:37.908998 kernel: Run /init as init process May 13 23:40:37.909005 kernel: with arguments: May 13 23:40:37.909013 kernel: /init May 13 23:40:37.909020 kernel: with environment: May 13 23:40:37.909027 kernel: HOME=/ May 13 23:40:37.909033 kernel: TERM=linux May 13 23:40:37.909040 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 13 23:40:37.909048 systemd[1]: Successfully made /usr/ read-only. 
May 13 23:40:37.909058 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 13 23:40:37.909066 systemd[1]: Detected virtualization kvm. May 13 23:40:37.909074 systemd[1]: Detected architecture arm64. May 13 23:40:37.909082 systemd[1]: Running in initrd. May 13 23:40:37.909089 systemd[1]: No hostname configured, using default hostname. May 13 23:40:37.909097 systemd[1]: Hostname set to . May 13 23:40:37.909104 systemd[1]: Initializing machine ID from VM UUID. May 13 23:40:37.909111 systemd[1]: Queued start job for default target initrd.target. May 13 23:40:37.909119 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 23:40:37.909126 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 23:40:37.909136 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 13 23:40:37.909144 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 23:40:37.909151 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 13 23:40:37.909159 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 13 23:40:37.909168 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 13 23:40:37.909175 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 13 23:40:37.909185 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 23:40:37.909192 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 23:40:37.909200 systemd[1]: Reached target paths.target - Path Units. May 13 23:40:37.909207 systemd[1]: Reached target slices.target - Slice Units. May 13 23:40:37.909215 systemd[1]: Reached target swap.target - Swaps. May 13 23:40:37.909222 systemd[1]: Reached target timers.target - Timer Units. May 13 23:40:37.909230 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 13 23:40:37.909237 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 23:40:37.909245 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 13 23:40:37.909254 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 13 23:40:37.909261 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 13 23:40:37.909269 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 13 23:40:37.909277 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 13 23:40:37.909284 systemd[1]: Reached target sockets.target - Socket Units. May 13 23:40:37.909296 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 13 23:40:37.909304 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 23:40:37.909311 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 13 23:40:37.909318 systemd[1]: Starting systemd-fsck-usr.service... 
May 13 23:40:37.909328 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 23:40:37.909335 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 23:40:37.909343 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:40:37.909351 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 13 23:40:37.909359 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 23:40:37.909368 systemd[1]: Finished systemd-fsck-usr.service. May 13 23:40:37.909376 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:40:37.909384 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 23:40:37.909392 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 13 23:40:37.909416 systemd-journald[236]: Collecting audit messages is disabled. May 13 23:40:37.909437 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 13 23:40:37.909445 systemd-journald[236]: Journal started May 13 23:40:37.909464 systemd-journald[236]: Runtime Journal (/run/log/journal/88acab9b117348cc858eb6f16e1644ce) is 5.9M, max 47.3M, 41.4M free. May 13 23:40:37.898242 systemd-modules-load[239]: Inserted module 'overlay' May 13 23:40:37.911499 systemd[1]: Started systemd-journald.service - Journal Service. May 13 23:40:37.912945 kernel: Bridge firewalling registered May 13 23:40:37.912994 systemd-modules-load[239]: Inserted module 'br_netfilter' May 13 23:40:37.913428 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 23:40:37.915751 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 23:40:37.921250 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 23:40:37.922836 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 23:40:37.926074 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 13 23:40:37.933152 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 23:40:37.935711 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 13 23:40:37.941206 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 23:40:37.943968 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 23:40:37.945780 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 23:40:37.949371 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 23:40:37.951537 dracut-cmdline[273]: dracut-dracut-053 May 13 23:40:37.953985 dracut-cmdline[273]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=3174b2682629aa8ad4069807ed6fd62c10f62266ee1e150a1104f2a2fb6489b5 May 13 23:40:37.991428 systemd-resolved[286]: Positive Trust Anchors: May 13 23:40:37.991448 systemd-resolved[286]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 23:40:37.991480 systemd-resolved[286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 23:40:37.996870 systemd-resolved[286]: Defaulting to hostname 'linux'. May 13 23:40:37.998013 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 23:40:38.001315 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 23:40:38.024963 kernel: SCSI subsystem initialized May 13 23:40:38.029943 kernel: Loading iSCSI transport class v2.0-870. May 13 23:40:38.038964 kernel: iscsi: registered transport (tcp) May 13 23:40:38.052952 kernel: iscsi: registered transport (qla4xxx) May 13 23:40:38.052972 kernel: QLogic iSCSI HBA Driver May 13 23:40:38.096182 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 13 23:40:38.098616 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 13 23:40:38.130158 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 13 23:40:38.130222 kernel: device-mapper: uevent: version 1.0.3 May 13 23:40:38.131623 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 13 23:40:38.177962 kernel: raid6: neonx8 gen() 15752 MB/s May 13 23:40:38.194950 kernel: raid6: neonx4 gen() 15750 MB/s May 13 23:40:38.211948 kernel: raid6: neonx2 gen() 13111 MB/s May 13 23:40:38.228954 kernel: raid6: neonx1 gen() 10417 MB/s May 13 23:40:38.245948 kernel: raid6: int64x8 gen() 6730 MB/s May 13 23:40:38.262952 kernel: raid6: int64x4 gen() 7290 MB/s May 13 23:40:38.279948 kernel: raid6: int64x2 gen() 6070 MB/s May 13 23:40:38.297042 kernel: raid6: int64x1 gen() 5025 MB/s May 13 23:40:38.297056 kernel: raid6: using algorithm neonx8 gen() 15752 MB/s May 13 23:40:38.315035 kernel: raid6: .... xor() 11830 MB/s, rmw enabled May 13 23:40:38.315054 kernel: raid6: using neon recovery algorithm May 13 23:40:38.320328 kernel: xor: measuring software checksum speed May 13 23:40:38.320348 kernel: 8regs : 21271 MB/sec May 13 23:40:38.320988 kernel: 32regs : 21693 MB/sec May 13 23:40:38.322238 kernel: arm64_neon : 27898 MB/sec May 13 23:40:38.322259 kernel: xor: using function: arm64_neon (27898 MB/sec) May 13 23:40:38.373955 kernel: Btrfs loaded, zoned=no, fsverity=no May 13 23:40:38.385971 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 13 23:40:38.388426 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 23:40:38.413924 systemd-udevd[462]: Using default interface naming scheme 'v255'. May 13 23:40:38.417761 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 23:40:38.420631 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
May 13 23:40:38.443892 dracut-pre-trigger[471]: rd.md=0: removing MD RAID activation May 13 23:40:38.467793 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 13 23:40:38.469973 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 13 23:40:38.531687 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 13 23:40:38.536091 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 13 23:40:38.552778 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 13 23:40:38.554674 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 13 23:40:38.556988 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 23:40:38.559687 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 13 23:40:38.562585 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 13 23:40:38.582279 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 13 23:40:38.592962 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues May 13 23:40:38.595045 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 13 23:40:38.598117 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 23:40:38.598243 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 23:40:38.603705 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 23:40:38.604866 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 23:40:38.611676 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 13 23:40:38.611698 kernel: GPT:9289727 != 19775487 May 13 23:40:38.611707 kernel: GPT:Alternate GPT header not at the end of the disk. May 13 23:40:38.611716 kernel: GPT:9289727 != 19775487 May 13 23:40:38.611727 kernel: GPT: Use GNU Parted to correct GPT errors. May 13 23:40:38.611736 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 23:40:38.605026 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:40:38.611692 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:40:38.615298 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:40:38.624958 kernel: BTRFS: device fsid ee830c17-a93d-4109-bd12-3fec8ef6763d devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (516) May 13 23:40:38.628953 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (514) May 13 23:40:38.647185 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 13 23:40:38.648720 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:40:38.666126 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 13 23:40:38.673551 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 13 23:40:38.674788 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 13 23:40:38.684649 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 13 23:40:38.686689 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
May 13 23:40:38.688757 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 23:40:38.707684 disk-uuid[551]: Primary Header is updated. May 13 23:40:38.707684 disk-uuid[551]: Secondary Entries is updated. May 13 23:40:38.707684 disk-uuid[551]: Secondary Header is updated. May 13 23:40:38.715968 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 23:40:38.722054 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 23:40:39.726901 disk-uuid[552]: The operation has completed successfully. May 13 23:40:39.727994 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 23:40:39.754372 systemd[1]: disk-uuid.service: Deactivated successfully. May 13 23:40:39.754484 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 13 23:40:39.789711 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 13 23:40:39.808659 sh[573]: Success May 13 23:40:39.822960 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 13 23:40:39.855697 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 13 23:40:39.858091 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 13 23:40:39.880530 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 13 23:40:39.887920 kernel: BTRFS info (device dm-0): first mount of filesystem ee830c17-a93d-4109-bd12-3fec8ef6763d May 13 23:40:39.887958 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 13 23:40:39.887969 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 13 23:40:39.887978 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 13 23:40:39.888728 kernel: BTRFS info (device dm-0): using free space tree May 13 23:40:39.892546 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 13 23:40:39.893899 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 13 23:40:39.894636 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 13 23:40:39.897329 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 13 23:40:39.922474 kernel: BTRFS info (device vda6): first mount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db May 13 23:40:39.922511 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 13 23:40:39.922522 kernel: BTRFS info (device vda6): using free space tree May 13 23:40:39.924943 kernel: BTRFS info (device vda6): auto enabling async discard May 13 23:40:39.929954 kernel: BTRFS info (device vda6): last unmount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db May 13 23:40:39.933283 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 13 23:40:39.935473 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 13 23:40:39.998057 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 13 23:40:40.001266 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
May 13 23:40:40.042413 ignition[669]: Ignition 2.20.0 May 13 23:40:40.042423 ignition[669]: Stage: fetch-offline May 13 23:40:40.042451 ignition[669]: no configs at "/usr/lib/ignition/base.d" May 13 23:40:40.042459 ignition[669]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 23:40:40.042613 ignition[669]: parsed url from cmdline: "" May 13 23:40:40.042616 ignition[669]: no config URL provided May 13 23:40:40.042621 ignition[669]: reading system config file "/usr/lib/ignition/user.ign" May 13 23:40:40.047097 systemd-networkd[757]: lo: Link UP May 13 23:40:40.042627 ignition[669]: no config at "/usr/lib/ignition/user.ign" May 13 23:40:40.047101 systemd-networkd[757]: lo: Gained carrier May 13 23:40:40.042653 ignition[669]: op(1): [started] loading QEMU firmware config module May 13 23:40:40.047894 systemd-networkd[757]: Enumeration completed May 13 23:40:40.042658 ignition[669]: op(1): executing: "modprobe" "qemu_fw_cfg" May 13 23:40:40.048376 systemd-networkd[757]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 23:40:40.054068 ignition[669]: op(1): [finished] loading QEMU firmware config module May 13 23:40:40.048379 systemd-networkd[757]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 23:40:40.048901 systemd-networkd[757]: eth0: Link UP May 13 23:40:40.048904 systemd-networkd[757]: eth0: Gained carrier May 13 23:40:40.048910 systemd-networkd[757]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 23:40:40.049586 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 23:40:40.050729 systemd[1]: Reached target network.target - Network. May 13 23:40:40.068233 ignition[669]: parsing config with SHA512: 3ba9142471d4049a21ada91a4c0de5e5c16b98a406261e6ccb57cef554c4431572661732482b156fb883a491faf4e214bb90cc48045abd0649e58f6f37948cfe May 13 23:40:40.073213 unknown[669]: fetched base config from "system" May 13 23:40:40.073223 unknown[669]: fetched user config from "qemu" May 13 23:40:40.073589 ignition[669]: fetch-offline: fetch-offline passed May 13 23:40:40.073681 ignition[669]: Ignition finished successfully May 13 23:40:40.076065 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 13 23:40:40.077988 systemd-networkd[757]: eth0: DHCPv4 address 10.0.0.43/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 13 23:40:40.078420 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 13 23:40:40.079275 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 13 23:40:40.108405 ignition[772]: Ignition 2.20.0 May 13 23:40:40.108416 ignition[772]: Stage: kargs May 13 23:40:40.108570 ignition[772]: no configs at "/usr/lib/ignition/base.d" May 13 23:40:40.108580 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 23:40:40.111277 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 13 23:40:40.109256 ignition[772]: kargs: kargs passed May 13 23:40:40.109298 ignition[772]: Ignition finished successfully May 13 23:40:40.113732 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
May 13 23:40:40.135809 ignition[782]: Ignition 2.20.0 May 13 23:40:40.135819 ignition[782]: Stage: disks May 13 23:40:40.136038 ignition[782]: no configs at "/usr/lib/ignition/base.d" May 13 23:40:40.136049 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 23:40:40.136678 ignition[782]: disks: disks passed May 13 23:40:40.136720 ignition[782]: Ignition finished successfully May 13 23:40:40.140215 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 13 23:40:40.141586 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 13 23:40:40.144287 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 13 23:40:40.145519 systemd[1]: Reached target local-fs.target - Local File Systems. May 13 23:40:40.147386 systemd[1]: Reached target sysinit.target - System Initialization. May 13 23:40:40.149065 systemd[1]: Reached target basic.target - Basic System. May 13 23:40:40.151677 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 13 23:40:40.175489 systemd-fsck[792]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 13 23:40:40.179025 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 13 23:40:40.181241 systemd[1]: Mounting sysroot.mount - /sysroot... May 13 23:40:40.237952 kernel: EXT4-fs (vda9): mounted filesystem 9f8d74e6-c079-469f-823a-18a62077a2c7 r/w with ordered data mode. Quota mode: none. May 13 23:40:40.238691 systemd[1]: Mounted sysroot.mount - /sysroot. May 13 23:40:40.239921 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 13 23:40:40.244060 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 13 23:40:40.246333 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 13 23:40:40.247442 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 13 23:40:40.247481 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 13 23:40:40.247503 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 13 23:40:40.265618 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 13 23:40:40.267725 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 13 23:40:40.275085 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (800) May 13 23:40:40.275133 kernel: BTRFS info (device vda6): first mount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db May 13 23:40:40.275145 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 13 23:40:40.276633 kernel: BTRFS info (device vda6): using free space tree May 13 23:40:40.279171 kernel: BTRFS info (device vda6): auto enabling async discard May 13 23:40:40.280497 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 13 23:40:40.310613 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory May 13 23:40:40.313913 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory May 13 23:40:40.317110 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory May 13 23:40:40.320547 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory May 13 23:40:40.396123 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
May 13 23:40:40.398127 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 13 23:40:40.399658 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 13 23:40:40.414954 kernel: BTRFS info (device vda6): last unmount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db May 13 23:40:40.433117 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 13 23:40:40.442614 ignition[914]: INFO : Ignition 2.20.0 May 13 23:40:40.442614 ignition[914]: INFO : Stage: mount May 13 23:40:40.444309 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 23:40:40.444309 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 23:40:40.444309 ignition[914]: INFO : mount: mount passed May 13 23:40:40.444309 ignition[914]: INFO : Ignition finished successfully May 13 23:40:40.444615 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 13 23:40:40.447135 systemd[1]: Starting ignition-files.service - Ignition (files)... May 13 23:40:40.885966 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 13 23:40:40.887472 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 13 23:40:40.903951 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (928) May 13 23:40:40.903990 kernel: BTRFS info (device vda6): first mount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db May 13 23:40:40.906509 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 13 23:40:40.906526 kernel: BTRFS info (device vda6): using free space tree May 13 23:40:40.908949 kernel: BTRFS info (device vda6): auto enabling async discard May 13 23:40:40.909849 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 13 23:40:40.934529 ignition[945]: INFO : Ignition 2.20.0 May 13 23:40:40.934529 ignition[945]: INFO : Stage: files May 13 23:40:40.936236 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 23:40:40.936236 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 23:40:40.936236 ignition[945]: DEBUG : files: compiled without relabeling support, skipping May 13 23:40:40.939808 ignition[945]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 13 23:40:40.939808 ignition[945]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 13 23:40:40.942729 ignition[945]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 13 23:40:40.942729 ignition[945]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 13 23:40:40.942729 ignition[945]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 13 23:40:40.942086 unknown[945]: wrote ssh authorized keys file for user: core May 13 23:40:40.951329 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" May 13 23:40:40.951329 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" May 13 23:40:40.951329 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" May 13 23:40:40.951329 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 13 23:40:40.951329 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 13 23:40:40.951329 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 13 23:40:40.951329 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 13 23:40:40.951329 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 May 13 23:40:41.277814 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK May 13 23:40:41.623343 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 13 23:40:41.623343 ignition[945]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" May 13 23:40:41.627011 ignition[945]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 13 23:40:41.627011 ignition[945]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 13 23:40:41.627011 ignition[945]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" May 13 23:40:41.627011 ignition[945]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" May 13 23:40:41.646472 ignition[945]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" May 13 23:40:41.650471 ignition[945]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 13 23:40:41.652015 ignition[945]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" May 13 23:40:41.652015 ignition[945]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" May 13 23:40:41.652015 ignition[945]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" May 13 23:40:41.652015 ignition[945]: INFO : files: files passed May 13 23:40:41.652015 ignition[945]: INFO : Ignition finished successfully May 13 23:40:41.652485 systemd[1]: Finished ignition-files.service - Ignition (files). May 13 23:40:41.655341 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 13 23:40:41.657545 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 13 23:40:41.673154 systemd[1]: ignition-quench.service: Deactivated successfully. May 13 23:40:41.674250 initrd-setup-root-after-ignition[975]: grep: /sysroot/oem/oem-release: No such file or directory May 13 23:40:41.675348 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
May 13 23:40:41.679460 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 23:40:41.679460 initrd-setup-root-after-ignition[977]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 13 23:40:41.682441 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 23:40:41.681891 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 23:40:41.683840 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 13 23:40:41.686533 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 13 23:40:41.717871 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 13 23:40:41.717998 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 13 23:40:41.720258 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 13 23:40:41.722185 systemd[1]: Reached target initrd.target - Initrd Default Target. May 13 23:40:41.724077 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 13 23:40:41.724849 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 13 23:40:41.748145 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 13 23:40:41.750677 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 13 23:40:41.776404 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 13 23:40:41.777700 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 23:40:41.779964 systemd[1]: Stopped target timers.target - Timer Units. May 13 23:40:41.781806 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 13 23:40:41.781965 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 13 23:40:41.784518 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 13 23:40:41.786696 systemd[1]: Stopped target basic.target - Basic System. May 13 23:40:41.788402 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 13 23:40:41.790162 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 13 23:40:41.792151 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 13 23:40:41.794120 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 13 23:40:41.796094 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 13 23:40:41.798118 systemd[1]: Stopped target sysinit.target - System Initialization. May 13 23:40:41.800177 systemd[1]: Stopped target local-fs.target - Local File Systems. May 13 23:40:41.801973 systemd[1]: Stopped target swap.target - Swaps. May 13 23:40:41.803652 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 13 23:40:41.803784 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 13 23:40:41.806165 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 13 23:40:41.808233 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 23:40:41.810350 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 13 23:40:41.810987 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
May 13 23:40:41.812584 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 13 23:40:41.812712 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 13 23:40:41.815646 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 13 23:40:41.815778 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 13 23:40:41.817852 systemd[1]: Stopped target paths.target - Path Units. May 13 23:40:41.819448 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 13 23:40:41.822989 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 23:40:41.824401 systemd[1]: Stopped target slices.target - Slice Units. May 13 23:40:41.826492 systemd[1]: Stopped target sockets.target - Socket Units. May 13 23:40:41.828103 systemd[1]: iscsid.socket: Deactivated successfully. May 13 23:40:41.828198 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 13 23:40:41.829850 systemd[1]: iscsiuio.socket: Deactivated successfully. May 13 23:40:41.829952 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 23:40:41.831615 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 13 23:40:41.831757 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 23:40:41.833594 systemd[1]: ignition-files.service: Deactivated successfully. May 13 23:40:41.833705 systemd[1]: Stopped ignition-files.service - Ignition (files). May 13 23:40:41.836057 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 13 23:40:41.838584 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 13 23:40:41.839955 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 13 23:40:41.840086 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 13 23:40:41.842098 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 13 23:40:41.842204 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 13 23:40:41.856191 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 13 23:40:41.856295 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 13 23:40:41.865543 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 13 23:40:41.866555 ignition[1001]: INFO : Ignition 2.20.0 May 13 23:40:41.866555 ignition[1001]: INFO : Stage: umount May 13 23:40:41.866555 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 23:40:41.866555 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 23:40:41.870617 ignition[1001]: INFO : umount: umount passed May 13 23:40:41.870617 ignition[1001]: INFO : Ignition finished successfully May 13 23:40:41.869714 systemd[1]: ignition-mount.service: Deactivated successfully. May 13 23:40:41.869864 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 13 23:40:41.871843 systemd[1]: Stopped target network.target - Network. May 13 23:40:41.873226 systemd[1]: ignition-disks.service: Deactivated successfully. May 13 23:40:41.873301 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 13 23:40:41.874908 systemd[1]: ignition-kargs.service: Deactivated successfully. May 13 23:40:41.874972 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 13 23:40:41.876792 systemd[1]: ignition-setup.service: Deactivated successfully. 
May 13 23:40:41.876849 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 13 23:40:41.878647 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 13 23:40:41.878693 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 13 23:40:41.880735 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 13 23:40:41.882559 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 13 23:40:41.889008 systemd[1]: systemd-resolved.service: Deactivated successfully. May 13 23:40:41.889128 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 13 23:40:41.892419 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 13 23:40:41.892681 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 13 23:40:41.892723 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 23:40:41.896336 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 13 23:40:41.898598 systemd[1]: systemd-networkd.service: Deactivated successfully. May 13 23:40:41.898705 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 13 23:40:41.901772 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 13 23:40:41.901985 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 13 23:40:41.902018 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 13 23:40:41.903993 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 13 23:40:41.904871 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 13 23:40:41.904951 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 13 23:40:41.906990 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 23:40:41.907039 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 13 23:40:41.909906 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 13 23:40:41.909965 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 13 23:40:41.912096 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 23:40:41.915257 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 13 23:40:41.921539 systemd[1]: sysroot-boot.service: Deactivated successfully. May 13 23:40:41.921630 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 13 23:40:41.926165 systemd[1]: systemd-udevd.service: Deactivated successfully. May 13 23:40:41.926298 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 23:40:41.928564 systemd[1]: network-cleanup.service: Deactivated successfully. May 13 23:40:41.928651 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 13 23:40:41.930944 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 13 23:40:41.930998 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 13 23:40:41.932157 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 13 23:40:41.932189 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 13 23:40:41.933925 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 13 23:40:41.933986 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
May 13 23:40:41.936697 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 13 23:40:41.936744 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 13 23:40:41.939360 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 23:40:41.939408 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 23:40:41.942156 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 13 23:40:41.942204 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 13 23:40:41.944636 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 13 23:40:41.945779 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 13 23:40:41.945846 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 23:40:41.948999 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 13 23:40:41.949042 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 23:40:41.951099 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 13 23:40:41.951145 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 13 23:40:41.953127 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 23:40:41.953174 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:40:41.956915 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 13 23:40:41.956992 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 13 23:40:41.959673 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 13 23:40:41.959781 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 13 23:40:41.961365 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 13 23:40:41.963730 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 13 23:40:41.980964 systemd[1]: Switching root. May 13 23:40:42.011278 systemd-journald[236]: Journal stopped May 13 23:40:42.742913 systemd-journald[236]: Received SIGTERM from PID 1 (systemd). May 13 23:40:42.742998 kernel: SELinux: policy capability network_peer_controls=1 May 13 23:40:42.743015 kernel: SELinux: policy capability open_perms=1 May 13 23:40:42.743025 kernel: SELinux: policy capability extended_socket_class=1 May 13 23:40:42.743034 kernel: SELinux: policy capability always_check_network=0 May 13 23:40:42.743044 kernel: SELinux: policy capability cgroup_seclabel=1 May 13 23:40:42.743054 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 13 23:40:42.743066 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 13 23:40:42.743078 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 13 23:40:42.743087 kernel: audit: type=1403 audit(1747179642.137:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 13 23:40:42.743098 systemd[1]: Successfully loaded SELinux policy in 35.045ms. May 13 23:40:42.743115 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.426ms. 
May 13 23:40:42.743127 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 13 23:40:42.743139 systemd[1]: Detected virtualization kvm. May 13 23:40:42.743150 systemd[1]: Detected architecture arm64. May 13 23:40:42.743160 systemd[1]: Detected first boot. May 13 23:40:42.743170 systemd[1]: Initializing machine ID from VM UUID. May 13 23:40:42.743180 zram_generator::config[1048]: No configuration found. May 13 23:40:42.743191 kernel: NET: Registered PF_VSOCK protocol family May 13 23:40:42.743200 systemd[1]: Populated /etc with preset unit settings. May 13 23:40:42.743211 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 13 23:40:42.743223 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 13 23:40:42.743233 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 13 23:40:42.743243 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 13 23:40:42.743254 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 13 23:40:42.743264 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 13 23:40:42.743275 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 13 23:40:42.743285 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 13 23:40:42.743295 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 13 23:40:42.743309 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 13 23:40:42.743323 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 13 23:40:42.743334 systemd[1]: Created slice user.slice - User and Session Slice. May 13 23:40:42.743345 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 23:40:42.743355 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 23:40:42.743366 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 13 23:40:42.743376 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 13 23:40:42.743386 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 13 23:40:42.743401 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 23:40:42.743412 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 13 23:40:42.743423 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 23:40:42.743433 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 13 23:40:42.743443 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 13 23:40:42.743454 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 13 23:40:42.743464 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 13 23:40:42.743474 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
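
[Editorial note] The systemd feature string logged above encodes compile-time options: "+" means the feature is built in, "-" means it was omitted. A small illustrative Python sketch that splits it for readability; the string itself is copied verbatim from the log entry.

# Split systemd's compile-time feature string (copied from the log above)
# into enabled (+) and disabled (-) options.
features = (
    "+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS "
    "+OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
    "+LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY "
    "-P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK "
    "-XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE"
)

enabled = [f[1:] for f in features.split() if f.startswith("+")]
disabled = [f[1:] for f in features.split() if f.startswith("-")]
print("enabled: ", ", ".join(enabled))
print("disabled:", ", ".join(disabled))
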
May 13 23:40:42.743484 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 13 23:40:42.743496 systemd[1]: Reached target slices.target - Slice Units. May 13 23:40:42.743506 systemd[1]: Reached target swap.target - Swaps. May 13 23:40:42.743517 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 13 23:40:42.743527 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 13 23:40:42.743539 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 13 23:40:42.743550 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 13 23:40:42.743560 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 13 23:40:42.743571 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 13 23:40:42.743581 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 13 23:40:42.743594 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 13 23:40:42.743604 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 13 23:40:42.743614 systemd[1]: Mounting media.mount - External Media Directory... May 13 23:40:42.743625 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 13 23:40:42.743635 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 13 23:40:42.743645 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 13 23:40:42.743655 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 13 23:40:42.743666 systemd[1]: Reached target machines.target - Containers. May 13 23:40:42.743675 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 13 23:40:42.743688 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 23:40:42.743698 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 23:40:42.743709 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 13 23:40:42.743719 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 23:40:42.743729 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 23:40:42.743739 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 23:40:42.743751 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 13 23:40:42.743763 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 23:40:42.743776 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 13 23:40:42.743787 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 13 23:40:42.743802 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 13 23:40:42.743812 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 13 23:40:42.743837 systemd[1]: Stopped systemd-fsck-usr.service. 
May 13 23:40:42.743847 kernel: fuse: init (API version 7.39) May 13 23:40:42.743857 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 23:40:42.743868 kernel: loop: module loaded May 13 23:40:42.743878 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 23:40:42.743900 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 23:40:42.743910 kernel: ACPI: bus type drm_connector registered May 13 23:40:42.743920 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 13 23:40:42.743971 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 13 23:40:42.743983 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 13 23:40:42.744014 systemd-journald[1121]: Collecting audit messages is disabled. May 13 23:40:42.744039 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 13 23:40:42.744052 systemd-journald[1121]: Journal started May 13 23:40:42.744073 systemd-journald[1121]: Runtime Journal (/run/log/journal/88acab9b117348cc858eb6f16e1644ce) is 5.9M, max 47.3M, 41.4M free. May 13 23:40:42.534455 systemd[1]: Queued start job for default target multi-user.target. May 13 23:40:42.549788 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 13 23:40:42.550188 systemd[1]: systemd-journald.service: Deactivated successfully. May 13 23:40:42.746443 systemd[1]: verity-setup.service: Deactivated successfully. May 13 23:40:42.746474 systemd[1]: Stopped verity-setup.service. May 13 23:40:42.751513 systemd[1]: Started systemd-journald.service - Journal Service. May 13 23:40:42.752218 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 13 23:40:42.753352 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 13 23:40:42.754649 systemd[1]: Mounted media.mount - External Media Directory. May 13 23:40:42.755731 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 13 23:40:42.757009 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 13 23:40:42.758256 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 13 23:40:42.760991 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 13 23:40:42.762460 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 23:40:42.763940 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 13 23:40:42.764119 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 13 23:40:42.765433 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 23:40:42.765594 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 23:40:42.768307 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 23:40:42.768470 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 23:40:42.769722 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 23:40:42.769895 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 23:40:42.771487 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
May 13 23:40:42.771640 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 13 23:40:42.772940 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 23:40:42.773106 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 23:40:42.775381 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 23:40:42.776806 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 13 23:40:42.778253 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 13 23:40:42.779731 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 13 23:40:42.791919 systemd[1]: Reached target network-pre.target - Preparation for Network. May 13 23:40:42.794363 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 13 23:40:42.796434 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 13 23:40:42.797637 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 13 23:40:42.797666 systemd[1]: Reached target local-fs.target - Local File Systems. May 13 23:40:42.799557 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 13 23:40:42.810691 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 13 23:40:42.812727 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 13 23:40:42.813818 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 23:40:42.815000 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 13 23:40:42.816820 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 13 23:40:42.818025 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 23:40:42.822036 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 13 23:40:42.823384 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 23:40:42.823727 systemd-journald[1121]: Time spent on flushing to /var/log/journal/88acab9b117348cc858eb6f16e1644ce is 15.422ms for 848 entries. May 13 23:40:42.823727 systemd-journald[1121]: System Journal (/var/log/journal/88acab9b117348cc858eb6f16e1644ce) is 8M, max 195.6M, 187.6M free. May 13 23:40:42.843952 systemd-journald[1121]: Received client request to flush runtime journal. May 13 23:40:42.824286 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 23:40:42.833095 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 13 23:40:42.837156 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 13 23:40:42.841355 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 13 23:40:42.844235 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 13 23:40:42.846507 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
May 13 23:40:42.849627 kernel: loop0: detected capacity change from 0 to 194096 May 13 23:40:42.849968 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 13 23:40:42.852566 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 13 23:40:42.854718 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 13 23:40:42.861575 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 13 23:40:42.864961 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 13 23:40:42.864456 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 13 23:40:42.868550 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 13 23:40:42.872019 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 23:40:42.874743 systemd-tmpfiles[1168]: ACLs are not supported, ignoring. May 13 23:40:42.874756 systemd-tmpfiles[1168]: ACLs are not supported, ignoring. May 13 23:40:42.880109 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 23:40:42.886115 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 13 23:40:42.889922 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 13 23:40:42.895566 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 13 23:40:42.900041 kernel: loop1: detected capacity change from 0 to 126448 May 13 23:40:42.901123 udevadm[1181]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 13 23:40:42.916376 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 13 23:40:42.920126 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 23:40:42.940627 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. May 13 23:40:42.940642 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. May 13 23:40:42.945448 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 23:40:42.948948 kernel: loop2: detected capacity change from 0 to 103832 May 13 23:40:42.981955 kernel: loop3: detected capacity change from 0 to 194096 May 13 23:40:42.987946 kernel: loop4: detected capacity change from 0 to 126448 May 13 23:40:42.993961 kernel: loop5: detected capacity change from 0 to 103832 May 13 23:40:42.998178 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 13 23:40:42.998557 (sd-merge)[1195]: Merged extensions into '/usr'. May 13 23:40:43.004031 systemd[1]: Reload requested from client PID 1167 ('systemd-sysext') (unit systemd-sysext.service)... May 13 23:40:43.004049 systemd[1]: Reloading... May 13 23:40:43.060966 zram_generator::config[1222]: No configuration found. May 13 23:40:43.108836 ldconfig[1161]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 13 23:40:43.153056 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:40:43.203192 systemd[1]: Reloading finished in 198 ms. May 13 23:40:43.219966 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
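
[Editorial note] The sd-merge entries above show systemd-sysext overlaying three extension images ('containerd-flatcar', 'docker-flatcar', 'kubernetes') onto /usr. As a hedged illustration only: systemd-sysext normally discovers *.raw images (or plain directories) under /etc/extensions, /run/extensions and /var/lib/extensions; those search paths are the documented defaults, not something read from this log. On this boot the kubernetes image is visible via the Ignition-created link shown earlier.

# Illustrative only: list candidate sysext images the way systemd-sysext would
# discover them. Search directories are assumed defaults, not taken from the log.
from pathlib import Path

SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

for d in SEARCH_DIRS:
    p = Path(d)
    if not p.is_dir():
        continue
    for entry in sorted(p.iterdir()):
        kind = "dir" if entry.is_dir() else "image"
        print(f"{d}: {entry.name} ({kind})")
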
May 13 23:40:43.221426 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 13 23:40:43.240155 systemd[1]: Starting ensure-sysext.service... May 13 23:40:43.241861 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 13 23:40:43.253624 systemd[1]: Reload requested from client PID 1257 ('systemctl') (unit ensure-sysext.service)... May 13 23:40:43.253643 systemd[1]: Reloading... May 13 23:40:43.258798 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 13 23:40:43.259026 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 13 23:40:43.259643 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 13 23:40:43.259850 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. May 13 23:40:43.259898 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. May 13 23:40:43.262469 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. May 13 23:40:43.262480 systemd-tmpfiles[1259]: Skipping /boot May 13 23:40:43.271585 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. May 13 23:40:43.271600 systemd-tmpfiles[1259]: Skipping /boot May 13 23:40:43.298958 zram_generator::config[1288]: No configuration found. May 13 23:40:43.384785 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:40:43.434885 systemd[1]: Reloading finished in 180 ms. May 13 23:40:43.445465 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 13 23:40:43.462203 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 23:40:43.469967 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 23:40:43.472409 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 13 23:40:43.482915 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 13 23:40:43.486102 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 23:40:43.491186 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 23:40:43.493363 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 13 23:40:43.498028 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 23:40:43.499150 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 23:40:43.502251 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 23:40:43.511478 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 23:40:43.513216 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 23:40:43.513376 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
May 13 23:40:43.514539 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 13 23:40:43.522053 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 13 23:40:43.526290 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 13 23:40:43.528490 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 23:40:43.528733 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 23:40:43.533925 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 23:40:43.534135 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 23:40:43.536127 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 23:40:43.536303 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 23:40:43.539100 systemd-udevd[1332]: Using default interface naming scheme 'v255'. May 13 23:40:43.545291 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 23:40:43.546692 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 23:40:43.552603 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 23:40:43.555256 augenrules[1360]: No rules May 13 23:40:43.561217 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 23:40:43.562414 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 23:40:43.562533 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 23:40:43.565174 systemd[1]: audit-rules.service: Deactivated successfully. May 13 23:40:43.565381 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 23:40:43.566914 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 13 23:40:43.568777 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 13 23:40:43.570454 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 23:40:43.570619 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 23:40:43.572248 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 23:40:43.574391 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 23:40:43.574555 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 23:40:43.576530 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 23:40:43.576729 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 23:40:43.584857 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 13 23:40:43.586882 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 13 23:40:43.602561 systemd[1]: Finished ensure-sysext.service. May 13 23:40:43.616006 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 23:40:43.617481 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 23:40:43.618757 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
May 13 23:40:43.630236 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 23:40:43.634181 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 23:40:43.637875 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 23:40:43.640282 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 23:40:43.640333 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 23:40:43.641966 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1371) May 13 23:40:43.643900 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 13 23:40:43.650784 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 13 23:40:43.652027 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 23:40:43.652556 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 23:40:43.652985 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 23:40:43.655394 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 23:40:43.655570 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 23:40:43.656665 augenrules[1398]: /sbin/augenrules: No change May 13 23:40:43.663923 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 13 23:40:43.671170 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 23:40:43.671359 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 23:40:43.677868 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 23:40:43.678078 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 23:40:43.684575 augenrules[1427]: No rules May 13 23:40:43.687187 systemd[1]: audit-rules.service: Deactivated successfully. May 13 23:40:43.687405 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 23:40:43.688669 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 23:40:43.688731 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 23:40:43.705860 systemd-resolved[1328]: Positive Trust Anchors: May 13 23:40:43.707620 systemd-resolved[1328]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 23:40:43.707655 systemd-resolved[1328]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 23:40:43.710362 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 13 23:40:43.713853 systemd-resolved[1328]: Defaulting to hostname 'linux'. May 13 23:40:43.717095 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 13 23:40:43.718858 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 23:40:43.720099 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 23:40:43.747481 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 13 23:40:43.748815 systemd[1]: Reached target time-set.target - System Time Set. May 13 23:40:43.751383 systemd-networkd[1410]: lo: Link UP May 13 23:40:43.751723 systemd-networkd[1410]: lo: Gained carrier May 13 23:40:43.752301 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 13 23:40:43.752814 systemd-networkd[1410]: Enumeration completed May 13 23:40:43.753365 systemd-networkd[1410]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 23:40:43.753440 systemd-networkd[1410]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 23:40:43.754093 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 23:40:43.755502 systemd[1]: Reached target network.target - Network. May 13 23:40:43.755960 systemd-networkd[1410]: eth0: Link UP May 13 23:40:43.756042 systemd-networkd[1410]: eth0: Gained carrier May 13 23:40:43.756100 systemd-networkd[1410]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 23:40:43.758339 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 13 23:40:43.762554 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 13 23:40:43.769114 systemd-networkd[1410]: eth0: DHCPv4 address 10.0.0.43/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 13 23:40:43.772427 systemd-timesyncd[1412]: Network configuration changed, trying to establish connection. May 13 23:40:44.213774 systemd-timesyncd[1412]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 13 23:40:44.213835 systemd-timesyncd[1412]: Initial clock synchronization to Tue 2025-05-13 23:40:44.213618 UTC. May 13 23:40:44.214040 systemd-resolved[1328]: Clock change detected. Flushing caches. May 13 23:40:44.224555 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 13 23:40:44.234604 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
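
[Editorial note] Once systemd-timesyncd reaches 10.0.0.1:123, the journal timestamps jump from 23:40:43.772427 to 23:40:44.213774 and systemd-resolved flushes its caches because of the clock change. The gap between those two adjacent entries is an upper bound on the step applied (it also includes the real elapsed time); a quick worked check:

# Approximate the clock step applied by systemd-timesyncd, read off the two
# adjacent journal entries around the "Contacted time server" message.
# This is elapsed time plus the step, so only a rough upper bound.
from datetime import datetime

before = datetime.strptime("23:40:43.772427", "%H:%M:%S.%f")
after = datetime.strptime("23:40:44.213774", "%H:%M:%S.%f")
print(f"apparent jump: {(after - before).total_seconds():.3f} s")  # ~0.441 s
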
May 13 23:40:44.253078 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 13 23:40:44.255915 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 13 23:40:44.280744 lvm[1451]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 23:40:44.295370 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:40:44.319278 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 13 23:40:44.320892 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 23:40:44.322008 systemd[1]: Reached target sysinit.target - System Initialization. May 13 23:40:44.323258 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 13 23:40:44.324559 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 13 23:40:44.326019 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 13 23:40:44.327210 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 13 23:40:44.328492 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 13 23:40:44.329924 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 13 23:40:44.329959 systemd[1]: Reached target paths.target - Path Units. May 13 23:40:44.330878 systemd[1]: Reached target timers.target - Timer Units. May 13 23:40:44.332775 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 13 23:40:44.335107 systemd[1]: Starting docker.socket - Docker Socket for the API... May 13 23:40:44.338228 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 13 23:40:44.339723 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 13 23:40:44.340988 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 13 23:40:44.344048 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 13 23:40:44.345438 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 13 23:40:44.347674 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 13 23:40:44.349285 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 13 23:40:44.350473 systemd[1]: Reached target sockets.target - Socket Units. May 13 23:40:44.351477 systemd[1]: Reached target basic.target - Basic System. May 13 23:40:44.352540 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 13 23:40:44.352573 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 13 23:40:44.353470 systemd[1]: Starting containerd.service - containerd container runtime... May 13 23:40:44.355229 lvm[1459]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 23:40:44.356828 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 13 23:40:44.358858 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 13 23:40:44.363887 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
May 13 23:40:44.364937 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 13 23:40:44.365989 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 13 23:40:44.368940 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 13 23:40:44.371267 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 13 23:40:44.373779 jq[1462]: false May 13 23:40:44.376867 systemd[1]: Starting systemd-logind.service - User Login Management... May 13 23:40:44.378949 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 13 23:40:44.379433 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 13 23:40:44.381883 systemd[1]: Starting update-engine.service - Update Engine... May 13 23:40:44.386622 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 13 23:40:44.387892 dbus-daemon[1461]: [system] SELinux support is enabled May 13 23:40:44.394904 extend-filesystems[1463]: Found loop3 May 13 23:40:44.394904 extend-filesystems[1463]: Found loop4 May 13 23:40:44.394904 extend-filesystems[1463]: Found loop5 May 13 23:40:44.394904 extend-filesystems[1463]: Found vda May 13 23:40:44.394904 extend-filesystems[1463]: Found vda1 May 13 23:40:44.394904 extend-filesystems[1463]: Found vda2 May 13 23:40:44.394904 extend-filesystems[1463]: Found vda3 May 13 23:40:44.394904 extend-filesystems[1463]: Found usr May 13 23:40:44.394904 extend-filesystems[1463]: Found vda4 May 13 23:40:44.394904 extend-filesystems[1463]: Found vda6 May 13 23:40:44.394904 extend-filesystems[1463]: Found vda7 May 13 23:40:44.394904 extend-filesystems[1463]: Found vda9 May 13 23:40:44.394904 extend-filesystems[1463]: Checking size of /dev/vda9 May 13 23:40:44.394336 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 13 23:40:44.435086 extend-filesystems[1463]: Resized partition /dev/vda9 May 13 23:40:44.401172 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 13 23:40:44.436968 update_engine[1472]: I20250513 23:40:44.432495 1472 main.cc:92] Flatcar Update Engine starting May 13 23:40:44.444724 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1394) May 13 23:40:44.444821 extend-filesystems[1490]: resize2fs 1.47.2 (1-Jan-2025) May 13 23:40:44.451815 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 13 23:40:44.406692 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 13 23:40:44.451930 update_engine[1472]: I20250513 23:40:44.438100 1472 update_check_scheduler.cc:74] Next update check in 11m35s May 13 23:40:44.451982 jq[1477]: true May 13 23:40:44.406909 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 13 23:40:44.452181 jq[1482]: true May 13 23:40:44.407202 systemd[1]: motdgen.service: Deactivated successfully. May 13 23:40:44.407373 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 13 23:40:44.414165 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
May 13 23:40:44.414347 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 13 23:40:44.422383 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 13 23:40:44.422436 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 13 23:40:44.425442 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 13 23:40:44.425467 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 13 23:40:44.440847 systemd[1]: Started update-engine.service - Update Engine. May 13 23:40:44.448067 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 13 23:40:44.474018 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 13 23:40:44.481716 extend-filesystems[1490]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 13 23:40:44.481716 extend-filesystems[1490]: old_desc_blocks = 1, new_desc_blocks = 1 May 13 23:40:44.481716 extend-filesystems[1490]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 13 23:40:44.494572 extend-filesystems[1463]: Resized filesystem in /dev/vda9 May 13 23:40:44.483313 (ntainerd)[1493]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 13 23:40:44.484014 systemd-logind[1470]: Watching system buttons on /dev/input/event0 (Power Button) May 13 23:40:44.484131 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 23:40:44.485439 systemd-logind[1470]: New seat seat0. May 13 23:40:44.485747 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 13 23:40:44.492141 systemd[1]: Started systemd-logind.service - User Login Management. May 13 23:40:44.513544 locksmithd[1491]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 23:40:44.526829 bash[1516]: Updated "/home/core/.ssh/authorized_keys" May 13 23:40:44.527913 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 13 23:40:44.531797 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
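
[Editorial note] extend-filesystems grows /dev/vda9 online from 553472 to 1864699 blocks at 4 KiB each, i.e. from roughly 2.1 GiB to roughly 7.1 GiB. A quick sanity check of that arithmetic:

# Sanity-check the EXT4 online resize reported above (4 KiB block size).
BLOCK = 4096
old_blocks, new_blocks = 553472, 1864699

def to_gib(blocks: int) -> float:
    return blocks * BLOCK / 2**30

print(f"before: {to_gib(old_blocks):.2f} GiB")  # ~2.11 GiB
print(f"after:  {to_gib(new_blocks):.2f} GiB")  # ~7.11 GiB
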
May 13 23:40:44.679997 containerd[1493]: time="2025-05-13T23:40:44Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 13 23:40:44.681048 containerd[1493]: time="2025-05-13T23:40:44.681017064Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 May 13 23:40:44.692061 containerd[1493]: time="2025-05-13T23:40:44.690427064Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.68µs" May 13 23:40:44.692061 containerd[1493]: time="2025-05-13T23:40:44.690668384Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 13 23:40:44.692061 containerd[1493]: time="2025-05-13T23:40:44.690692024Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 13 23:40:44.692061 containerd[1493]: time="2025-05-13T23:40:44.690860064Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 13 23:40:44.692061 containerd[1493]: time="2025-05-13T23:40:44.690878144Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 13 23:40:44.692061 containerd[1493]: time="2025-05-13T23:40:44.690902424Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 13 23:40:44.692061 containerd[1493]: time="2025-05-13T23:40:44.690948624Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 13 23:40:44.692061 containerd[1493]: time="2025-05-13T23:40:44.690961064Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 13 23:40:44.692061 containerd[1493]: time="2025-05-13T23:40:44.691231944Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 13 23:40:44.692061 containerd[1493]: time="2025-05-13T23:40:44.691244944Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 13 23:40:44.692061 containerd[1493]: time="2025-05-13T23:40:44.691256184Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 13 23:40:44.692061 containerd[1493]: time="2025-05-13T23:40:44.691264024Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 13 23:40:44.692352 containerd[1493]: time="2025-05-13T23:40:44.691334584Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 13 23:40:44.692352 containerd[1493]: time="2025-05-13T23:40:44.691507824Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 13 23:40:44.692352 containerd[1493]: time="2025-05-13T23:40:44.691548624Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 May 13 23:40:44.692352 containerd[1493]: time="2025-05-13T23:40:44.691560224Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 13 23:40:44.692352 containerd[1493]: time="2025-05-13T23:40:44.691599144Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 13 23:40:44.692352 containerd[1493]: time="2025-05-13T23:40:44.691863344Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 13 23:40:44.692352 containerd[1493]: time="2025-05-13T23:40:44.691926824Z" level=info msg="metadata content store policy set" policy=shared May 13 23:40:44.695215 containerd[1493]: time="2025-05-13T23:40:44.695187144Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 13 23:40:44.695346 containerd[1493]: time="2025-05-13T23:40:44.695328904Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 13 23:40:44.695412 containerd[1493]: time="2025-05-13T23:40:44.695390544Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 13 23:40:44.695471 containerd[1493]: time="2025-05-13T23:40:44.695458584Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 13 23:40:44.695519 containerd[1493]: time="2025-05-13T23:40:44.695507864Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 13 23:40:44.695582 containerd[1493]: time="2025-05-13T23:40:44.695568024Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 13 23:40:44.695645 containerd[1493]: time="2025-05-13T23:40:44.695631384Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 13 23:40:44.695715 containerd[1493]: time="2025-05-13T23:40:44.695686944Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 13 23:40:44.695770 containerd[1493]: time="2025-05-13T23:40:44.695757264Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 13 23:40:44.695821 containerd[1493]: time="2025-05-13T23:40:44.695808944Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 13 23:40:44.695873 containerd[1493]: time="2025-05-13T23:40:44.695860384Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 13 23:40:44.695951 containerd[1493]: time="2025-05-13T23:40:44.695935344Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 13 23:40:44.696115 containerd[1493]: time="2025-05-13T23:40:44.696095944Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 13 23:40:44.696187 containerd[1493]: time="2025-05-13T23:40:44.696171544Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 13 23:40:44.696249 containerd[1493]: time="2025-05-13T23:40:44.696235064Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 13 23:40:44.696301 containerd[1493]: time="2025-05-13T23:40:44.696287984Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff 
type=io.containerd.grpc.v1 May 13 23:40:44.696353 containerd[1493]: time="2025-05-13T23:40:44.696340544Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 13 23:40:44.696414 containerd[1493]: time="2025-05-13T23:40:44.696400424Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 13 23:40:44.696474 containerd[1493]: time="2025-05-13T23:40:44.696460584Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 13 23:40:44.696537 containerd[1493]: time="2025-05-13T23:40:44.696512544Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 13 23:40:44.696593 containerd[1493]: time="2025-05-13T23:40:44.696577544Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 13 23:40:44.696652 containerd[1493]: time="2025-05-13T23:40:44.696639064Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 13 23:40:44.696725 containerd[1493]: time="2025-05-13T23:40:44.696691704Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 13 23:40:44.697025 containerd[1493]: time="2025-05-13T23:40:44.697008304Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 13 23:40:44.697082 containerd[1493]: time="2025-05-13T23:40:44.697069504Z" level=info msg="Start snapshots syncer" May 13 23:40:44.697147 containerd[1493]: time="2025-05-13T23:40:44.697132104Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 13 23:40:44.697558 containerd[1493]: time="2025-05-13T23:40:44.697509184Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 13 23:40:44.697754 containerd[1493]: time="2025-05-13T23:40:44.697732064Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 13 23:40:44.697875 containerd[1493]: time="2025-05-13T23:40:44.697859664Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 13 23:40:44.698032 containerd[1493]: time="2025-05-13T23:40:44.698012504Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 13 23:40:44.698147 containerd[1493]: time="2025-05-13T23:40:44.698131224Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 13 23:40:44.698205 containerd[1493]: time="2025-05-13T23:40:44.698191984Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 13 23:40:44.698257 containerd[1493]: time="2025-05-13T23:40:44.698243624Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 13 23:40:44.698330 containerd[1493]: time="2025-05-13T23:40:44.698314504Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 13 23:40:44.698382 containerd[1493]: time="2025-05-13T23:40:44.698369664Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 13 23:40:44.698434 containerd[1493]: time="2025-05-13T23:40:44.698420544Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 13 23:40:44.698502 containerd[1493]: time="2025-05-13T23:40:44.698486704Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 13 23:40:44.698589 containerd[1493]: 
time="2025-05-13T23:40:44.698573504Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 13 23:40:44.698644 containerd[1493]: time="2025-05-13T23:40:44.698630944Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 13 23:40:44.698750 containerd[1493]: time="2025-05-13T23:40:44.698736864Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 13 23:40:44.698888 containerd[1493]: time="2025-05-13T23:40:44.698868024Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 13 23:40:44.698942 containerd[1493]: time="2025-05-13T23:40:44.698928544Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 13 23:40:44.698992 containerd[1493]: time="2025-05-13T23:40:44.698977584Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 13 23:40:44.699038 containerd[1493]: time="2025-05-13T23:40:44.699025704Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 13 23:40:44.699098 containerd[1493]: time="2025-05-13T23:40:44.699084064Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 13 23:40:44.699151 containerd[1493]: time="2025-05-13T23:40:44.699138264Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 13 23:40:44.699267 containerd[1493]: time="2025-05-13T23:40:44.699255624Z" level=info msg="runtime interface created" May 13 23:40:44.699309 containerd[1493]: time="2025-05-13T23:40:44.699298504Z" level=info msg="created NRI interface" May 13 23:40:44.699360 containerd[1493]: time="2025-05-13T23:40:44.699347184Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 13 23:40:44.699411 containerd[1493]: time="2025-05-13T23:40:44.699399704Z" level=info msg="Connect containerd service" May 13 23:40:44.699499 containerd[1493]: time="2025-05-13T23:40:44.699484264Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 13 23:40:44.700203 containerd[1493]: time="2025-05-13T23:40:44.700172744Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 23:40:44.799846 containerd[1493]: time="2025-05-13T23:40:44.799740264Z" level=info msg="Start subscribing containerd event" May 13 23:40:44.799846 containerd[1493]: time="2025-05-13T23:40:44.799814024Z" level=info msg="Start recovering state" May 13 23:40:44.799960 containerd[1493]: time="2025-05-13T23:40:44.799897824Z" level=info msg="Start event monitor" May 13 23:40:44.799960 containerd[1493]: time="2025-05-13T23:40:44.799910864Z" level=info msg="Start cni network conf syncer for default" May 13 23:40:44.799960 containerd[1493]: time="2025-05-13T23:40:44.799917624Z" level=info msg="Start streaming server" May 13 23:40:44.799960 containerd[1493]: time="2025-05-13T23:40:44.799925384Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 13 23:40:44.799960 containerd[1493]: 
time="2025-05-13T23:40:44.799932024Z" level=info msg="runtime interface starting up..." May 13 23:40:44.799960 containerd[1493]: time="2025-05-13T23:40:44.799937784Z" level=info msg="starting plugins..." May 13 23:40:44.799960 containerd[1493]: time="2025-05-13T23:40:44.799949984Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 13 23:40:44.800245 containerd[1493]: time="2025-05-13T23:40:44.800216904Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 23:40:44.800332 containerd[1493]: time="2025-05-13T23:40:44.800318984Z" level=info msg=serving... address=/run/containerd/containerd.sock May 13 23:40:44.800443 containerd[1493]: time="2025-05-13T23:40:44.800428464Z" level=info msg="containerd successfully booted in 0.120868s" May 13 23:40:44.800536 systemd[1]: Started containerd.service - containerd container runtime. May 13 23:40:45.370879 systemd-networkd[1410]: eth0: Gained IPv6LL May 13 23:40:45.373208 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 13 23:40:45.376266 systemd[1]: Reached target network-online.target - Network is Online. May 13 23:40:45.379778 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 13 23:40:45.382340 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:40:45.392897 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 13 23:40:45.414058 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 13 23:40:45.415729 systemd[1]: coreos-metadata.service: Deactivated successfully. May 13 23:40:45.415897 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 13 23:40:45.418349 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 13 23:40:45.882664 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:40:45.898047 (kubelet)[1564]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:40:45.915642 sshd_keygen[1478]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 23:40:45.935800 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 13 23:40:45.938801 systemd[1]: Starting issuegen.service - Generate /run/issue... May 13 23:40:45.956514 systemd[1]: issuegen.service: Deactivated successfully. May 13 23:40:45.956799 systemd[1]: Finished issuegen.service - Generate /run/issue. May 13 23:40:45.959904 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 13 23:40:45.981647 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 13 23:40:45.984673 systemd[1]: Started getty@tty1.service - Getty on tty1. May 13 23:40:45.986870 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 13 23:40:45.988228 systemd[1]: Reached target getty.target - Login Prompts. May 13 23:40:45.989324 systemd[1]: Reached target multi-user.target - Multi-User System. May 13 23:40:45.990446 systemd[1]: Startup finished in 573ms (kernel) + 4.436s (initrd) + 3.448s (userspace) = 8.458s. 
May 13 23:40:46.421496 kubelet[1564]: E0513 23:40:46.421442 1564 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:40:46.424116 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:40:46.424271 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:40:46.424603 systemd[1]: kubelet.service: Consumed 823ms CPU time, 242.1M memory peak. May 13 23:40:50.843410 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 13 23:40:50.844726 systemd[1]: Started sshd@0-10.0.0.43:22-10.0.0.1:44168.service - OpenSSH per-connection server daemon (10.0.0.1:44168). May 13 23:40:50.924935 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 44168 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:40:50.926807 sshd-session[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:40:50.934722 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 13 23:40:50.935858 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 13 23:40:50.941107 systemd-logind[1470]: New session 1 of user core. May 13 23:40:50.964680 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 13 23:40:50.967500 systemd[1]: Starting user@500.service - User Manager for UID 500... May 13 23:40:50.985073 (systemd)[1599]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 23:40:50.987195 systemd-logind[1470]: New session c1 of user core. May 13 23:40:51.092480 systemd[1599]: Queued start job for default target default.target. May 13 23:40:51.103680 systemd[1599]: Created slice app.slice - User Application Slice. May 13 23:40:51.103727 systemd[1599]: Reached target paths.target - Paths. May 13 23:40:51.103772 systemd[1599]: Reached target timers.target - Timers. May 13 23:40:51.105087 systemd[1599]: Starting dbus.socket - D-Bus User Message Bus Socket... May 13 23:40:51.114417 systemd[1599]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 13 23:40:51.114485 systemd[1599]: Reached target sockets.target - Sockets. May 13 23:40:51.114525 systemd[1599]: Reached target basic.target - Basic System. May 13 23:40:51.114554 systemd[1599]: Reached target default.target - Main User Target. May 13 23:40:51.114581 systemd[1599]: Startup finished in 121ms. May 13 23:40:51.114804 systemd[1]: Started user@500.service - User Manager for UID 500. May 13 23:40:51.116328 systemd[1]: Started session-1.scope - Session 1 of User core. May 13 23:40:51.179644 systemd[1]: Started sshd@1-10.0.0.43:22-10.0.0.1:44172.service - OpenSSH per-connection server daemon (10.0.0.1:44172). May 13 23:40:51.241763 sshd[1610]: Accepted publickey for core from 10.0.0.1 port 44172 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:40:51.243091 sshd-session[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:40:51.247033 systemd-logind[1470]: New session 2 of user core. May 13 23:40:51.255893 systemd[1]: Started session-2.scope - Session 2 of User core. 
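The kubelet exit above is the expected first-boot state: /var/lib/kubelet/config.yaml does not exist until provisioning (presumably the install.sh run later in this log) writes one. Purely as a hedged sketch, not the file this node eventually received, a minimal KubeletConfiguration for a containerd node of this kind has the following shape.

    # /var/lib/kubelet/config.yaml, illustrative sketch only
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # matches SystemdCgroup=true in the containerd CRI settings logged earlier
    cgroupDriver: systemd
    # the same socket containerd is serving on in this log
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock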
May 13 23:40:51.309107 sshd[1612]: Connection closed by 10.0.0.1 port 44172 May 13 23:40:51.308637 sshd-session[1610]: pam_unix(sshd:session): session closed for user core May 13 23:40:51.323319 systemd[1]: sshd@1-10.0.0.43:22-10.0.0.1:44172.service: Deactivated successfully. May 13 23:40:51.325047 systemd[1]: session-2.scope: Deactivated successfully. May 13 23:40:51.326648 systemd-logind[1470]: Session 2 logged out. Waiting for processes to exit. May 13 23:40:51.329040 systemd[1]: Started sshd@2-10.0.0.43:22-10.0.0.1:44180.service - OpenSSH per-connection server daemon (10.0.0.1:44180). May 13 23:40:51.329813 systemd-logind[1470]: Removed session 2. May 13 23:40:51.389422 sshd[1617]: Accepted publickey for core from 10.0.0.1 port 44180 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:40:51.390683 sshd-session[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:40:51.394974 systemd-logind[1470]: New session 3 of user core. May 13 23:40:51.400912 systemd[1]: Started session-3.scope - Session 3 of User core. May 13 23:40:51.450540 sshd[1620]: Connection closed by 10.0.0.1 port 44180 May 13 23:40:51.450940 sshd-session[1617]: pam_unix(sshd:session): session closed for user core May 13 23:40:51.462063 systemd[1]: sshd@2-10.0.0.43:22-10.0.0.1:44180.service: Deactivated successfully. May 13 23:40:51.463650 systemd[1]: session-3.scope: Deactivated successfully. May 13 23:40:51.464927 systemd-logind[1470]: Session 3 logged out. Waiting for processes to exit. May 13 23:40:51.466146 systemd[1]: Started sshd@3-10.0.0.43:22-10.0.0.1:44186.service - OpenSSH per-connection server daemon (10.0.0.1:44186). May 13 23:40:51.468180 systemd-logind[1470]: Removed session 3. May 13 23:40:51.524172 sshd[1625]: Accepted publickey for core from 10.0.0.1 port 44186 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:40:51.525525 sshd-session[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:40:51.532740 systemd-logind[1470]: New session 4 of user core. May 13 23:40:51.546927 systemd[1]: Started session-4.scope - Session 4 of User core. May 13 23:40:51.600957 sshd[1628]: Connection closed by 10.0.0.1 port 44186 May 13 23:40:51.601313 sshd-session[1625]: pam_unix(sshd:session): session closed for user core May 13 23:40:51.615382 systemd[1]: Started sshd@4-10.0.0.43:22-10.0.0.1:44194.service - OpenSSH per-connection server daemon (10.0.0.1:44194). May 13 23:40:51.615834 systemd[1]: sshd@3-10.0.0.43:22-10.0.0.1:44186.service: Deactivated successfully. May 13 23:40:51.617195 systemd[1]: session-4.scope: Deactivated successfully. May 13 23:40:51.619388 systemd-logind[1470]: Session 4 logged out. Waiting for processes to exit. May 13 23:40:51.625749 systemd-logind[1470]: Removed session 4. May 13 23:40:51.672487 sshd[1631]: Accepted publickey for core from 10.0.0.1 port 44194 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:40:51.672976 sshd-session[1631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:40:51.677530 systemd-logind[1470]: New session 5 of user core. May 13 23:40:51.690924 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 13 23:40:51.753779 sudo[1637]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 13 23:40:51.754055 sudo[1637]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:40:51.767732 sudo[1637]: pam_unix(sudo:session): session closed for user root May 13 23:40:51.770205 sshd[1636]: Connection closed by 10.0.0.1 port 44194 May 13 23:40:51.770611 sshd-session[1631]: pam_unix(sshd:session): session closed for user core May 13 23:40:51.784179 systemd[1]: sshd@4-10.0.0.43:22-10.0.0.1:44194.service: Deactivated successfully. May 13 23:40:51.785798 systemd[1]: session-5.scope: Deactivated successfully. May 13 23:40:51.787181 systemd-logind[1470]: Session 5 logged out. Waiting for processes to exit. May 13 23:40:51.788581 systemd[1]: Started sshd@5-10.0.0.43:22-10.0.0.1:44204.service - OpenSSH per-connection server daemon (10.0.0.1:44204). May 13 23:40:51.789452 systemd-logind[1470]: Removed session 5. May 13 23:40:51.858104 sshd[1642]: Accepted publickey for core from 10.0.0.1 port 44204 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:40:51.859513 sshd-session[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:40:51.863813 systemd-logind[1470]: New session 6 of user core. May 13 23:40:51.869874 systemd[1]: Started session-6.scope - Session 6 of User core. May 13 23:40:51.922968 sudo[1647]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 13 23:40:51.923257 sudo[1647]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:40:51.926645 sudo[1647]: pam_unix(sudo:session): session closed for user root May 13 23:40:51.931490 sudo[1646]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 13 23:40:51.931792 sudo[1646]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:40:51.940369 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 23:40:51.982352 augenrules[1669]: No rules May 13 23:40:51.983807 systemd[1]: audit-rules.service: Deactivated successfully. May 13 23:40:51.984041 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 23:40:51.985625 sudo[1646]: pam_unix(sudo:session): session closed for user root May 13 23:40:51.987584 sshd[1645]: Connection closed by 10.0.0.1 port 44204 May 13 23:40:51.988417 sshd-session[1642]: pam_unix(sshd:session): session closed for user core May 13 23:40:52.003162 systemd[1]: sshd@5-10.0.0.43:22-10.0.0.1:44204.service: Deactivated successfully. May 13 23:40:52.006460 systemd[1]: session-6.scope: Deactivated successfully. May 13 23:40:52.007910 systemd-logind[1470]: Session 6 logged out. Waiting for processes to exit. May 13 23:40:52.009372 systemd[1]: Started sshd@6-10.0.0.43:22-10.0.0.1:44206.service - OpenSSH per-connection server daemon (10.0.0.1:44206). May 13 23:40:52.010343 systemd-logind[1470]: Removed session 6. May 13 23:40:52.065460 sshd[1677]: Accepted publickey for core from 10.0.0.1 port 44206 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:40:52.066839 sshd-session[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:40:52.071363 systemd-logind[1470]: New session 7 of user core. May 13 23:40:52.087892 systemd[1]: Started session-7.scope - Session 7 of User core. 
May 13 23:40:52.140442 sudo[1681]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 23:40:52.141098 sudo[1681]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:40:52.156665 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 13 23:40:52.201798 systemd[1]: coreos-metadata.service: Deactivated successfully. May 13 23:40:52.202059 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 13 23:40:52.737949 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:40:52.738186 systemd[1]: kubelet.service: Consumed 823ms CPU time, 242.1M memory peak. May 13 23:40:52.740500 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:40:52.765673 systemd[1]: Reload requested from client PID 1734 ('systemctl') (unit session-7.scope)... May 13 23:40:52.765693 systemd[1]: Reloading... May 13 23:40:52.845732 zram_generator::config[1777]: No configuration found. May 13 23:40:53.090396 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:40:53.168161 systemd[1]: Reloading finished in 402 ms. May 13 23:40:53.221059 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 13 23:40:53.221131 systemd[1]: kubelet.service: Failed with result 'signal'. May 13 23:40:53.221426 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:40:53.221477 systemd[1]: kubelet.service: Consumed 90ms CPU time, 82.4M memory peak. May 13 23:40:53.224124 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:40:53.338761 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:40:53.354413 (kubelet)[1822]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 23:40:53.406107 kubelet[1822]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:40:53.407721 kubelet[1822]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 23:40:53.407721 kubelet[1822]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
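The restarted kubelet above again warns that KUBELET_EXTRA_ARGS is referenced but unset, and that --container-runtime-endpoint, --pod-infra-container-image and --volume-plugin-dir are deprecated as command-line flags. One common way such variables are populated is a kubeadm-style systemd drop-in; the sketch below is hypothetical (path and contents are an assumption, not taken from this host) and only illustrates how the flags named in these warnings are typically supplied.

    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, hypothetical drop-in
    [Service]
    Environment="KUBELET_KUBEADM_ARGS=--container-runtime-endpoint=unix:///run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.10"
    Environment="KUBELET_EXTRA_ARGS="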
May 13 23:40:53.407721 kubelet[1822]: I0513 23:40:53.406742 1822 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 23:40:53.878839 kubelet[1822]: I0513 23:40:53.878801 1822 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 13 23:40:53.880388 kubelet[1822]: I0513 23:40:53.878973 1822 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 23:40:53.880388 kubelet[1822]: I0513 23:40:53.879203 1822 server.go:927] "Client rotation is on, will bootstrap in background" May 13 23:40:53.914460 kubelet[1822]: I0513 23:40:53.914420 1822 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 23:40:53.928815 kubelet[1822]: I0513 23:40:53.928784 1822 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 13 23:40:53.929211 kubelet[1822]: I0513 23:40:53.929169 1822 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 23:40:53.929371 kubelet[1822]: I0513 23:40:53.929199 1822 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.43","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 13 23:40:53.929458 kubelet[1822]: I0513 23:40:53.929444 1822 topology_manager.go:138] "Creating topology manager with none policy" May 13 23:40:53.929458 kubelet[1822]: I0513 23:40:53.929453 1822 container_manager_linux.go:301] "Creating device plugin manager" May 13 23:40:53.929653 kubelet[1822]: I0513 23:40:53.929640 1822 state_mem.go:36] "Initialized new in-memory state store" May 13 23:40:53.931666 kubelet[1822]: I0513 23:40:53.930849 1822 kubelet.go:400] "Attempting to sync node with API server" May 13 23:40:53.931666 kubelet[1822]: I0513 23:40:53.930872 1822 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 23:40:53.931666 kubelet[1822]: I0513 23:40:53.931232 1822 kubelet.go:312] "Adding apiserver pod source" May 13 23:40:53.931666 
kubelet[1822]: I0513 23:40:53.931377 1822 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 23:40:53.931666 kubelet[1822]: E0513 23:40:53.931522 1822 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:40:53.931666 kubelet[1822]: E0513 23:40:53.931585 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:40:53.932849 kubelet[1822]: I0513 23:40:53.932747 1822 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 13 23:40:53.933328 kubelet[1822]: I0513 23:40:53.933291 1822 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 23:40:53.933385 kubelet[1822]: W0513 23:40:53.933363 1822 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 13 23:40:53.934604 kubelet[1822]: I0513 23:40:53.934581 1822 server.go:1264] "Started kubelet" May 13 23:40:53.935486 kubelet[1822]: I0513 23:40:53.935442 1822 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 23:40:53.936755 kubelet[1822]: I0513 23:40:53.936725 1822 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 23:40:53.938262 kubelet[1822]: I0513 23:40:53.938096 1822 server.go:455] "Adding debug handlers to kubelet server" May 13 23:40:53.938905 kubelet[1822]: I0513 23:40:53.938742 1822 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 23:40:53.938981 kubelet[1822]: I0513 23:40:53.938967 1822 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 23:40:53.939618 kubelet[1822]: I0513 23:40:53.939088 1822 volume_manager.go:291] "Starting Kubelet Volume Manager" May 13 23:40:53.939618 kubelet[1822]: I0513 23:40:53.939162 1822 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 23:40:53.941203 kubelet[1822]: I0513 23:40:53.940814 1822 reconciler.go:26] "Reconciler: start to sync state" May 13 23:40:53.945376 kubelet[1822]: I0513 23:40:53.945344 1822 factory.go:221] Registration of the systemd container factory successfully May 13 23:40:53.945476 kubelet[1822]: I0513 23:40:53.945457 1822 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 23:40:53.946481 kubelet[1822]: I0513 23:40:53.946456 1822 factory.go:221] Registration of the containerd container factory successfully May 13 23:40:53.946560 kubelet[1822]: E0513 23:40:53.946536 1822 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 23:40:53.946854 kubelet[1822]: W0513 23:40:53.946831 1822 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.43" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope May 13 23:40:53.946888 kubelet[1822]: E0513 23:40:53.946870 1822 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.43" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope May 13 23:40:53.947122 kubelet[1822]: E0513 23:40:53.946967 1822 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.43.183f3aa3678f0348 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.43,UID:10.0.0.43,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.43,},FirstTimestamp:2025-05-13 23:40:53.934547784 +0000 UTC m=+0.571735041,LastTimestamp:2025-05-13 23:40:53.934547784 +0000 UTC m=+0.571735041,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.43,}" May 13 23:40:53.950496 kubelet[1822]: W0513 23:40:53.949934 1822 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope May 13 23:40:53.950496 kubelet[1822]: E0513 23:40:53.950000 1822 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope May 13 23:40:53.950496 kubelet[1822]: W0513 23:40:53.950106 1822 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope May 13 23:40:53.950496 kubelet[1822]: E0513 23:40:53.950124 1822 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope May 13 23:40:53.964418 kubelet[1822]: E0513 23:40:53.964259 1822 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.43\" not found" node="10.0.0.43" May 13 23:40:53.964817 kubelet[1822]: I0513 23:40:53.964787 1822 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 23:40:53.964817 kubelet[1822]: I0513 23:40:53.964804 1822 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 23:40:53.964817 kubelet[1822]: I0513 23:40:53.964822 1822 state_mem.go:36] "Initialized new in-memory state store" May 13 23:40:54.030853 kubelet[1822]: I0513 23:40:54.030743 1822 policy_none.go:49] "None policy: Start" May 13 23:40:54.032072 kubelet[1822]: I0513 23:40:54.032029 1822 memory_manager.go:170] "Starting memorymanager" policy="None" 
May 13 23:40:54.032072 kubelet[1822]: I0513 23:40:54.032052 1822 state_mem.go:35] "Initializing new in-memory state store" May 13 23:40:54.040139 kubelet[1822]: I0513 23:40:54.040100 1822 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.43" May 13 23:40:54.040495 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 13 23:40:54.048177 kubelet[1822]: I0513 23:40:54.048129 1822 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.43" May 13 23:40:54.058144 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 13 23:40:54.061741 kubelet[1822]: E0513 23:40:54.061679 1822 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.43\" not found" May 13 23:40:54.063282 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 13 23:40:54.064637 kubelet[1822]: I0513 23:40:54.064482 1822 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 23:40:54.065768 kubelet[1822]: I0513 23:40:54.065748 1822 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 23:40:54.065965 kubelet[1822]: I0513 23:40:54.065954 1822 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 23:40:54.066053 kubelet[1822]: I0513 23:40:54.066041 1822 kubelet.go:2337] "Starting kubelet main sync loop" May 13 23:40:54.066159 kubelet[1822]: E0513 23:40:54.066139 1822 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 23:40:54.070351 kubelet[1822]: I0513 23:40:54.070165 1822 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 23:40:54.070677 kubelet[1822]: I0513 23:40:54.070644 1822 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 23:40:54.071022 kubelet[1822]: I0513 23:40:54.070963 1822 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 23:40:54.071882 kubelet[1822]: E0513 23:40:54.071864 1822 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.43\" not found" May 13 23:40:54.162356 kubelet[1822]: E0513 23:40:54.162296 1822 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.43\" not found" May 13 23:40:54.262979 kubelet[1822]: E0513 23:40:54.262925 1822 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.43\" not found" May 13 23:40:54.363907 kubelet[1822]: E0513 23:40:54.363860 1822 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.43\" not found" May 13 23:40:54.374997 sudo[1681]: pam_unix(sudo:session): session closed for user root May 13 23:40:54.376596 sshd[1680]: Connection closed by 10.0.0.1 port 44206 May 13 23:40:54.376923 sshd-session[1677]: pam_unix(sshd:session): session closed for user core May 13 23:40:54.380517 systemd[1]: sshd@6-10.0.0.43:22-10.0.0.1:44206.service: Deactivated successfully. May 13 23:40:54.382628 systemd[1]: session-7.scope: Deactivated successfully. May 13 23:40:54.382849 systemd[1]: session-7.scope: Consumed 481ms CPU time, 100.4M memory peak. May 13 23:40:54.384099 systemd-logind[1470]: Session 7 logged out. Waiting for processes to exit. 
May 13 23:40:54.386437 systemd-logind[1470]: Removed session 7. May 13 23:40:54.464612 kubelet[1822]: E0513 23:40:54.464445 1822 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.43\" not found" May 13 23:40:54.565103 kubelet[1822]: E0513 23:40:54.565022 1822 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.43\" not found" May 13 23:40:54.665762 kubelet[1822]: E0513 23:40:54.665678 1822 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.43\" not found" May 13 23:40:54.766679 kubelet[1822]: E0513 23:40:54.766545 1822 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.43\" not found" May 13 23:40:54.867259 kubelet[1822]: E0513 23:40:54.867195 1822 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.43\" not found" May 13 23:40:54.881354 kubelet[1822]: I0513 23:40:54.881318 1822 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" May 13 23:40:54.881509 kubelet[1822]: W0513 23:40:54.881475 1822 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 13 23:40:54.931930 kubelet[1822]: E0513 23:40:54.931891 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:40:54.967485 kubelet[1822]: E0513 23:40:54.967445 1822 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.43\" not found" May 13 23:40:55.068689 kubelet[1822]: E0513 23:40:55.068162 1822 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.43\" not found" May 13 23:40:55.168649 kubelet[1822]: E0513 23:40:55.168602 1822 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.43\" not found" May 13 23:40:55.269062 kubelet[1822]: E0513 23:40:55.269032 1822 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.43\" not found" May 13 23:40:55.369925 kubelet[1822]: I0513 23:40:55.369832 1822 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" May 13 23:40:55.370182 containerd[1493]: time="2025-05-13T23:40:55.370140904Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
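The repeated "no network config found in /etc/cni/net.d" and "No cni config template is specified" messages are normal until a network add-on (Cilium, whose pod is admitted just below) drops its own CNI configuration. For illustration only, and not what Cilium actually installs, a generic conflist using the pod CIDR the kubelet just pushed through CRI (192.168.1.0/24) would look like this:

    {
      "cniVersion": "1.0.0",
      "name": "containerd-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[{ "subnet": "192.168.1.0/24" }]],
            "routes": [{ "dst": "0.0.0.0/0" }]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }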
May 13 23:40:55.370457 kubelet[1822]: I0513 23:40:55.370300 1822 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" May 13 23:40:55.932890 kubelet[1822]: E0513 23:40:55.932836 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:40:55.932890 kubelet[1822]: I0513 23:40:55.932900 1822 apiserver.go:52] "Watching apiserver" May 13 23:40:55.945710 kubelet[1822]: I0513 23:40:55.945657 1822 topology_manager.go:215] "Topology Admit Handler" podUID="299d631a-134f-407d-9d2a-1f661715e0ff" podNamespace="kube-system" podName="cilium-llhk9" May 13 23:40:55.946054 kubelet[1822]: I0513 23:40:55.945839 1822 topology_manager.go:215] "Topology Admit Handler" podUID="ffe6dbc3-f966-47e4-a0df-5dcdefd6351c" podNamespace="kube-system" podName="kube-proxy-jnpsg" May 13 23:40:55.959174 systemd[1]: Created slice kubepods-besteffort-podffe6dbc3_f966_47e4_a0df_5dcdefd6351c.slice - libcontainer container kubepods-besteffort-podffe6dbc3_f966_47e4_a0df_5dcdefd6351c.slice. May 13 23:40:55.979432 systemd[1]: Created slice kubepods-burstable-pod299d631a_134f_407d_9d2a_1f661715e0ff.slice - libcontainer container kubepods-burstable-pod299d631a_134f_407d_9d2a_1f661715e0ff.slice. May 13 23:40:56.039950 kubelet[1822]: I0513 23:40:56.039914 1822 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 23:40:56.051827 kubelet[1822]: I0513 23:40:56.051792 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/299d631a-134f-407d-9d2a-1f661715e0ff-cilium-cgroup\") pod \"cilium-llhk9\" (UID: \"299d631a-134f-407d-9d2a-1f661715e0ff\") " pod="kube-system/cilium-llhk9" May 13 23:40:56.051931 kubelet[1822]: I0513 23:40:56.051834 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/299d631a-134f-407d-9d2a-1f661715e0ff-xtables-lock\") pod \"cilium-llhk9\" (UID: \"299d631a-134f-407d-9d2a-1f661715e0ff\") " pod="kube-system/cilium-llhk9" May 13 23:40:56.051931 kubelet[1822]: I0513 23:40:56.051855 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/299d631a-134f-407d-9d2a-1f661715e0ff-host-proc-sys-net\") pod \"cilium-llhk9\" (UID: \"299d631a-134f-407d-9d2a-1f661715e0ff\") " pod="kube-system/cilium-llhk9" May 13 23:40:56.051931 kubelet[1822]: I0513 23:40:56.051873 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/299d631a-134f-407d-9d2a-1f661715e0ff-host-proc-sys-kernel\") pod \"cilium-llhk9\" (UID: \"299d631a-134f-407d-9d2a-1f661715e0ff\") " pod="kube-system/cilium-llhk9" May 13 23:40:56.051931 kubelet[1822]: I0513 23:40:56.051930 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/299d631a-134f-407d-9d2a-1f661715e0ff-cilium-run\") pod \"cilium-llhk9\" (UID: \"299d631a-134f-407d-9d2a-1f661715e0ff\") " pod="kube-system/cilium-llhk9" May 13 23:40:56.052009 kubelet[1822]: I0513 23:40:56.051954 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/299d631a-134f-407d-9d2a-1f661715e0ff-bpf-maps\") pod \"cilium-llhk9\" (UID: \"299d631a-134f-407d-9d2a-1f661715e0ff\") " pod="kube-system/cilium-llhk9" May 13 23:40:56.052009 kubelet[1822]: I0513 23:40:56.051981 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/299d631a-134f-407d-9d2a-1f661715e0ff-etc-cni-netd\") pod \"cilium-llhk9\" (UID: \"299d631a-134f-407d-9d2a-1f661715e0ff\") " pod="kube-system/cilium-llhk9" May 13 23:40:56.052009 kubelet[1822]: I0513 23:40:56.052001 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/299d631a-134f-407d-9d2a-1f661715e0ff-lib-modules\") pod \"cilium-llhk9\" (UID: \"299d631a-134f-407d-9d2a-1f661715e0ff\") " pod="kube-system/cilium-llhk9" May 13 23:40:56.052071 kubelet[1822]: I0513 23:40:56.052015 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/299d631a-134f-407d-9d2a-1f661715e0ff-cilium-config-path\") pod \"cilium-llhk9\" (UID: \"299d631a-134f-407d-9d2a-1f661715e0ff\") " pod="kube-system/cilium-llhk9" May 13 23:40:56.052071 kubelet[1822]: I0513 23:40:56.052033 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ffe6dbc3-f966-47e4-a0df-5dcdefd6351c-kube-proxy\") pod \"kube-proxy-jnpsg\" (UID: \"ffe6dbc3-f966-47e4-a0df-5dcdefd6351c\") " pod="kube-system/kube-proxy-jnpsg" May 13 23:40:56.052109 kubelet[1822]: I0513 23:40:56.052066 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wzhf\" (UniqueName: \"kubernetes.io/projected/ffe6dbc3-f966-47e4-a0df-5dcdefd6351c-kube-api-access-9wzhf\") pod \"kube-proxy-jnpsg\" (UID: \"ffe6dbc3-f966-47e4-a0df-5dcdefd6351c\") " pod="kube-system/kube-proxy-jnpsg" May 13 23:40:56.052109 kubelet[1822]: I0513 23:40:56.052097 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/299d631a-134f-407d-9d2a-1f661715e0ff-hubble-tls\") pod \"cilium-llhk9\" (UID: \"299d631a-134f-407d-9d2a-1f661715e0ff\") " pod="kube-system/cilium-llhk9" May 13 23:40:56.052150 kubelet[1822]: I0513 23:40:56.052117 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdbwg\" (UniqueName: \"kubernetes.io/projected/299d631a-134f-407d-9d2a-1f661715e0ff-kube-api-access-tdbwg\") pod \"cilium-llhk9\" (UID: \"299d631a-134f-407d-9d2a-1f661715e0ff\") " pod="kube-system/cilium-llhk9" May 13 23:40:56.052150 kubelet[1822]: I0513 23:40:56.052135 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ffe6dbc3-f966-47e4-a0df-5dcdefd6351c-xtables-lock\") pod \"kube-proxy-jnpsg\" (UID: \"ffe6dbc3-f966-47e4-a0df-5dcdefd6351c\") " pod="kube-system/kube-proxy-jnpsg" May 13 23:40:56.052188 kubelet[1822]: I0513 23:40:56.052150 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ffe6dbc3-f966-47e4-a0df-5dcdefd6351c-lib-modules\") pod \"kube-proxy-jnpsg\" (UID: \"ffe6dbc3-f966-47e4-a0df-5dcdefd6351c\") " 
pod="kube-system/kube-proxy-jnpsg" May 13 23:40:56.052188 kubelet[1822]: I0513 23:40:56.052175 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/299d631a-134f-407d-9d2a-1f661715e0ff-hostproc\") pod \"cilium-llhk9\" (UID: \"299d631a-134f-407d-9d2a-1f661715e0ff\") " pod="kube-system/cilium-llhk9" May 13 23:40:56.052227 kubelet[1822]: I0513 23:40:56.052194 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/299d631a-134f-407d-9d2a-1f661715e0ff-cni-path\") pod \"cilium-llhk9\" (UID: \"299d631a-134f-407d-9d2a-1f661715e0ff\") " pod="kube-system/cilium-llhk9" May 13 23:40:56.052227 kubelet[1822]: I0513 23:40:56.052212 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/299d631a-134f-407d-9d2a-1f661715e0ff-clustermesh-secrets\") pod \"cilium-llhk9\" (UID: \"299d631a-134f-407d-9d2a-1f661715e0ff\") " pod="kube-system/cilium-llhk9" May 13 23:40:56.279634 containerd[1493]: time="2025-05-13T23:40:56.277997784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jnpsg,Uid:ffe6dbc3-f966-47e4-a0df-5dcdefd6351c,Namespace:kube-system,Attempt:0,}" May 13 23:40:56.291948 containerd[1493]: time="2025-05-13T23:40:56.291899704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-llhk9,Uid:299d631a-134f-407d-9d2a-1f661715e0ff,Namespace:kube-system,Attempt:0,}" May 13 23:40:56.933459 kubelet[1822]: E0513 23:40:56.933386 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:40:56.937399 containerd[1493]: time="2025-05-13T23:40:56.937275384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:40:56.941442 containerd[1493]: time="2025-05-13T23:40:56.941316624Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" May 13 23:40:56.943904 containerd[1493]: time="2025-05-13T23:40:56.943858464Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:40:56.945286 containerd[1493]: time="2025-05-13T23:40:56.945234424Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:40:56.946817 containerd[1493]: time="2025-05-13T23:40:56.946721264Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 13 23:40:56.947684 containerd[1493]: time="2025-05-13T23:40:56.947640664Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:40:56.948393 containerd[1493]: time="2025-05-13T23:40:56.948354464Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", 
repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 664.22596ms" May 13 23:40:56.953002 containerd[1493]: time="2025-05-13T23:40:56.952692824Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 657.59896ms" May 13 23:40:56.967255 containerd[1493]: time="2025-05-13T23:40:56.966640824Z" level=info msg="connecting to shim d4b8466c367523367bf59a92460c4f3760e176a6d0cd82467ced4a06e0e76490" address="unix:///run/containerd/s/343197577fbff16264d93a51df8575b03d4aa41b549d966d7d5bc19672e927c0" namespace=k8s.io protocol=ttrpc version=3 May 13 23:40:56.976990 containerd[1493]: time="2025-05-13T23:40:56.976928504Z" level=info msg="connecting to shim 467d5872a81711982eb9e7b9b434da5181a96c73442d5aad350281e41d70c7b9" address="unix:///run/containerd/s/ff903bddb7a19120a0f828f26d7b9191a3e43f955c748838e1e98974e0f4448d" namespace=k8s.io protocol=ttrpc version=3 May 13 23:40:56.992873 systemd[1]: Started cri-containerd-d4b8466c367523367bf59a92460c4f3760e176a6d0cd82467ced4a06e0e76490.scope - libcontainer container d4b8466c367523367bf59a92460c4f3760e176a6d0cd82467ced4a06e0e76490. May 13 23:40:56.998614 systemd[1]: Started cri-containerd-467d5872a81711982eb9e7b9b434da5181a96c73442d5aad350281e41d70c7b9.scope - libcontainer container 467d5872a81711982eb9e7b9b434da5181a96c73442d5aad350281e41d70c7b9. May 13 23:40:57.024626 containerd[1493]: time="2025-05-13T23:40:57.024580824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jnpsg,Uid:ffe6dbc3-f966-47e4-a0df-5dcdefd6351c,Namespace:kube-system,Attempt:0,} returns sandbox id \"d4b8466c367523367bf59a92460c4f3760e176a6d0cd82467ced4a06e0e76490\"" May 13 23:40:57.027074 containerd[1493]: time="2025-05-13T23:40:57.026867624Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 13 23:40:57.027074 containerd[1493]: time="2025-05-13T23:40:57.026889744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-llhk9,Uid:299d631a-134f-407d-9d2a-1f661715e0ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"467d5872a81711982eb9e7b9b434da5181a96c73442d5aad350281e41d70c7b9\"" May 13 23:40:57.160205 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2369905723.mount: Deactivated successfully. May 13 23:40:57.934369 kubelet[1822]: E0513 23:40:57.934313 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:40:58.036071 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3678946324.mount: Deactivated successfully. 
May 13 23:40:58.233838 containerd[1493]: time="2025-05-13T23:40:58.233717344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:40:58.234715 containerd[1493]: time="2025-05-13T23:40:58.234392984Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775707" May 13 23:40:58.235391 containerd[1493]: time="2025-05-13T23:40:58.235351504Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:40:58.237404 containerd[1493]: time="2025-05-13T23:40:58.237374784Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:40:58.238029 containerd[1493]: time="2025-05-13T23:40:58.237863064Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.21096068s" May 13 23:40:58.238029 containerd[1493]: time="2025-05-13T23:40:58.237888824Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\"" May 13 23:40:58.239814 containerd[1493]: time="2025-05-13T23:40:58.239746584Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 13 23:40:58.240767 containerd[1493]: time="2025-05-13T23:40:58.240741424Z" level=info msg="CreateContainer within sandbox \"d4b8466c367523367bf59a92460c4f3760e176a6d0cd82467ced4a06e0e76490\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 23:40:58.250168 containerd[1493]: time="2025-05-13T23:40:58.250111544Z" level=info msg="Container ec59a7557ba06464a7f3f33dd6d2cb9e616d4d45f48b81b7299c3d349352564b: CDI devices from CRI Config.CDIDevices: []" May 13 23:40:58.256889 containerd[1493]: time="2025-05-13T23:40:58.256847384Z" level=info msg="CreateContainer within sandbox \"d4b8466c367523367bf59a92460c4f3760e176a6d0cd82467ced4a06e0e76490\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ec59a7557ba06464a7f3f33dd6d2cb9e616d4d45f48b81b7299c3d349352564b\"" May 13 23:40:58.257578 containerd[1493]: time="2025-05-13T23:40:58.257549104Z" level=info msg="StartContainer for \"ec59a7557ba06464a7f3f33dd6d2cb9e616d4d45f48b81b7299c3d349352564b\"" May 13 23:40:58.260552 containerd[1493]: time="2025-05-13T23:40:58.259601104Z" level=info msg="connecting to shim ec59a7557ba06464a7f3f33dd6d2cb9e616d4d45f48b81b7299c3d349352564b" address="unix:///run/containerd/s/343197577fbff16264d93a51df8575b03d4aa41b549d966d7d5bc19672e927c0" protocol=ttrpc version=3 May 13 23:40:58.279864 systemd[1]: Started cri-containerd-ec59a7557ba06464a7f3f33dd6d2cb9e616d4d45f48b81b7299c3d349352564b.scope - libcontainer container ec59a7557ba06464a7f3f33dd6d2cb9e616d4d45f48b81b7299c3d349352564b. 
May 13 23:40:58.311742 containerd[1493]: time="2025-05-13T23:40:58.311691664Z" level=info msg="StartContainer for \"ec59a7557ba06464a7f3f33dd6d2cb9e616d4d45f48b81b7299c3d349352564b\" returns successfully" May 13 23:40:58.934784 kubelet[1822]: E0513 23:40:58.934741 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:40:59.094805 kubelet[1822]: I0513 23:40:59.094739 1822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jnpsg" podStartSLOduration=3.882160144 podStartE2EDuration="5.094721744s" podCreationTimestamp="2025-05-13 23:40:54 +0000 UTC" firstStartedPulling="2025-05-13 23:40:57.026333664 +0000 UTC m=+3.663520921" lastFinishedPulling="2025-05-13 23:40:58.238895264 +0000 UTC m=+4.876082521" observedRunningTime="2025-05-13 23:40:59.094442704 +0000 UTC m=+5.731629921" watchObservedRunningTime="2025-05-13 23:40:59.094721744 +0000 UTC m=+5.731909001" May 13 23:40:59.935427 kubelet[1822]: E0513 23:40:59.935331 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:00.935895 kubelet[1822]: E0513 23:41:00.935852 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:01.045981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount901513421.mount: Deactivated successfully. May 13 23:41:01.936242 kubelet[1822]: E0513 23:41:01.936205 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:02.160072 containerd[1493]: time="2025-05-13T23:41:02.159229184Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:41:02.161322 containerd[1493]: time="2025-05-13T23:41:02.161242064Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" May 13 23:41:02.162369 containerd[1493]: time="2025-05-13T23:41:02.162015824Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:41:02.164479 containerd[1493]: time="2025-05-13T23:41:02.164108504Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 3.92432596s" May 13 23:41:02.164479 containerd[1493]: time="2025-05-13T23:41:02.164147064Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 13 23:41:02.167884 containerd[1493]: time="2025-05-13T23:41:02.167847104Z" level=info msg="CreateContainer within sandbox \"467d5872a81711982eb9e7b9b434da5181a96c73442d5aad350281e41d70c7b9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 23:41:02.175214 containerd[1493]: time="2025-05-13T23:41:02.175166984Z" 
level=info msg="Container 0386f974e010bb25457eb4d98e9db9e20e8288d64b246c8f71911da03141d782: CDI devices from CRI Config.CDIDevices: []" May 13 23:41:02.185438 containerd[1493]: time="2025-05-13T23:41:02.185383024Z" level=info msg="CreateContainer within sandbox \"467d5872a81711982eb9e7b9b434da5181a96c73442d5aad350281e41d70c7b9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0386f974e010bb25457eb4d98e9db9e20e8288d64b246c8f71911da03141d782\"" May 13 23:41:02.186083 containerd[1493]: time="2025-05-13T23:41:02.185959624Z" level=info msg="StartContainer for \"0386f974e010bb25457eb4d98e9db9e20e8288d64b246c8f71911da03141d782\"" May 13 23:41:02.187645 containerd[1493]: time="2025-05-13T23:41:02.187178424Z" level=info msg="connecting to shim 0386f974e010bb25457eb4d98e9db9e20e8288d64b246c8f71911da03141d782" address="unix:///run/containerd/s/ff903bddb7a19120a0f828f26d7b9191a3e43f955c748838e1e98974e0f4448d" protocol=ttrpc version=3 May 13 23:41:02.216920 systemd[1]: Started cri-containerd-0386f974e010bb25457eb4d98e9db9e20e8288d64b246c8f71911da03141d782.scope - libcontainer container 0386f974e010bb25457eb4d98e9db9e20e8288d64b246c8f71911da03141d782. May 13 23:41:02.247197 containerd[1493]: time="2025-05-13T23:41:02.246906944Z" level=info msg="StartContainer for \"0386f974e010bb25457eb4d98e9db9e20e8288d64b246c8f71911da03141d782\" returns successfully" May 13 23:41:02.290437 systemd[1]: cri-containerd-0386f974e010bb25457eb4d98e9db9e20e8288d64b246c8f71911da03141d782.scope: Deactivated successfully. May 13 23:41:02.292198 containerd[1493]: time="2025-05-13T23:41:02.292153784Z" level=info msg="received exit event container_id:\"0386f974e010bb25457eb4d98e9db9e20e8288d64b246c8f71911da03141d782\" id:\"0386f974e010bb25457eb4d98e9db9e20e8288d64b246c8f71911da03141d782\" pid:2170 exited_at:{seconds:1747179662 nanos:291740544}" May 13 23:41:02.292518 containerd[1493]: time="2025-05-13T23:41:02.292403384Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0386f974e010bb25457eb4d98e9db9e20e8288d64b246c8f71911da03141d782\" id:\"0386f974e010bb25457eb4d98e9db9e20e8288d64b246c8f71911da03141d782\" pid:2170 exited_at:{seconds:1747179662 nanos:291740544}" May 13 23:41:02.316528 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0386f974e010bb25457eb4d98e9db9e20e8288d64b246c8f71911da03141d782-rootfs.mount: Deactivated successfully. 
May 13 23:41:02.937154 kubelet[1822]: E0513 23:41:02.937098 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:03.097404 containerd[1493]: time="2025-05-13T23:41:03.097340544Z" level=info msg="CreateContainer within sandbox \"467d5872a81711982eb9e7b9b434da5181a96c73442d5aad350281e41d70c7b9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 13 23:41:03.108243 containerd[1493]: time="2025-05-13T23:41:03.108183104Z" level=info msg="Container c912c209fa412ff2e9da861fb1768395a2e3ea6e69d665aefa10a23cb2987e3c: CDI devices from CRI Config.CDIDevices: []" May 13 23:41:03.116176 containerd[1493]: time="2025-05-13T23:41:03.116126584Z" level=info msg="CreateContainer within sandbox \"467d5872a81711982eb9e7b9b434da5181a96c73442d5aad350281e41d70c7b9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c912c209fa412ff2e9da861fb1768395a2e3ea6e69d665aefa10a23cb2987e3c\"" May 13 23:41:03.116763 containerd[1493]: time="2025-05-13T23:41:03.116725424Z" level=info msg="StartContainer for \"c912c209fa412ff2e9da861fb1768395a2e3ea6e69d665aefa10a23cb2987e3c\"" May 13 23:41:03.119896 containerd[1493]: time="2025-05-13T23:41:03.119854064Z" level=info msg="connecting to shim c912c209fa412ff2e9da861fb1768395a2e3ea6e69d665aefa10a23cb2987e3c" address="unix:///run/containerd/s/ff903bddb7a19120a0f828f26d7b9191a3e43f955c748838e1e98974e0f4448d" protocol=ttrpc version=3 May 13 23:41:03.152924 systemd[1]: Started cri-containerd-c912c209fa412ff2e9da861fb1768395a2e3ea6e69d665aefa10a23cb2987e3c.scope - libcontainer container c912c209fa412ff2e9da861fb1768395a2e3ea6e69d665aefa10a23cb2987e3c. May 13 23:41:03.192730 containerd[1493]: time="2025-05-13T23:41:03.192606024Z" level=info msg="StartContainer for \"c912c209fa412ff2e9da861fb1768395a2e3ea6e69d665aefa10a23cb2987e3c\" returns successfully" May 13 23:41:03.211281 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 23:41:03.211891 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 13 23:41:03.212155 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 13 23:41:03.215477 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 23:41:03.219388 containerd[1493]: time="2025-05-13T23:41:03.218202664Z" level=info msg="received exit event container_id:\"c912c209fa412ff2e9da861fb1768395a2e3ea6e69d665aefa10a23cb2987e3c\" id:\"c912c209fa412ff2e9da861fb1768395a2e3ea6e69d665aefa10a23cb2987e3c\" pid:2216 exited_at:{seconds:1747179663 nanos:217510544}" May 13 23:41:03.219388 containerd[1493]: time="2025-05-13T23:41:03.218331944Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c912c209fa412ff2e9da861fb1768395a2e3ea6e69d665aefa10a23cb2987e3c\" id:\"c912c209fa412ff2e9da861fb1768395a2e3ea6e69d665aefa10a23cb2987e3c\" pid:2216 exited_at:{seconds:1747179663 nanos:217510544}" May 13 23:41:03.216989 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 13 23:41:03.217356 systemd[1]: cri-containerd-c912c209fa412ff2e9da861fb1768395a2e3ea6e69d665aefa10a23cb2987e3c.scope: Deactivated successfully. May 13 23:41:03.242073 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c912c209fa412ff2e9da861fb1768395a2e3ea6e69d665aefa10a23cb2987e3c-rootfs.mount: Deactivated successfully. May 13 23:41:03.244905 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
May 13 23:41:03.937670 kubelet[1822]: E0513 23:41:03.937617 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:04.096575 containerd[1493]: time="2025-05-13T23:41:04.096519264Z" level=info msg="CreateContainer within sandbox \"467d5872a81711982eb9e7b9b434da5181a96c73442d5aad350281e41d70c7b9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 13 23:41:04.110407 containerd[1493]: time="2025-05-13T23:41:04.110348064Z" level=info msg="Container a9ad58cdd89daf53d09e46003862e6693caa9da88fb818b858384eafee2fa561: CDI devices from CRI Config.CDIDevices: []" May 13 23:41:04.120179 containerd[1493]: time="2025-05-13T23:41:04.120125344Z" level=info msg="CreateContainer within sandbox \"467d5872a81711982eb9e7b9b434da5181a96c73442d5aad350281e41d70c7b9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a9ad58cdd89daf53d09e46003862e6693caa9da88fb818b858384eafee2fa561\"" May 13 23:41:04.120719 containerd[1493]: time="2025-05-13T23:41:04.120621424Z" level=info msg="StartContainer for \"a9ad58cdd89daf53d09e46003862e6693caa9da88fb818b858384eafee2fa561\"" May 13 23:41:04.122080 containerd[1493]: time="2025-05-13T23:41:04.122035904Z" level=info msg="connecting to shim a9ad58cdd89daf53d09e46003862e6693caa9da88fb818b858384eafee2fa561" address="unix:///run/containerd/s/ff903bddb7a19120a0f828f26d7b9191a3e43f955c748838e1e98974e0f4448d" protocol=ttrpc version=3 May 13 23:41:04.152026 systemd[1]: Started cri-containerd-a9ad58cdd89daf53d09e46003862e6693caa9da88fb818b858384eafee2fa561.scope - libcontainer container a9ad58cdd89daf53d09e46003862e6693caa9da88fb818b858384eafee2fa561. May 13 23:41:04.193357 containerd[1493]: time="2025-05-13T23:41:04.190861384Z" level=info msg="StartContainer for \"a9ad58cdd89daf53d09e46003862e6693caa9da88fb818b858384eafee2fa561\" returns successfully" May 13 23:41:04.219934 systemd[1]: cri-containerd-a9ad58cdd89daf53d09e46003862e6693caa9da88fb818b858384eafee2fa561.scope: Deactivated successfully. May 13 23:41:04.221094 containerd[1493]: time="2025-05-13T23:41:04.220845664Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a9ad58cdd89daf53d09e46003862e6693caa9da88fb818b858384eafee2fa561\" id:\"a9ad58cdd89daf53d09e46003862e6693caa9da88fb818b858384eafee2fa561\" pid:2261 exited_at:{seconds:1747179664 nanos:220517304}" May 13 23:41:04.221292 containerd[1493]: time="2025-05-13T23:41:04.220865344Z" level=info msg="received exit event container_id:\"a9ad58cdd89daf53d09e46003862e6693caa9da88fb818b858384eafee2fa561\" id:\"a9ad58cdd89daf53d09e46003862e6693caa9da88fb818b858384eafee2fa561\" pid:2261 exited_at:{seconds:1747179664 nanos:220517304}" May 13 23:41:04.239565 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a9ad58cdd89daf53d09e46003862e6693caa9da88fb818b858384eafee2fa561-rootfs.mount: Deactivated successfully. 
May 13 23:41:04.938395 kubelet[1822]: E0513 23:41:04.938345 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:05.100300 containerd[1493]: time="2025-05-13T23:41:05.100251544Z" level=info msg="CreateContainer within sandbox \"467d5872a81711982eb9e7b9b434da5181a96c73442d5aad350281e41d70c7b9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 13 23:41:05.111553 containerd[1493]: time="2025-05-13T23:41:05.108550304Z" level=info msg="Container b74bdf07b577c7177f66efc9827b075a396b43844b9d280cf09795a7c48e59d8: CDI devices from CRI Config.CDIDevices: []" May 13 23:41:05.121933 containerd[1493]: time="2025-05-13T23:41:05.121816664Z" level=info msg="CreateContainer within sandbox \"467d5872a81711982eb9e7b9b434da5181a96c73442d5aad350281e41d70c7b9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b74bdf07b577c7177f66efc9827b075a396b43844b9d280cf09795a7c48e59d8\"" May 13 23:41:05.123433 containerd[1493]: time="2025-05-13T23:41:05.122286384Z" level=info msg="StartContainer for \"b74bdf07b577c7177f66efc9827b075a396b43844b9d280cf09795a7c48e59d8\"" May 13 23:41:05.123433 containerd[1493]: time="2025-05-13T23:41:05.123096224Z" level=info msg="connecting to shim b74bdf07b577c7177f66efc9827b075a396b43844b9d280cf09795a7c48e59d8" address="unix:///run/containerd/s/ff903bddb7a19120a0f828f26d7b9191a3e43f955c748838e1e98974e0f4448d" protocol=ttrpc version=3 May 13 23:41:05.144965 systemd[1]: Started cri-containerd-b74bdf07b577c7177f66efc9827b075a396b43844b9d280cf09795a7c48e59d8.scope - libcontainer container b74bdf07b577c7177f66efc9827b075a396b43844b9d280cf09795a7c48e59d8. May 13 23:41:05.175762 systemd[1]: cri-containerd-b74bdf07b577c7177f66efc9827b075a396b43844b9d280cf09795a7c48e59d8.scope: Deactivated successfully. May 13 23:41:05.176851 containerd[1493]: time="2025-05-13T23:41:05.176543064Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b74bdf07b577c7177f66efc9827b075a396b43844b9d280cf09795a7c48e59d8\" id:\"b74bdf07b577c7177f66efc9827b075a396b43844b9d280cf09795a7c48e59d8\" pid:2305 exited_at:{seconds:1747179665 nanos:176118824}" May 13 23:41:05.180851 containerd[1493]: time="2025-05-13T23:41:05.177767544Z" level=info msg="received exit event container_id:\"b74bdf07b577c7177f66efc9827b075a396b43844b9d280cf09795a7c48e59d8\" id:\"b74bdf07b577c7177f66efc9827b075a396b43844b9d280cf09795a7c48e59d8\" pid:2305 exited_at:{seconds:1747179665 nanos:176118824}" May 13 23:41:05.180851 containerd[1493]: time="2025-05-13T23:41:05.180138704Z" level=info msg="StartContainer for \"b74bdf07b577c7177f66efc9827b075a396b43844b9d280cf09795a7c48e59d8\" returns successfully" May 13 23:41:05.196959 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b74bdf07b577c7177f66efc9827b075a396b43844b9d280cf09795a7c48e59d8-rootfs.mount: Deactivated successfully. 
May 13 23:41:05.939140 kubelet[1822]: E0513 23:41:05.939090 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:06.105154 containerd[1493]: time="2025-05-13T23:41:06.105095064Z" level=info msg="CreateContainer within sandbox \"467d5872a81711982eb9e7b9b434da5181a96c73442d5aad350281e41d70c7b9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 13 23:41:06.114139 containerd[1493]: time="2025-05-13T23:41:06.114085424Z" level=info msg="Container 4cccd2b34694b27134886da3dfbe908ecc6eda951114689585ca7329cec531d9: CDI devices from CRI Config.CDIDevices: []" May 13 23:41:06.122129 containerd[1493]: time="2025-05-13T23:41:06.122061144Z" level=info msg="CreateContainer within sandbox \"467d5872a81711982eb9e7b9b434da5181a96c73442d5aad350281e41d70c7b9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4cccd2b34694b27134886da3dfbe908ecc6eda951114689585ca7329cec531d9\"" May 13 23:41:06.125151 containerd[1493]: time="2025-05-13T23:41:06.124917424Z" level=info msg="StartContainer for \"4cccd2b34694b27134886da3dfbe908ecc6eda951114689585ca7329cec531d9\"" May 13 23:41:06.125773 containerd[1493]: time="2025-05-13T23:41:06.125747984Z" level=info msg="connecting to shim 4cccd2b34694b27134886da3dfbe908ecc6eda951114689585ca7329cec531d9" address="unix:///run/containerd/s/ff903bddb7a19120a0f828f26d7b9191a3e43f955c748838e1e98974e0f4448d" protocol=ttrpc version=3 May 13 23:41:06.147859 systemd[1]: Started cri-containerd-4cccd2b34694b27134886da3dfbe908ecc6eda951114689585ca7329cec531d9.scope - libcontainer container 4cccd2b34694b27134886da3dfbe908ecc6eda951114689585ca7329cec531d9. May 13 23:41:06.181020 containerd[1493]: time="2025-05-13T23:41:06.180925984Z" level=info msg="StartContainer for \"4cccd2b34694b27134886da3dfbe908ecc6eda951114689585ca7329cec531d9\" returns successfully" May 13 23:41:06.258602 containerd[1493]: time="2025-05-13T23:41:06.258489944Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4cccd2b34694b27134886da3dfbe908ecc6eda951114689585ca7329cec531d9\" id:\"d93c674d16eb7dfa42b76eeb5d04a5d312af6953462b91573f79c3a95807a9c2\" pid:2373 exited_at:{seconds:1747179666 nanos:258205104}" May 13 23:41:06.290045 kubelet[1822]: I0513 23:41:06.288771 1822 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 13 23:41:06.797769 kernel: Initializing XFRM netlink socket May 13 23:41:06.940139 kubelet[1822]: E0513 23:41:06.940085 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:07.134765 kubelet[1822]: I0513 23:41:07.134464 1822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-llhk9" podStartSLOduration=7.996066224 podStartE2EDuration="13.134445784s" podCreationTimestamp="2025-05-13 23:40:54 +0000 UTC" firstStartedPulling="2025-05-13 23:40:57.028085304 +0000 UTC m=+3.665272561" lastFinishedPulling="2025-05-13 23:41:02.166464864 +0000 UTC m=+8.803652121" observedRunningTime="2025-05-13 23:41:07.133165944 +0000 UTC m=+13.770353201" watchObservedRunningTime="2025-05-13 23:41:07.134445784 +0000 UTC m=+13.771633041" May 13 23:41:07.940426 kubelet[1822]: E0513 23:41:07.940373 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:08.438458 systemd-networkd[1410]: cilium_host: Link UP May 13 23:41:08.438578 systemd-networkd[1410]: cilium_net: Link 
UP May 13 23:41:08.441097 systemd-networkd[1410]: cilium_net: Gained carrier May 13 23:41:08.441290 systemd-networkd[1410]: cilium_host: Gained carrier May 13 23:41:08.441391 systemd-networkd[1410]: cilium_net: Gained IPv6LL May 13 23:41:08.441511 systemd-networkd[1410]: cilium_host: Gained IPv6LL May 13 23:41:08.520802 systemd-networkd[1410]: cilium_vxlan: Link UP May 13 23:41:08.520809 systemd-networkd[1410]: cilium_vxlan: Gained carrier May 13 23:41:08.818749 kernel: NET: Registered PF_ALG protocol family May 13 23:41:08.941167 kubelet[1822]: E0513 23:41:08.941113 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:09.391530 systemd-networkd[1410]: lxc_health: Link UP May 13 23:41:09.391798 systemd-networkd[1410]: lxc_health: Gained carrier May 13 23:41:09.941509 kubelet[1822]: E0513 23:41:09.941450 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:10.330879 systemd-networkd[1410]: cilium_vxlan: Gained IPv6LL May 13 23:41:10.650861 systemd-networkd[1410]: lxc_health: Gained IPv6LL May 13 23:41:10.725998 kubelet[1822]: I0513 23:41:10.725248 1822 topology_manager.go:215] "Topology Admit Handler" podUID="0ad28bc3-01d5-4051-91b3-33b39ad9e473" podNamespace="default" podName="nginx-deployment-85f456d6dd-b9f8h" May 13 23:41:10.730718 systemd[1]: Created slice kubepods-besteffort-pod0ad28bc3_01d5_4051_91b3_33b39ad9e473.slice - libcontainer container kubepods-besteffort-pod0ad28bc3_01d5_4051_91b3_33b39ad9e473.slice. May 13 23:41:10.751452 kubelet[1822]: I0513 23:41:10.751357 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbtfw\" (UniqueName: \"kubernetes.io/projected/0ad28bc3-01d5-4051-91b3-33b39ad9e473-kube-api-access-zbtfw\") pod \"nginx-deployment-85f456d6dd-b9f8h\" (UID: \"0ad28bc3-01d5-4051-91b3-33b39ad9e473\") " pod="default/nginx-deployment-85f456d6dd-b9f8h" May 13 23:41:10.942217 kubelet[1822]: E0513 23:41:10.941790 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:11.034006 containerd[1493]: time="2025-05-13T23:41:11.033663704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-b9f8h,Uid:0ad28bc3-01d5-4051-91b3-33b39ad9e473,Namespace:default,Attempt:0,}" May 13 23:41:11.093754 kernel: eth0: renamed from tmp7189a May 13 23:41:11.099434 systemd-networkd[1410]: lxcb4b50c88d8ed: Link UP May 13 23:41:11.100455 systemd-networkd[1410]: lxcb4b50c88d8ed: Gained carrier May 13 23:41:11.942049 kubelet[1822]: E0513 23:41:11.941995 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:12.943068 kubelet[1822]: E0513 23:41:12.943011 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:13.018825 systemd-networkd[1410]: lxcb4b50c88d8ed: Gained IPv6LL May 13 23:41:13.082836 kubelet[1822]: I0513 23:41:13.082796 1822 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 23:41:13.933831 kubelet[1822]: E0513 23:41:13.933783 1822 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:13.944084 kubelet[1822]: E0513 23:41:13.944046 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" May 13 23:41:14.065235 containerd[1493]: time="2025-05-13T23:41:14.065193424Z" level=info msg="connecting to shim 7189a090e10952276a7828bd165cf6249bd272101ce11d04c36e078ca9e44fc1" address="unix:///run/containerd/s/8857d0c12477011d732c3139c870ec1f7638b687c7805cb3441553c163410602" namespace=k8s.io protocol=ttrpc version=3 May 13 23:41:14.098948 systemd[1]: Started cri-containerd-7189a090e10952276a7828bd165cf6249bd272101ce11d04c36e078ca9e44fc1.scope - libcontainer container 7189a090e10952276a7828bd165cf6249bd272101ce11d04c36e078ca9e44fc1. May 13 23:41:14.109957 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 23:41:14.131690 containerd[1493]: time="2025-05-13T23:41:14.131646064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-b9f8h,Uid:0ad28bc3-01d5-4051-91b3-33b39ad9e473,Namespace:default,Attempt:0,} returns sandbox id \"7189a090e10952276a7828bd165cf6249bd272101ce11d04c36e078ca9e44fc1\"" May 13 23:41:14.133068 containerd[1493]: time="2025-05-13T23:41:14.133035264Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 13 23:41:14.944729 kubelet[1822]: E0513 23:41:14.944656 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:15.782567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3170550153.mount: Deactivated successfully. May 13 23:41:15.945725 kubelet[1822]: E0513 23:41:15.945673 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:16.581575 containerd[1493]: time="2025-05-13T23:41:16.580931384Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:41:16.581575 containerd[1493]: time="2025-05-13T23:41:16.581364464Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=69948859" May 13 23:41:16.582298 containerd[1493]: time="2025-05-13T23:41:16.582243984Z" level=info msg="ImageCreate event name:\"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:41:16.584723 containerd[1493]: time="2025-05-13T23:41:16.584464024Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:41:16.585537 containerd[1493]: time="2025-05-13T23:41:16.585503384Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\", size \"69948737\" in 2.45243364s" May 13 23:41:16.585582 containerd[1493]: time="2025-05-13T23:41:16.585537584Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\"" May 13 23:41:16.588616 containerd[1493]: time="2025-05-13T23:41:16.588576504Z" level=info msg="CreateContainer within sandbox \"7189a090e10952276a7828bd165cf6249bd272101ce11d04c36e078ca9e44fc1\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" May 13 23:41:16.596734 
containerd[1493]: time="2025-05-13T23:41:16.596675104Z" level=info msg="Container f53f04c083c2f7fd214c526a5bbeb4a2e4dd411c8313409371db169279314ee6: CDI devices from CRI Config.CDIDevices: []" May 13 23:41:16.602004 containerd[1493]: time="2025-05-13T23:41:16.601965784Z" level=info msg="CreateContainer within sandbox \"7189a090e10952276a7828bd165cf6249bd272101ce11d04c36e078ca9e44fc1\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"f53f04c083c2f7fd214c526a5bbeb4a2e4dd411c8313409371db169279314ee6\"" May 13 23:41:16.602727 containerd[1493]: time="2025-05-13T23:41:16.602583744Z" level=info msg="StartContainer for \"f53f04c083c2f7fd214c526a5bbeb4a2e4dd411c8313409371db169279314ee6\"" May 13 23:41:16.603379 containerd[1493]: time="2025-05-13T23:41:16.603351904Z" level=info msg="connecting to shim f53f04c083c2f7fd214c526a5bbeb4a2e4dd411c8313409371db169279314ee6" address="unix:///run/containerd/s/8857d0c12477011d732c3139c870ec1f7638b687c7805cb3441553c163410602" protocol=ttrpc version=3 May 13 23:41:16.631918 systemd[1]: Started cri-containerd-f53f04c083c2f7fd214c526a5bbeb4a2e4dd411c8313409371db169279314ee6.scope - libcontainer container f53f04c083c2f7fd214c526a5bbeb4a2e4dd411c8313409371db169279314ee6. May 13 23:41:16.687497 containerd[1493]: time="2025-05-13T23:41:16.683928184Z" level=info msg="StartContainer for \"f53f04c083c2f7fd214c526a5bbeb4a2e4dd411c8313409371db169279314ee6\" returns successfully" May 13 23:41:16.946364 kubelet[1822]: E0513 23:41:16.946310 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:17.138306 kubelet[1822]: I0513 23:41:17.138233 1822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-b9f8h" podStartSLOduration=4.684491453 podStartE2EDuration="7.138217253s" podCreationTimestamp="2025-05-13 23:41:10 +0000 UTC" firstStartedPulling="2025-05-13 23:41:14.132681704 +0000 UTC m=+20.769868961" lastFinishedPulling="2025-05-13 23:41:16.586407504 +0000 UTC m=+23.223594761" observedRunningTime="2025-05-13 23:41:17.138063287 +0000 UTC m=+23.775250544" watchObservedRunningTime="2025-05-13 23:41:17.138217253 +0000 UTC m=+23.775404510" May 13 23:41:17.946757 kubelet[1822]: E0513 23:41:17.946713 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:18.947387 kubelet[1822]: E0513 23:41:18.947343 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:19.948295 kubelet[1822]: E0513 23:41:19.948242 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:20.949307 kubelet[1822]: E0513 23:41:20.949258 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:21.950193 kubelet[1822]: E0513 23:41:21.950147 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:22.950990 kubelet[1822]: E0513 23:41:22.950941 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:22.997781 kubelet[1822]: I0513 23:41:22.997739 1822 topology_manager.go:215] "Topology Admit Handler" podUID="4243fbef-6df4-4daa-89a8-f9b9fad07abd" podNamespace="default" podName="nfs-server-provisioner-0" May 13 
23:41:23.003734 systemd[1]: Created slice kubepods-besteffort-pod4243fbef_6df4_4daa_89a8_f9b9fad07abd.slice - libcontainer container kubepods-besteffort-pod4243fbef_6df4_4daa_89a8_f9b9fad07abd.slice. May 13 23:41:23.026572 kubelet[1822]: I0513 23:41:23.026525 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/4243fbef-6df4-4daa-89a8-f9b9fad07abd-data\") pod \"nfs-server-provisioner-0\" (UID: \"4243fbef-6df4-4daa-89a8-f9b9fad07abd\") " pod="default/nfs-server-provisioner-0" May 13 23:41:23.026572 kubelet[1822]: I0513 23:41:23.026579 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrrnt\" (UniqueName: \"kubernetes.io/projected/4243fbef-6df4-4daa-89a8-f9b9fad07abd-kube-api-access-lrrnt\") pod \"nfs-server-provisioner-0\" (UID: \"4243fbef-6df4-4daa-89a8-f9b9fad07abd\") " pod="default/nfs-server-provisioner-0" May 13 23:41:23.307139 containerd[1493]: time="2025-05-13T23:41:23.307026221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:4243fbef-6df4-4daa-89a8-f9b9fad07abd,Namespace:default,Attempt:0,}" May 13 23:41:23.320558 systemd-networkd[1410]: lxc15c1ba63688b: Link UP May 13 23:41:23.331733 kernel: eth0: renamed from tmp7c375 May 13 23:41:23.337724 systemd-networkd[1410]: lxc15c1ba63688b: Gained carrier May 13 23:41:23.571933 containerd[1493]: time="2025-05-13T23:41:23.571817914Z" level=info msg="connecting to shim 7c3751972b8440529796106fd24a04f1c333daf5e0a0aeae7b5171562ac8a4a3" address="unix:///run/containerd/s/e802e23d80dba0ae02859e2058d0e0438162b03af48312cd707e2e8245d82293" namespace=k8s.io protocol=ttrpc version=3 May 13 23:41:23.594893 systemd[1]: Started cri-containerd-7c3751972b8440529796106fd24a04f1c333daf5e0a0aeae7b5171562ac8a4a3.scope - libcontainer container 7c3751972b8440529796106fd24a04f1c333daf5e0a0aeae7b5171562ac8a4a3. May 13 23:41:23.612705 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 23:41:23.635144 containerd[1493]: time="2025-05-13T23:41:23.635095734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:4243fbef-6df4-4daa-89a8-f9b9fad07abd,Namespace:default,Attempt:0,} returns sandbox id \"7c3751972b8440529796106fd24a04f1c333daf5e0a0aeae7b5171562ac8a4a3\"" May 13 23:41:23.636426 containerd[1493]: time="2025-05-13T23:41:23.636404611Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" May 13 23:41:23.952069 kubelet[1822]: E0513 23:41:23.952033 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:24.794876 systemd-networkd[1410]: lxc15c1ba63688b: Gained IPv6LL May 13 23:41:24.952681 kubelet[1822]: E0513 23:41:24.952644 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:25.170184 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2941440346.mount: Deactivated successfully. 
May 13 23:41:25.953031 kubelet[1822]: E0513 23:41:25.952990 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:26.562203 containerd[1493]: time="2025-05-13T23:41:26.562137455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:41:26.563170 containerd[1493]: time="2025-05-13T23:41:26.563112679Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373625" May 13 23:41:26.564173 containerd[1493]: time="2025-05-13T23:41:26.564141743Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:41:26.567668 containerd[1493]: time="2025-05-13T23:41:26.567629786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:41:26.568598 containerd[1493]: time="2025-05-13T23:41:26.568560928Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 2.932125555s" May 13 23:41:26.568635 containerd[1493]: time="2025-05-13T23:41:26.568597288Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" May 13 23:41:26.570772 containerd[1493]: time="2025-05-13T23:41:26.570737859Z" level=info msg="CreateContainer within sandbox \"7c3751972b8440529796106fd24a04f1c333daf5e0a0aeae7b5171562ac8a4a3\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" May 13 23:41:26.580204 containerd[1493]: time="2025-05-13T23:41:26.579081377Z" level=info msg="Container e2065e8b619c3aab43815e524fff3b5e47eea95818b0baedd172ee1a398f4e49: CDI devices from CRI Config.CDIDevices: []" May 13 23:41:26.583102 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount345670752.mount: Deactivated successfully. 
May 13 23:41:26.588725 containerd[1493]: time="2025-05-13T23:41:26.588672004Z" level=info msg="CreateContainer within sandbox \"7c3751972b8440529796106fd24a04f1c333daf5e0a0aeae7b5171562ac8a4a3\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"e2065e8b619c3aab43815e524fff3b5e47eea95818b0baedd172ee1a398f4e49\"" May 13 23:41:26.589491 containerd[1493]: time="2025-05-13T23:41:26.589447902Z" level=info msg="StartContainer for \"e2065e8b619c3aab43815e524fff3b5e47eea95818b0baedd172ee1a398f4e49\"" May 13 23:41:26.590478 containerd[1493]: time="2025-05-13T23:41:26.590439166Z" level=info msg="connecting to shim e2065e8b619c3aab43815e524fff3b5e47eea95818b0baedd172ee1a398f4e49" address="unix:///run/containerd/s/e802e23d80dba0ae02859e2058d0e0438162b03af48312cd707e2e8245d82293" protocol=ttrpc version=3 May 13 23:41:26.628936 systemd[1]: Started cri-containerd-e2065e8b619c3aab43815e524fff3b5e47eea95818b0baedd172ee1a398f4e49.scope - libcontainer container e2065e8b619c3aab43815e524fff3b5e47eea95818b0baedd172ee1a398f4e49. May 13 23:41:26.740018 containerd[1493]: time="2025-05-13T23:41:26.739958868Z" level=info msg="StartContainer for \"e2065e8b619c3aab43815e524fff3b5e47eea95818b0baedd172ee1a398f4e49\" returns successfully" May 13 23:41:26.953465 kubelet[1822]: E0513 23:41:26.953410 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:27.954450 kubelet[1822]: E0513 23:41:27.954405 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:28.954904 kubelet[1822]: E0513 23:41:28.954855 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:29.532861 update_engine[1472]: I20250513 23:41:29.532742 1472 update_attempter.cc:509] Updating boot flags... 
May 13 23:41:29.573722 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (3158) May 13 23:41:29.621735 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (3159) May 13 23:41:29.955922 kubelet[1822]: E0513 23:41:29.955856 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:30.956336 kubelet[1822]: E0513 23:41:30.956293 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:31.956776 kubelet[1822]: E0513 23:41:31.956715 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:32.957376 kubelet[1822]: E0513 23:41:32.957326 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:33.931686 kubelet[1822]: E0513 23:41:33.931614 1822 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:33.957931 kubelet[1822]: E0513 23:41:33.957884 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:34.958412 kubelet[1822]: E0513 23:41:34.958372 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:35.958841 kubelet[1822]: E0513 23:41:35.958779 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:36.443744 kubelet[1822]: I0513 23:41:36.443596 1822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=11.510388729 podStartE2EDuration="14.443578831s" podCreationTimestamp="2025-05-13 23:41:22 +0000 UTC" firstStartedPulling="2025-05-13 23:41:23.636137684 +0000 UTC m=+30.273324941" lastFinishedPulling="2025-05-13 23:41:26.569327826 +0000 UTC m=+33.206515043" observedRunningTime="2025-05-13 23:41:27.160568439 +0000 UTC m=+33.797755696" watchObservedRunningTime="2025-05-13 23:41:36.443578831 +0000 UTC m=+43.080766088" May 13 23:41:36.443999 kubelet[1822]: I0513 23:41:36.443768 1822 topology_manager.go:215] "Topology Admit Handler" podUID="23d72b96-6561-4db0-ad8c-c5b77d6387e6" podNamespace="default" podName="test-pod-1" May 13 23:41:36.453528 systemd[1]: Created slice kubepods-besteffort-pod23d72b96_6561_4db0_ad8c_c5b77d6387e6.slice - libcontainer container kubepods-besteffort-pod23d72b96_6561_4db0_ad8c_c5b77d6387e6.slice. 
May 13 23:41:36.509237 kubelet[1822]: I0513 23:41:36.502323 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncdvl\" (UniqueName: \"kubernetes.io/projected/23d72b96-6561-4db0-ad8c-c5b77d6387e6-kube-api-access-ncdvl\") pod \"test-pod-1\" (UID: \"23d72b96-6561-4db0-ad8c-c5b77d6387e6\") " pod="default/test-pod-1" May 13 23:41:36.509237 kubelet[1822]: I0513 23:41:36.509195 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7dfcc06a-5048-4d0b-829f-4d641cf4e1cc\" (UniqueName: \"kubernetes.io/nfs/23d72b96-6561-4db0-ad8c-c5b77d6387e6-pvc-7dfcc06a-5048-4d0b-829f-4d641cf4e1cc\") pod \"test-pod-1\" (UID: \"23d72b96-6561-4db0-ad8c-c5b77d6387e6\") " pod="default/test-pod-1" May 13 23:41:36.642744 kernel: FS-Cache: Loaded May 13 23:41:36.671851 kernel: RPC: Registered named UNIX socket transport module. May 13 23:41:36.671971 kernel: RPC: Registered udp transport module. May 13 23:41:36.671989 kernel: RPC: Registered tcp transport module. May 13 23:41:36.673075 kernel: RPC: Registered tcp-with-tls transport module. May 13 23:41:36.673133 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. May 13 23:41:36.850789 kernel: NFS: Registering the id_resolver key type May 13 23:41:36.850933 kernel: Key type id_resolver registered May 13 23:41:36.850953 kernel: Key type id_legacy registered May 13 23:41:36.887059 nfsidmap[3186]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' May 13 23:41:36.888854 nfsidmap[3187]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' May 13 23:41:36.959340 kubelet[1822]: E0513 23:41:36.959282 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:37.059023 containerd[1493]: time="2025-05-13T23:41:37.058968636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:23d72b96-6561-4db0-ad8c-c5b77d6387e6,Namespace:default,Attempt:0,}" May 13 23:41:37.090062 systemd-networkd[1410]: lxca37684783e88: Link UP May 13 23:41:37.103768 kernel: eth0: renamed from tmp086f9 May 13 23:41:37.109191 systemd-networkd[1410]: lxca37684783e88: Gained carrier May 13 23:41:37.482965 containerd[1493]: time="2025-05-13T23:41:37.482908454Z" level=info msg="connecting to shim 086f961b576fea06c0ac16596032550b87ec506b20d98d9dcf5d70a552962229" address="unix:///run/containerd/s/608e8e6333aa010715b9f6ee62779c295af113165644866689d0be48a257e852" namespace=k8s.io protocol=ttrpc version=3 May 13 23:41:37.516949 systemd[1]: Started cri-containerd-086f961b576fea06c0ac16596032550b87ec506b20d98d9dcf5d70a552962229.scope - libcontainer container 086f961b576fea06c0ac16596032550b87ec506b20d98d9dcf5d70a552962229. 
May 13 23:41:37.531814 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 23:41:37.557646 containerd[1493]: time="2025-05-13T23:41:37.557524683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:23d72b96-6561-4db0-ad8c-c5b77d6387e6,Namespace:default,Attempt:0,} returns sandbox id \"086f961b576fea06c0ac16596032550b87ec506b20d98d9dcf5d70a552962229\"" May 13 23:41:37.559337 containerd[1493]: time="2025-05-13T23:41:37.559306064Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 13 23:41:37.836844 containerd[1493]: time="2025-05-13T23:41:37.836321450Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:41:37.837072 containerd[1493]: time="2025-05-13T23:41:37.837023899Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" May 13 23:41:37.839749 containerd[1493]: time="2025-05-13T23:41:37.839710050Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\", size \"69948737\" in 280.349106ms" May 13 23:41:37.839966 containerd[1493]: time="2025-05-13T23:41:37.839854452Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\"" May 13 23:41:37.842216 containerd[1493]: time="2025-05-13T23:41:37.842136398Z" level=info msg="CreateContainer within sandbox \"086f961b576fea06c0ac16596032550b87ec506b20d98d9dcf5d70a552962229\" for container &ContainerMetadata{Name:test,Attempt:0,}" May 13 23:41:37.850819 containerd[1493]: time="2025-05-13T23:41:37.849921529Z" level=info msg="Container f780b9615dd719f884c6b193acc933abca3802178605c40d4672e31c1adef4c2: CDI devices from CRI Config.CDIDevices: []" May 13 23:41:37.853311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2565224468.mount: Deactivated successfully. May 13 23:41:37.859482 containerd[1493]: time="2025-05-13T23:41:37.859430760Z" level=info msg="CreateContainer within sandbox \"086f961b576fea06c0ac16596032550b87ec506b20d98d9dcf5d70a552962229\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"f780b9615dd719f884c6b193acc933abca3802178605c40d4672e31c1adef4c2\"" May 13 23:41:37.860254 containerd[1493]: time="2025-05-13T23:41:37.860220209Z" level=info msg="StartContainer for \"f780b9615dd719f884c6b193acc933abca3802178605c40d4672e31c1adef4c2\"" May 13 23:41:37.864783 containerd[1493]: time="2025-05-13T23:41:37.864675901Z" level=info msg="connecting to shim f780b9615dd719f884c6b193acc933abca3802178605c40d4672e31c1adef4c2" address="unix:///run/containerd/s/608e8e6333aa010715b9f6ee62779c295af113165644866689d0be48a257e852" protocol=ttrpc version=3 May 13 23:41:37.896952 systemd[1]: Started cri-containerd-f780b9615dd719f884c6b193acc933abca3802178605c40d4672e31c1adef4c2.scope - libcontainer container f780b9615dd719f884c6b193acc933abca3802178605c40d4672e31c1adef4c2. 
May 13 23:41:37.932983 containerd[1493]: time="2025-05-13T23:41:37.932889255Z" level=info msg="StartContainer for \"f780b9615dd719f884c6b193acc933abca3802178605c40d4672e31c1adef4c2\" returns successfully" May 13 23:41:37.959600 kubelet[1822]: E0513 23:41:37.959551 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:38.186077 kubelet[1822]: I0513 23:41:38.186003 1822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=14.904341589 podStartE2EDuration="15.18598459s" podCreationTimestamp="2025-05-13 23:41:23 +0000 UTC" firstStartedPulling="2025-05-13 23:41:37.558916419 +0000 UTC m=+44.196103676" lastFinishedPulling="2025-05-13 23:41:37.84055942 +0000 UTC m=+44.477746677" observedRunningTime="2025-05-13 23:41:38.185895309 +0000 UTC m=+44.823082566" watchObservedRunningTime="2025-05-13 23:41:38.18598459 +0000 UTC m=+44.823171847" May 13 23:41:38.554907 systemd-networkd[1410]: lxca37684783e88: Gained IPv6LL May 13 23:41:38.960082 kubelet[1822]: E0513 23:41:38.960011 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:39.960196 kubelet[1822]: E0513 23:41:39.960126 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:40.961165 kubelet[1822]: E0513 23:41:40.961123 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:41.010906 containerd[1493]: time="2025-05-13T23:41:41.010845255Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 23:41:41.015228 containerd[1493]: time="2025-05-13T23:41:41.015173334Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4cccd2b34694b27134886da3dfbe908ecc6eda951114689585ca7329cec531d9\" id:\"a34ba6f015804743401ba7df3a939399b81637c6139ff496a41bc466e5ae12db\" pid:3325 exited_at:{seconds:1747179701 nanos:14612729}" May 13 23:41:41.016843 containerd[1493]: time="2025-05-13T23:41:41.016544187Z" level=info msg="StopContainer for \"4cccd2b34694b27134886da3dfbe908ecc6eda951114689585ca7329cec531d9\" with timeout 2 (s)" May 13 23:41:41.023424 containerd[1493]: time="2025-05-13T23:41:41.023377048Z" level=info msg="Stop container \"4cccd2b34694b27134886da3dfbe908ecc6eda951114689585ca7329cec531d9\" with signal terminated" May 13 23:41:41.041778 systemd-networkd[1410]: lxc_health: Link DOWN May 13 23:41:41.041790 systemd-networkd[1410]: lxc_health: Lost carrier May 13 23:41:41.060360 systemd[1]: cri-containerd-4cccd2b34694b27134886da3dfbe908ecc6eda951114689585ca7329cec531d9.scope: Deactivated successfully. May 13 23:41:41.060673 systemd[1]: cri-containerd-4cccd2b34694b27134886da3dfbe908ecc6eda951114689585ca7329cec531d9.scope: Consumed 6.791s CPU time, 123.7M memory peak, 136K read from disk, 12.9M written to disk. 
May 13 23:41:41.061621 containerd[1493]: time="2025-05-13T23:41:41.061587512Z" level=info msg="received exit event container_id:\"4cccd2b34694b27134886da3dfbe908ecc6eda951114689585ca7329cec531d9\" id:\"4cccd2b34694b27134886da3dfbe908ecc6eda951114689585ca7329cec531d9\" pid:2345 exited_at:{seconds:1747179701 nanos:61403390}" May 13 23:41:41.061856 containerd[1493]: time="2025-05-13T23:41:41.061657793Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4cccd2b34694b27134886da3dfbe908ecc6eda951114689585ca7329cec531d9\" id:\"4cccd2b34694b27134886da3dfbe908ecc6eda951114689585ca7329cec531d9\" pid:2345 exited_at:{seconds:1747179701 nanos:61403390}" May 13 23:41:41.078462 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4cccd2b34694b27134886da3dfbe908ecc6eda951114689585ca7329cec531d9-rootfs.mount: Deactivated successfully. May 13 23:41:41.100656 containerd[1493]: time="2025-05-13T23:41:41.100614103Z" level=info msg="StopContainer for \"4cccd2b34694b27134886da3dfbe908ecc6eda951114689585ca7329cec531d9\" returns successfully" May 13 23:41:41.101259 containerd[1493]: time="2025-05-13T23:41:41.101209869Z" level=info msg="StopPodSandbox for \"467d5872a81711982eb9e7b9b434da5181a96c73442d5aad350281e41d70c7b9\"" May 13 23:41:41.101332 containerd[1493]: time="2025-05-13T23:41:41.101300109Z" level=info msg="Container to stop \"c912c209fa412ff2e9da861fb1768395a2e3ea6e69d665aefa10a23cb2987e3c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 23:41:41.101361 containerd[1493]: time="2025-05-13T23:41:41.101332230Z" level=info msg="Container to stop \"a9ad58cdd89daf53d09e46003862e6693caa9da88fb818b858384eafee2fa561\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 23:41:41.101361 containerd[1493]: time="2025-05-13T23:41:41.101341950Z" level=info msg="Container to stop \"b74bdf07b577c7177f66efc9827b075a396b43844b9d280cf09795a7c48e59d8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 23:41:41.101361 containerd[1493]: time="2025-05-13T23:41:41.101350590Z" level=info msg="Container to stop \"0386f974e010bb25457eb4d98e9db9e20e8288d64b246c8f71911da03141d782\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 23:41:41.101361 containerd[1493]: time="2025-05-13T23:41:41.101358630Z" level=info msg="Container to stop \"4cccd2b34694b27134886da3dfbe908ecc6eda951114689585ca7329cec531d9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 23:41:41.106989 systemd[1]: cri-containerd-467d5872a81711982eb9e7b9b434da5181a96c73442d5aad350281e41d70c7b9.scope: Deactivated successfully. May 13 23:41:41.107796 containerd[1493]: time="2025-05-13T23:41:41.107764968Z" level=info msg="TaskExit event in podsandbox handler container_id:\"467d5872a81711982eb9e7b9b434da5181a96c73442d5aad350281e41d70c7b9\" id:\"467d5872a81711982eb9e7b9b434da5181a96c73442d5aad350281e41d70c7b9\" pid:1942 exit_status:137 exited_at:{seconds:1747179701 nanos:107167002}" May 13 23:41:41.134605 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-467d5872a81711982eb9e7b9b434da5181a96c73442d5aad350281e41d70c7b9-rootfs.mount: Deactivated successfully. 
May 13 23:41:41.145032 containerd[1493]: time="2025-05-13T23:41:41.144828461Z" level=info msg="shim disconnected" id=467d5872a81711982eb9e7b9b434da5181a96c73442d5aad350281e41d70c7b9 namespace=k8s.io May 13 23:41:41.145032 containerd[1493]: time="2025-05-13T23:41:41.144859341Z" level=warning msg="cleaning up after shim disconnected" id=467d5872a81711982eb9e7b9b434da5181a96c73442d5aad350281e41d70c7b9 namespace=k8s.io May 13 23:41:41.145032 containerd[1493]: time="2025-05-13T23:41:41.144887422Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 23:41:41.157885 containerd[1493]: time="2025-05-13T23:41:41.156365525Z" level=info msg="TearDown network for sandbox \"467d5872a81711982eb9e7b9b434da5181a96c73442d5aad350281e41d70c7b9\" successfully" May 13 23:41:41.157885 containerd[1493]: time="2025-05-13T23:41:41.156400685Z" level=info msg="StopPodSandbox for \"467d5872a81711982eb9e7b9b434da5181a96c73442d5aad350281e41d70c7b9\" returns successfully" May 13 23:41:41.157938 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-467d5872a81711982eb9e7b9b434da5181a96c73442d5aad350281e41d70c7b9-shm.mount: Deactivated successfully. May 13 23:41:41.163734 containerd[1493]: time="2025-05-13T23:41:41.162511580Z" level=info msg="received exit event sandbox_id:\"467d5872a81711982eb9e7b9b434da5181a96c73442d5aad350281e41d70c7b9\" exit_status:137 exited_at:{seconds:1747179701 nanos:107167002}" May 13 23:41:41.182972 kubelet[1822]: I0513 23:41:41.182920 1822 scope.go:117] "RemoveContainer" containerID="4cccd2b34694b27134886da3dfbe908ecc6eda951114689585ca7329cec531d9" May 13 23:41:41.185822 containerd[1493]: time="2025-05-13T23:41:41.185773629Z" level=info msg="RemoveContainer for \"4cccd2b34694b27134886da3dfbe908ecc6eda951114689585ca7329cec531d9\"" May 13 23:41:41.198333 containerd[1493]: time="2025-05-13T23:41:41.198289622Z" level=info msg="RemoveContainer for \"4cccd2b34694b27134886da3dfbe908ecc6eda951114689585ca7329cec531d9\" returns successfully" May 13 23:41:41.198742 kubelet[1822]: I0513 23:41:41.198708 1822 scope.go:117] "RemoveContainer" containerID="b74bdf07b577c7177f66efc9827b075a396b43844b9d280cf09795a7c48e59d8" May 13 23:41:41.200262 containerd[1493]: time="2025-05-13T23:41:41.200219519Z" level=info msg="RemoveContainer for \"b74bdf07b577c7177f66efc9827b075a396b43844b9d280cf09795a7c48e59d8\"" May 13 23:41:41.210756 containerd[1493]: time="2025-05-13T23:41:41.210661413Z" level=info msg="RemoveContainer for \"b74bdf07b577c7177f66efc9827b075a396b43844b9d280cf09795a7c48e59d8\" returns successfully" May 13 23:41:41.211020 kubelet[1822]: I0513 23:41:41.210992 1822 scope.go:117] "RemoveContainer" containerID="a9ad58cdd89daf53d09e46003862e6693caa9da88fb818b858384eafee2fa561" May 13 23:41:41.213841 containerd[1493]: time="2025-05-13T23:41:41.213182236Z" level=info msg="RemoveContainer for \"a9ad58cdd89daf53d09e46003862e6693caa9da88fb818b858384eafee2fa561\"" May 13 23:41:41.222581 containerd[1493]: time="2025-05-13T23:41:41.222535000Z" level=info msg="RemoveContainer for \"a9ad58cdd89daf53d09e46003862e6693caa9da88fb818b858384eafee2fa561\" returns successfully" May 13 23:41:41.222805 kubelet[1822]: I0513 23:41:41.222772 1822 scope.go:117] "RemoveContainer" containerID="c912c209fa412ff2e9da861fb1768395a2e3ea6e69d665aefa10a23cb2987e3c" May 13 23:41:41.224298 containerd[1493]: time="2025-05-13T23:41:41.224272576Z" level=info msg="RemoveContainer for \"c912c209fa412ff2e9da861fb1768395a2e3ea6e69d665aefa10a23cb2987e3c\"" May 13 23:41:41.235707 containerd[1493]: time="2025-05-13T23:41:41.235666958Z" 
level=info msg="RemoveContainer for \"c912c209fa412ff2e9da861fb1768395a2e3ea6e69d665aefa10a23cb2987e3c\" returns successfully" May 13 23:41:41.236016 kubelet[1822]: I0513 23:41:41.235901 1822 scope.go:117] "RemoveContainer" containerID="0386f974e010bb25457eb4d98e9db9e20e8288d64b246c8f71911da03141d782" May 13 23:41:41.237270 containerd[1493]: time="2025-05-13T23:41:41.237215932Z" level=info msg="RemoveContainer for \"0386f974e010bb25457eb4d98e9db9e20e8288d64b246c8f71911da03141d782\"" May 13 23:41:41.245988 containerd[1493]: time="2025-05-13T23:41:41.245945771Z" level=info msg="RemoveContainer for \"0386f974e010bb25457eb4d98e9db9e20e8288d64b246c8f71911da03141d782\" returns successfully" May 13 23:41:41.246204 kubelet[1822]: I0513 23:41:41.246118 1822 scope.go:117] "RemoveContainer" containerID="4cccd2b34694b27134886da3dfbe908ecc6eda951114689585ca7329cec531d9" May 13 23:41:41.246396 containerd[1493]: time="2025-05-13T23:41:41.246317014Z" level=error msg="ContainerStatus for \"4cccd2b34694b27134886da3dfbe908ecc6eda951114689585ca7329cec531d9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4cccd2b34694b27134886da3dfbe908ecc6eda951114689585ca7329cec531d9\": not found" May 13 23:41:41.246507 kubelet[1822]: E0513 23:41:41.246472 1822 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4cccd2b34694b27134886da3dfbe908ecc6eda951114689585ca7329cec531d9\": not found" containerID="4cccd2b34694b27134886da3dfbe908ecc6eda951114689585ca7329cec531d9" May 13 23:41:41.246598 kubelet[1822]: I0513 23:41:41.246506 1822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4cccd2b34694b27134886da3dfbe908ecc6eda951114689585ca7329cec531d9"} err="failed to get container status \"4cccd2b34694b27134886da3dfbe908ecc6eda951114689585ca7329cec531d9\": rpc error: code = NotFound desc = an error occurred when try to find container \"4cccd2b34694b27134886da3dfbe908ecc6eda951114689585ca7329cec531d9\": not found" May 13 23:41:41.246598 kubelet[1822]: I0513 23:41:41.246590 1822 scope.go:117] "RemoveContainer" containerID="b74bdf07b577c7177f66efc9827b075a396b43844b9d280cf09795a7c48e59d8" May 13 23:41:41.246975 containerd[1493]: time="2025-05-13T23:41:41.246929020Z" level=error msg="ContainerStatus for \"b74bdf07b577c7177f66efc9827b075a396b43844b9d280cf09795a7c48e59d8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b74bdf07b577c7177f66efc9827b075a396b43844b9d280cf09795a7c48e59d8\": not found" May 13 23:41:41.247102 kubelet[1822]: E0513 23:41:41.247081 1822 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b74bdf07b577c7177f66efc9827b075a396b43844b9d280cf09795a7c48e59d8\": not found" containerID="b74bdf07b577c7177f66efc9827b075a396b43844b9d280cf09795a7c48e59d8" May 13 23:41:41.247131 kubelet[1822]: I0513 23:41:41.247108 1822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b74bdf07b577c7177f66efc9827b075a396b43844b9d280cf09795a7c48e59d8"} err="failed to get container status \"b74bdf07b577c7177f66efc9827b075a396b43844b9d280cf09795a7c48e59d8\": rpc error: code = NotFound desc = an error occurred when try to find container \"b74bdf07b577c7177f66efc9827b075a396b43844b9d280cf09795a7c48e59d8\": not found" May 13 23:41:41.247131 
kubelet[1822]: I0513 23:41:41.247126 1822 scope.go:117] "RemoveContainer" containerID="a9ad58cdd89daf53d09e46003862e6693caa9da88fb818b858384eafee2fa561" May 13 23:41:41.247328 containerd[1493]: time="2025-05-13T23:41:41.247287983Z" level=error msg="ContainerStatus for \"a9ad58cdd89daf53d09e46003862e6693caa9da88fb818b858384eafee2fa561\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a9ad58cdd89daf53d09e46003862e6693caa9da88fb818b858384eafee2fa561\": not found" May 13 23:41:41.247426 kubelet[1822]: E0513 23:41:41.247402 1822 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a9ad58cdd89daf53d09e46003862e6693caa9da88fb818b858384eafee2fa561\": not found" containerID="a9ad58cdd89daf53d09e46003862e6693caa9da88fb818b858384eafee2fa561" May 13 23:41:41.247455 kubelet[1822]: I0513 23:41:41.247430 1822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a9ad58cdd89daf53d09e46003862e6693caa9da88fb818b858384eafee2fa561"} err="failed to get container status \"a9ad58cdd89daf53d09e46003862e6693caa9da88fb818b858384eafee2fa561\": rpc error: code = NotFound desc = an error occurred when try to find container \"a9ad58cdd89daf53d09e46003862e6693caa9da88fb818b858384eafee2fa561\": not found" May 13 23:41:41.247455 kubelet[1822]: I0513 23:41:41.247444 1822 scope.go:117] "RemoveContainer" containerID="c912c209fa412ff2e9da861fb1768395a2e3ea6e69d665aefa10a23cb2987e3c" May 13 23:41:41.247609 containerd[1493]: time="2025-05-13T23:41:41.247582266Z" level=error msg="ContainerStatus for \"c912c209fa412ff2e9da861fb1768395a2e3ea6e69d665aefa10a23cb2987e3c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c912c209fa412ff2e9da861fb1768395a2e3ea6e69d665aefa10a23cb2987e3c\": not found" May 13 23:41:41.247745 kubelet[1822]: E0513 23:41:41.247727 1822 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c912c209fa412ff2e9da861fb1768395a2e3ea6e69d665aefa10a23cb2987e3c\": not found" containerID="c912c209fa412ff2e9da861fb1768395a2e3ea6e69d665aefa10a23cb2987e3c" May 13 23:41:41.247801 kubelet[1822]: I0513 23:41:41.247747 1822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c912c209fa412ff2e9da861fb1768395a2e3ea6e69d665aefa10a23cb2987e3c"} err="failed to get container status \"c912c209fa412ff2e9da861fb1768395a2e3ea6e69d665aefa10a23cb2987e3c\": rpc error: code = NotFound desc = an error occurred when try to find container \"c912c209fa412ff2e9da861fb1768395a2e3ea6e69d665aefa10a23cb2987e3c\": not found" May 13 23:41:41.247801 kubelet[1822]: I0513 23:41:41.247760 1822 scope.go:117] "RemoveContainer" containerID="0386f974e010bb25457eb4d98e9db9e20e8288d64b246c8f71911da03141d782" May 13 23:41:41.247940 containerd[1493]: time="2025-05-13T23:41:41.247905269Z" level=error msg="ContainerStatus for \"0386f974e010bb25457eb4d98e9db9e20e8288d64b246c8f71911da03141d782\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0386f974e010bb25457eb4d98e9db9e20e8288d64b246c8f71911da03141d782\": not found" May 13 23:41:41.248093 kubelet[1822]: E0513 23:41:41.248071 1822 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find 
container \"0386f974e010bb25457eb4d98e9db9e20e8288d64b246c8f71911da03141d782\": not found" containerID="0386f974e010bb25457eb4d98e9db9e20e8288d64b246c8f71911da03141d782" May 13 23:41:41.248135 kubelet[1822]: I0513 23:41:41.248098 1822 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0386f974e010bb25457eb4d98e9db9e20e8288d64b246c8f71911da03141d782"} err="failed to get container status \"0386f974e010bb25457eb4d98e9db9e20e8288d64b246c8f71911da03141d782\": rpc error: code = NotFound desc = an error occurred when try to find container \"0386f974e010bb25457eb4d98e9db9e20e8288d64b246c8f71911da03141d782\": not found" May 13 23:41:41.336908 kubelet[1822]: I0513 23:41:41.336848 1822 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/299d631a-134f-407d-9d2a-1f661715e0ff-host-proc-sys-net\") pod \"299d631a-134f-407d-9d2a-1f661715e0ff\" (UID: \"299d631a-134f-407d-9d2a-1f661715e0ff\") " May 13 23:41:41.337036 kubelet[1822]: I0513 23:41:41.336918 1822 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdbwg\" (UniqueName: \"kubernetes.io/projected/299d631a-134f-407d-9d2a-1f661715e0ff-kube-api-access-tdbwg\") pod \"299d631a-134f-407d-9d2a-1f661715e0ff\" (UID: \"299d631a-134f-407d-9d2a-1f661715e0ff\") " May 13 23:41:41.337036 kubelet[1822]: I0513 23:41:41.336943 1822 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/299d631a-134f-407d-9d2a-1f661715e0ff-clustermesh-secrets\") pod \"299d631a-134f-407d-9d2a-1f661715e0ff\" (UID: \"299d631a-134f-407d-9d2a-1f661715e0ff\") " May 13 23:41:41.337036 kubelet[1822]: I0513 23:41:41.336985 1822 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/299d631a-134f-407d-9d2a-1f661715e0ff-hubble-tls\") pod \"299d631a-134f-407d-9d2a-1f661715e0ff\" (UID: \"299d631a-134f-407d-9d2a-1f661715e0ff\") " May 13 23:41:41.337036 kubelet[1822]: I0513 23:41:41.337004 1822 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/299d631a-134f-407d-9d2a-1f661715e0ff-cilium-cgroup\") pod \"299d631a-134f-407d-9d2a-1f661715e0ff\" (UID: \"299d631a-134f-407d-9d2a-1f661715e0ff\") " May 13 23:41:41.337036 kubelet[1822]: I0513 23:41:41.337020 1822 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/299d631a-134f-407d-9d2a-1f661715e0ff-xtables-lock\") pod \"299d631a-134f-407d-9d2a-1f661715e0ff\" (UID: \"299d631a-134f-407d-9d2a-1f661715e0ff\") " May 13 23:41:41.337147 kubelet[1822]: I0513 23:41:41.337054 1822 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/299d631a-134f-407d-9d2a-1f661715e0ff-host-proc-sys-kernel\") pod \"299d631a-134f-407d-9d2a-1f661715e0ff\" (UID: \"299d631a-134f-407d-9d2a-1f661715e0ff\") " May 13 23:41:41.337147 kubelet[1822]: I0513 23:41:41.337076 1822 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/299d631a-134f-407d-9d2a-1f661715e0ff-bpf-maps\") pod \"299d631a-134f-407d-9d2a-1f661715e0ff\" (UID: \"299d631a-134f-407d-9d2a-1f661715e0ff\") " May 13 23:41:41.337147 kubelet[1822]: I0513 23:41:41.337094 1822 
reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/299d631a-134f-407d-9d2a-1f661715e0ff-cilium-config-path\") pod \"299d631a-134f-407d-9d2a-1f661715e0ff\" (UID: \"299d631a-134f-407d-9d2a-1f661715e0ff\") " May 13 23:41:41.337147 kubelet[1822]: I0513 23:41:41.337108 1822 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/299d631a-134f-407d-9d2a-1f661715e0ff-cni-path\") pod \"299d631a-134f-407d-9d2a-1f661715e0ff\" (UID: \"299d631a-134f-407d-9d2a-1f661715e0ff\") " May 13 23:41:41.337147 kubelet[1822]: I0513 23:41:41.337143 1822 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/299d631a-134f-407d-9d2a-1f661715e0ff-cilium-run\") pod \"299d631a-134f-407d-9d2a-1f661715e0ff\" (UID: \"299d631a-134f-407d-9d2a-1f661715e0ff\") " May 13 23:41:41.337250 kubelet[1822]: I0513 23:41:41.337158 1822 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/299d631a-134f-407d-9d2a-1f661715e0ff-lib-modules\") pod \"299d631a-134f-407d-9d2a-1f661715e0ff\" (UID: \"299d631a-134f-407d-9d2a-1f661715e0ff\") " May 13 23:41:41.337250 kubelet[1822]: I0513 23:41:41.337173 1822 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/299d631a-134f-407d-9d2a-1f661715e0ff-etc-cni-netd\") pod \"299d631a-134f-407d-9d2a-1f661715e0ff\" (UID: \"299d631a-134f-407d-9d2a-1f661715e0ff\") " May 13 23:41:41.337250 kubelet[1822]: I0513 23:41:41.337186 1822 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/299d631a-134f-407d-9d2a-1f661715e0ff-hostproc\") pod \"299d631a-134f-407d-9d2a-1f661715e0ff\" (UID: \"299d631a-134f-407d-9d2a-1f661715e0ff\") " May 13 23:41:41.337355 kubelet[1822]: I0513 23:41:41.337274 1822 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/299d631a-134f-407d-9d2a-1f661715e0ff-hostproc" (OuterVolumeSpecName: "hostproc") pod "299d631a-134f-407d-9d2a-1f661715e0ff" (UID: "299d631a-134f-407d-9d2a-1f661715e0ff"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:41:41.337355 kubelet[1822]: I0513 23:41:41.337337 1822 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/299d631a-134f-407d-9d2a-1f661715e0ff-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "299d631a-134f-407d-9d2a-1f661715e0ff" (UID: "299d631a-134f-407d-9d2a-1f661715e0ff"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:41:41.340401 kubelet[1822]: I0513 23:41:41.337459 1822 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/299d631a-134f-407d-9d2a-1f661715e0ff-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "299d631a-134f-407d-9d2a-1f661715e0ff" (UID: "299d631a-134f-407d-9d2a-1f661715e0ff"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:41:41.340401 kubelet[1822]: I0513 23:41:41.337488 1822 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/299d631a-134f-407d-9d2a-1f661715e0ff-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "299d631a-134f-407d-9d2a-1f661715e0ff" (UID: "299d631a-134f-407d-9d2a-1f661715e0ff"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:41:41.340401 kubelet[1822]: I0513 23:41:41.337485 1822 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/299d631a-134f-407d-9d2a-1f661715e0ff-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "299d631a-134f-407d-9d2a-1f661715e0ff" (UID: "299d631a-134f-407d-9d2a-1f661715e0ff"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:41:41.340401 kubelet[1822]: I0513 23:41:41.337522 1822 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/299d631a-134f-407d-9d2a-1f661715e0ff-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "299d631a-134f-407d-9d2a-1f661715e0ff" (UID: "299d631a-134f-407d-9d2a-1f661715e0ff"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:41:41.340401 kubelet[1822]: I0513 23:41:41.337545 1822 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/299d631a-134f-407d-9d2a-1f661715e0ff-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "299d631a-134f-407d-9d2a-1f661715e0ff" (UID: "299d631a-134f-407d-9d2a-1f661715e0ff"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:41:41.340608 kubelet[1822]: I0513 23:41:41.339055 1822 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/299d631a-134f-407d-9d2a-1f661715e0ff-cni-path" (OuterVolumeSpecName: "cni-path") pod "299d631a-134f-407d-9d2a-1f661715e0ff" (UID: "299d631a-134f-407d-9d2a-1f661715e0ff"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:41:41.340608 kubelet[1822]: I0513 23:41:41.339105 1822 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/299d631a-134f-407d-9d2a-1f661715e0ff-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "299d631a-134f-407d-9d2a-1f661715e0ff" (UID: "299d631a-134f-407d-9d2a-1f661715e0ff"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:41:41.340608 kubelet[1822]: I0513 23:41:41.339127 1822 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/299d631a-134f-407d-9d2a-1f661715e0ff-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "299d631a-134f-407d-9d2a-1f661715e0ff" (UID: "299d631a-134f-407d-9d2a-1f661715e0ff"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:41:41.340608 kubelet[1822]: I0513 23:41:41.339313 1822 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/299d631a-134f-407d-9d2a-1f661715e0ff-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "299d631a-134f-407d-9d2a-1f661715e0ff" (UID: "299d631a-134f-407d-9d2a-1f661715e0ff"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 13 23:41:41.348774 systemd[1]: var-lib-kubelet-pods-299d631a\x2d134f\x2d407d\x2d9d2a\x2d1f661715e0ff-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 13 23:41:41.351749 systemd[1]: var-lib-kubelet-pods-299d631a\x2d134f\x2d407d\x2d9d2a\x2d1f661715e0ff-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtdbwg.mount: Deactivated successfully. May 13 23:41:41.351856 systemd[1]: var-lib-kubelet-pods-299d631a\x2d134f\x2d407d\x2d9d2a\x2d1f661715e0ff-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 13 23:41:41.354191 kubelet[1822]: I0513 23:41:41.353763 1822 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/299d631a-134f-407d-9d2a-1f661715e0ff-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "299d631a-134f-407d-9d2a-1f661715e0ff" (UID: "299d631a-134f-407d-9d2a-1f661715e0ff"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 13 23:41:41.354814 kubelet[1822]: I0513 23:41:41.354770 1822 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/299d631a-134f-407d-9d2a-1f661715e0ff-kube-api-access-tdbwg" (OuterVolumeSpecName: "kube-api-access-tdbwg") pod "299d631a-134f-407d-9d2a-1f661715e0ff" (UID: "299d631a-134f-407d-9d2a-1f661715e0ff"). InnerVolumeSpecName "kube-api-access-tdbwg". PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 23:41:41.358648 kubelet[1822]: I0513 23:41:41.358612 1822 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/299d631a-134f-407d-9d2a-1f661715e0ff-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "299d631a-134f-407d-9d2a-1f661715e0ff" (UID: "299d631a-134f-407d-9d2a-1f661715e0ff"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 23:41:41.438369 kubelet[1822]: I0513 23:41:41.438196 1822 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/299d631a-134f-407d-9d2a-1f661715e0ff-host-proc-sys-net\") on node \"10.0.0.43\" DevicePath \"\"" May 13 23:41:41.438369 kubelet[1822]: I0513 23:41:41.438232 1822 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-tdbwg\" (UniqueName: \"kubernetes.io/projected/299d631a-134f-407d-9d2a-1f661715e0ff-kube-api-access-tdbwg\") on node \"10.0.0.43\" DevicePath \"\"" May 13 23:41:41.438369 kubelet[1822]: I0513 23:41:41.438241 1822 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/299d631a-134f-407d-9d2a-1f661715e0ff-clustermesh-secrets\") on node \"10.0.0.43\" DevicePath \"\"" May 13 23:41:41.438369 kubelet[1822]: I0513 23:41:41.438249 1822 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/299d631a-134f-407d-9d2a-1f661715e0ff-cilium-cgroup\") on node \"10.0.0.43\" DevicePath \"\"" May 13 23:41:41.438369 kubelet[1822]: I0513 23:41:41.438258 1822 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/299d631a-134f-407d-9d2a-1f661715e0ff-xtables-lock\") on node \"10.0.0.43\" DevicePath \"\"" May 13 23:41:41.438369 kubelet[1822]: I0513 23:41:41.438266 1822 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/299d631a-134f-407d-9d2a-1f661715e0ff-host-proc-sys-kernel\") on node \"10.0.0.43\" DevicePath \"\"" May 13 23:41:41.438369 kubelet[1822]: I0513 23:41:41.438291 1822 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/299d631a-134f-407d-9d2a-1f661715e0ff-hubble-tls\") on node \"10.0.0.43\" DevicePath \"\"" May 13 23:41:41.438369 kubelet[1822]: I0513 23:41:41.438304 1822 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/299d631a-134f-407d-9d2a-1f661715e0ff-bpf-maps\") on node \"10.0.0.43\" DevicePath \"\"" May 13 23:41:41.438645 kubelet[1822]: I0513 23:41:41.438312 1822 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/299d631a-134f-407d-9d2a-1f661715e0ff-cilium-config-path\") on node \"10.0.0.43\" DevicePath \"\"" May 13 23:41:41.438645 kubelet[1822]: I0513 23:41:41.438319 1822 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/299d631a-134f-407d-9d2a-1f661715e0ff-cni-path\") on node \"10.0.0.43\" DevicePath \"\"" May 13 23:41:41.438645 kubelet[1822]: I0513 23:41:41.438326 1822 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/299d631a-134f-407d-9d2a-1f661715e0ff-cilium-run\") on node \"10.0.0.43\" DevicePath \"\"" May 13 23:41:41.438645 kubelet[1822]: I0513 23:41:41.438333 1822 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/299d631a-134f-407d-9d2a-1f661715e0ff-lib-modules\") on node \"10.0.0.43\" DevicePath \"\"" May 13 23:41:41.438645 kubelet[1822]: I0513 23:41:41.438341 1822 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/299d631a-134f-407d-9d2a-1f661715e0ff-etc-cni-netd\") on node \"10.0.0.43\" 
DevicePath \"\"" May 13 23:41:41.438645 kubelet[1822]: I0513 23:41:41.438348 1822 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/299d631a-134f-407d-9d2a-1f661715e0ff-hostproc\") on node \"10.0.0.43\" DevicePath \"\"" May 13 23:41:41.487914 systemd[1]: Removed slice kubepods-burstable-pod299d631a_134f_407d_9d2a_1f661715e0ff.slice - libcontainer container kubepods-burstable-pod299d631a_134f_407d_9d2a_1f661715e0ff.slice. May 13 23:41:41.488259 systemd[1]: kubepods-burstable-pod299d631a_134f_407d_9d2a_1f661715e0ff.slice: Consumed 6.949s CPU time, 124.2M memory peak, 136K read from disk, 12.9M written to disk. May 13 23:41:41.962307 kubelet[1822]: E0513 23:41:41.962246 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:42.069715 kubelet[1822]: I0513 23:41:42.069653 1822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="299d631a-134f-407d-9d2a-1f661715e0ff" path="/var/lib/kubelet/pods/299d631a-134f-407d-9d2a-1f661715e0ff/volumes" May 13 23:41:42.962466 kubelet[1822]: E0513 23:41:42.962424 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:43.963162 kubelet[1822]: E0513 23:41:43.963108 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:44.060120 kubelet[1822]: I0513 23:41:44.060077 1822 topology_manager.go:215] "Topology Admit Handler" podUID="8855575c-6fe3-4ece-8c40-09be15bcd625" podNamespace="kube-system" podName="cilium-operator-599987898-z2m4z" May 13 23:41:44.060844 kubelet[1822]: E0513 23:41:44.060298 1822 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="299d631a-134f-407d-9d2a-1f661715e0ff" containerName="mount-bpf-fs" May 13 23:41:44.060844 kubelet[1822]: E0513 23:41:44.060315 1822 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="299d631a-134f-407d-9d2a-1f661715e0ff" containerName="cilium-agent" May 13 23:41:44.060844 kubelet[1822]: E0513 23:41:44.060321 1822 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="299d631a-134f-407d-9d2a-1f661715e0ff" containerName="mount-cgroup" May 13 23:41:44.060844 kubelet[1822]: E0513 23:41:44.060327 1822 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="299d631a-134f-407d-9d2a-1f661715e0ff" containerName="apply-sysctl-overwrites" May 13 23:41:44.060844 kubelet[1822]: E0513 23:41:44.060332 1822 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="299d631a-134f-407d-9d2a-1f661715e0ff" containerName="clean-cilium-state" May 13 23:41:44.060844 kubelet[1822]: I0513 23:41:44.060347 1822 memory_manager.go:354] "RemoveStaleState removing state" podUID="299d631a-134f-407d-9d2a-1f661715e0ff" containerName="cilium-agent" May 13 23:41:44.065186 kubelet[1822]: I0513 23:41:44.065150 1822 topology_manager.go:215] "Topology Admit Handler" podUID="03cc7e02-14aa-4c65-9480-fa0e64c83867" podNamespace="kube-system" podName="cilium-9w25w" May 13 23:41:44.068147 systemd[1]: Created slice kubepods-besteffort-pod8855575c_6fe3_4ece_8c40_09be15bcd625.slice - libcontainer container kubepods-besteffort-pod8855575c_6fe3_4ece_8c40_09be15bcd625.slice. May 13 23:41:44.075278 systemd[1]: Created slice kubepods-burstable-pod03cc7e02_14aa_4c65_9480_fa0e64c83867.slice - libcontainer container kubepods-burstable-pod03cc7e02_14aa_4c65_9480_fa0e64c83867.slice. 
May 13 23:41:44.083236 kubelet[1822]: E0513 23:41:44.083083 1822 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 13 23:41:44.155051 kubelet[1822]: I0513 23:41:44.154974 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/03cc7e02-14aa-4c65-9480-fa0e64c83867-etc-cni-netd\") pod \"cilium-9w25w\" (UID: \"03cc7e02-14aa-4c65-9480-fa0e64c83867\") " pod="kube-system/cilium-9w25w" May 13 23:41:44.155051 kubelet[1822]: I0513 23:41:44.155020 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/03cc7e02-14aa-4c65-9480-fa0e64c83867-xtables-lock\") pod \"cilium-9w25w\" (UID: \"03cc7e02-14aa-4c65-9480-fa0e64c83867\") " pod="kube-system/cilium-9w25w" May 13 23:41:44.155051 kubelet[1822]: I0513 23:41:44.155040 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/03cc7e02-14aa-4c65-9480-fa0e64c83867-cilium-run\") pod \"cilium-9w25w\" (UID: \"03cc7e02-14aa-4c65-9480-fa0e64c83867\") " pod="kube-system/cilium-9w25w" May 13 23:41:44.155051 kubelet[1822]: I0513 23:41:44.155060 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/03cc7e02-14aa-4c65-9480-fa0e64c83867-cilium-cgroup\") pod \"cilium-9w25w\" (UID: \"03cc7e02-14aa-4c65-9480-fa0e64c83867\") " pod="kube-system/cilium-9w25w" May 13 23:41:44.155294 kubelet[1822]: I0513 23:41:44.155078 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/03cc7e02-14aa-4c65-9480-fa0e64c83867-cni-path\") pod \"cilium-9w25w\" (UID: \"03cc7e02-14aa-4c65-9480-fa0e64c83867\") " pod="kube-system/cilium-9w25w" May 13 23:41:44.155294 kubelet[1822]: I0513 23:41:44.155095 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/03cc7e02-14aa-4c65-9480-fa0e64c83867-hostproc\") pod \"cilium-9w25w\" (UID: \"03cc7e02-14aa-4c65-9480-fa0e64c83867\") " pod="kube-system/cilium-9w25w" May 13 23:41:44.155294 kubelet[1822]: I0513 23:41:44.155112 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8855575c-6fe3-4ece-8c40-09be15bcd625-cilium-config-path\") pod \"cilium-operator-599987898-z2m4z\" (UID: \"8855575c-6fe3-4ece-8c40-09be15bcd625\") " pod="kube-system/cilium-operator-599987898-z2m4z" May 13 23:41:44.155294 kubelet[1822]: I0513 23:41:44.155129 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/03cc7e02-14aa-4c65-9480-fa0e64c83867-cilium-ipsec-secrets\") pod \"cilium-9w25w\" (UID: \"03cc7e02-14aa-4c65-9480-fa0e64c83867\") " pod="kube-system/cilium-9w25w" May 13 23:41:44.155294 kubelet[1822]: I0513 23:41:44.155143 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/03cc7e02-14aa-4c65-9480-fa0e64c83867-clustermesh-secrets\") pod 
\"cilium-9w25w\" (UID: \"03cc7e02-14aa-4c65-9480-fa0e64c83867\") " pod="kube-system/cilium-9w25w" May 13 23:41:44.155423 kubelet[1822]: I0513 23:41:44.155158 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/03cc7e02-14aa-4c65-9480-fa0e64c83867-cilium-config-path\") pod \"cilium-9w25w\" (UID: \"03cc7e02-14aa-4c65-9480-fa0e64c83867\") " pod="kube-system/cilium-9w25w" May 13 23:41:44.155423 kubelet[1822]: I0513 23:41:44.155173 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/03cc7e02-14aa-4c65-9480-fa0e64c83867-bpf-maps\") pod \"cilium-9w25w\" (UID: \"03cc7e02-14aa-4c65-9480-fa0e64c83867\") " pod="kube-system/cilium-9w25w" May 13 23:41:44.155423 kubelet[1822]: I0513 23:41:44.155190 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/03cc7e02-14aa-4c65-9480-fa0e64c83867-host-proc-sys-net\") pod \"cilium-9w25w\" (UID: \"03cc7e02-14aa-4c65-9480-fa0e64c83867\") " pod="kube-system/cilium-9w25w" May 13 23:41:44.155423 kubelet[1822]: I0513 23:41:44.155207 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/03cc7e02-14aa-4c65-9480-fa0e64c83867-host-proc-sys-kernel\") pod \"cilium-9w25w\" (UID: \"03cc7e02-14aa-4c65-9480-fa0e64c83867\") " pod="kube-system/cilium-9w25w" May 13 23:41:44.155423 kubelet[1822]: I0513 23:41:44.155223 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/03cc7e02-14aa-4c65-9480-fa0e64c83867-hubble-tls\") pod \"cilium-9w25w\" (UID: \"03cc7e02-14aa-4c65-9480-fa0e64c83867\") " pod="kube-system/cilium-9w25w" May 13 23:41:44.155539 kubelet[1822]: I0513 23:41:44.155239 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mjsk\" (UniqueName: \"kubernetes.io/projected/03cc7e02-14aa-4c65-9480-fa0e64c83867-kube-api-access-8mjsk\") pod \"cilium-9w25w\" (UID: \"03cc7e02-14aa-4c65-9480-fa0e64c83867\") " pod="kube-system/cilium-9w25w" May 13 23:41:44.155539 kubelet[1822]: I0513 23:41:44.155261 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj5dg\" (UniqueName: \"kubernetes.io/projected/8855575c-6fe3-4ece-8c40-09be15bcd625-kube-api-access-wj5dg\") pod \"cilium-operator-599987898-z2m4z\" (UID: \"8855575c-6fe3-4ece-8c40-09be15bcd625\") " pod="kube-system/cilium-operator-599987898-z2m4z" May 13 23:41:44.155539 kubelet[1822]: I0513 23:41:44.155277 1822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/03cc7e02-14aa-4c65-9480-fa0e64c83867-lib-modules\") pod \"cilium-9w25w\" (UID: \"03cc7e02-14aa-4c65-9480-fa0e64c83867\") " pod="kube-system/cilium-9w25w" May 13 23:41:44.371964 containerd[1493]: time="2025-05-13T23:41:44.371771333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-z2m4z,Uid:8855575c-6fe3-4ece-8c40-09be15bcd625,Namespace:kube-system,Attempt:0,}" May 13 23:41:44.392538 containerd[1493]: time="2025-05-13T23:41:44.392389165Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-9w25w,Uid:03cc7e02-14aa-4c65-9480-fa0e64c83867,Namespace:kube-system,Attempt:0,}" May 13 23:41:44.407038 containerd[1493]: time="2025-05-13T23:41:44.406965914Z" level=info msg="connecting to shim 4e291b23f4d2d529ffdf15476c5fae813e4a3c60050558fae2100061ccc6eab2" address="unix:///run/containerd/s/284632fda97cfe82630f9581c638c13d192bce844f2d5c3250598f1ac1052db4" namespace=k8s.io protocol=ttrpc version=3 May 13 23:41:44.421255 containerd[1493]: time="2025-05-13T23:41:44.421135179Z" level=info msg="connecting to shim edebb9d88665748ef05edece1e7a9c343327b8d7f06cec210881cc9391fed95b" address="unix:///run/containerd/s/fc08212b5d08f78f448683b0261bca6a3a69d73581563eafbf17996c0486bc9f" namespace=k8s.io protocol=ttrpc version=3 May 13 23:41:44.434947 systemd[1]: Started cri-containerd-4e291b23f4d2d529ffdf15476c5fae813e4a3c60050558fae2100061ccc6eab2.scope - libcontainer container 4e291b23f4d2d529ffdf15476c5fae813e4a3c60050558fae2100061ccc6eab2. May 13 23:41:44.448933 systemd[1]: Started cri-containerd-edebb9d88665748ef05edece1e7a9c343327b8d7f06cec210881cc9391fed95b.scope - libcontainer container edebb9d88665748ef05edece1e7a9c343327b8d7f06cec210881cc9391fed95b. May 13 23:41:44.485768 containerd[1493]: time="2025-05-13T23:41:44.485664337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9w25w,Uid:03cc7e02-14aa-4c65-9480-fa0e64c83867,Namespace:kube-system,Attempt:0,} returns sandbox id \"edebb9d88665748ef05edece1e7a9c343327b8d7f06cec210881cc9391fed95b\"" May 13 23:41:44.493387 containerd[1493]: time="2025-05-13T23:41:44.492350347Z" level=info msg="CreateContainer within sandbox \"edebb9d88665748ef05edece1e7a9c343327b8d7f06cec210881cc9391fed95b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 23:41:44.504009 containerd[1493]: time="2025-05-13T23:41:44.503938273Z" level=info msg="Container 686362ad154936dc4cd456a8d6fa494f2215a64a61e87807aa302a540b4aec14: CDI devices from CRI Config.CDIDevices: []" May 13 23:41:44.512865 containerd[1493]: time="2025-05-13T23:41:44.512810578Z" level=info msg="CreateContainer within sandbox \"edebb9d88665748ef05edece1e7a9c343327b8d7f06cec210881cc9391fed95b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"686362ad154936dc4cd456a8d6fa494f2215a64a61e87807aa302a540b4aec14\"" May 13 23:41:44.513383 containerd[1493]: time="2025-05-13T23:41:44.513344462Z" level=info msg="StartContainer for \"686362ad154936dc4cd456a8d6fa494f2215a64a61e87807aa302a540b4aec14\"" May 13 23:41:44.514468 containerd[1493]: time="2025-05-13T23:41:44.514420230Z" level=info msg="connecting to shim 686362ad154936dc4cd456a8d6fa494f2215a64a61e87807aa302a540b4aec14" address="unix:///run/containerd/s/fc08212b5d08f78f448683b0261bca6a3a69d73581563eafbf17996c0486bc9f" protocol=ttrpc version=3 May 13 23:41:44.538002 systemd[1]: Started cri-containerd-686362ad154936dc4cd456a8d6fa494f2215a64a61e87807aa302a540b4aec14.scope - libcontainer container 686362ad154936dc4cd456a8d6fa494f2215a64a61e87807aa302a540b4aec14. 
May 13 23:41:44.550690 containerd[1493]: time="2025-05-13T23:41:44.550615299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-z2m4z,Uid:8855575c-6fe3-4ece-8c40-09be15bcd625,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e291b23f4d2d529ffdf15476c5fae813e4a3c60050558fae2100061ccc6eab2\"" May 13 23:41:44.553352 containerd[1493]: time="2025-05-13T23:41:44.553288878Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 13 23:41:44.586075 containerd[1493]: time="2025-05-13T23:41:44.586024001Z" level=info msg="StartContainer for \"686362ad154936dc4cd456a8d6fa494f2215a64a61e87807aa302a540b4aec14\" returns successfully" May 13 23:41:44.664415 systemd[1]: cri-containerd-686362ad154936dc4cd456a8d6fa494f2215a64a61e87807aa302a540b4aec14.scope: Deactivated successfully. May 13 23:41:44.665589 containerd[1493]: time="2025-05-13T23:41:44.665549711Z" level=info msg="received exit event container_id:\"686362ad154936dc4cd456a8d6fa494f2215a64a61e87807aa302a540b4aec14\" id:\"686362ad154936dc4cd456a8d6fa494f2215a64a61e87807aa302a540b4aec14\" pid:3505 exited_at:{seconds:1747179704 nanos:665304269}" May 13 23:41:44.666333 containerd[1493]: time="2025-05-13T23:41:44.665857793Z" level=info msg="TaskExit event in podsandbox handler container_id:\"686362ad154936dc4cd456a8d6fa494f2215a64a61e87807aa302a540b4aec14\" id:\"686362ad154936dc4cd456a8d6fa494f2215a64a61e87807aa302a540b4aec14\" pid:3505 exited_at:{seconds:1747179704 nanos:665304269}" May 13 23:41:44.963720 kubelet[1822]: E0513 23:41:44.963493 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:45.178772 kubelet[1822]: I0513 23:41:45.177873 1822 setters.go:580] "Node became not ready" node="10.0.0.43" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-13T23:41:45Z","lastTransitionTime":"2025-05-13T23:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 13 23:41:45.203025 containerd[1493]: time="2025-05-13T23:41:45.202892929Z" level=info msg="CreateContainer within sandbox \"edebb9d88665748ef05edece1e7a9c343327b8d7f06cec210881cc9391fed95b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 13 23:41:45.214388 containerd[1493]: time="2025-05-13T23:41:45.214193768Z" level=info msg="Container 1326b2cc7d661cead8fa0ba05bf4487aa190b9118a81c00ab7838ecef3ed91ac: CDI devices from CRI Config.CDIDevices: []" May 13 23:41:45.220799 containerd[1493]: time="2025-05-13T23:41:45.220751094Z" level=info msg="CreateContainer within sandbox \"edebb9d88665748ef05edece1e7a9c343327b8d7f06cec210881cc9391fed95b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1326b2cc7d661cead8fa0ba05bf4487aa190b9118a81c00ab7838ecef3ed91ac\"" May 13 23:41:45.221553 containerd[1493]: time="2025-05-13T23:41:45.221467579Z" level=info msg="StartContainer for \"1326b2cc7d661cead8fa0ba05bf4487aa190b9118a81c00ab7838ecef3ed91ac\"" May 13 23:41:45.222340 containerd[1493]: time="2025-05-13T23:41:45.222306864Z" level=info msg="connecting to shim 1326b2cc7d661cead8fa0ba05bf4487aa190b9118a81c00ab7838ecef3ed91ac" address="unix:///run/containerd/s/fc08212b5d08f78f448683b0261bca6a3a69d73581563eafbf17996c0486bc9f" protocol=ttrpc version=3 May 13 23:41:45.247945 
systemd[1]: Started cri-containerd-1326b2cc7d661cead8fa0ba05bf4487aa190b9118a81c00ab7838ecef3ed91ac.scope - libcontainer container 1326b2cc7d661cead8fa0ba05bf4487aa190b9118a81c00ab7838ecef3ed91ac. May 13 23:41:45.287262 containerd[1493]: time="2025-05-13T23:41:45.287141155Z" level=info msg="StartContainer for \"1326b2cc7d661cead8fa0ba05bf4487aa190b9118a81c00ab7838ecef3ed91ac\" returns successfully" May 13 23:41:45.296988 systemd[1]: cri-containerd-1326b2cc7d661cead8fa0ba05bf4487aa190b9118a81c00ab7838ecef3ed91ac.scope: Deactivated successfully. May 13 23:41:45.301970 containerd[1493]: time="2025-05-13T23:41:45.301771777Z" level=info msg="received exit event container_id:\"1326b2cc7d661cead8fa0ba05bf4487aa190b9118a81c00ab7838ecef3ed91ac\" id:\"1326b2cc7d661cead8fa0ba05bf4487aa190b9118a81c00ab7838ecef3ed91ac\" pid:3548 exited_at:{seconds:1747179705 nanos:301437134}" May 13 23:41:45.301970 containerd[1493]: time="2025-05-13T23:41:45.301926178Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1326b2cc7d661cead8fa0ba05bf4487aa190b9118a81c00ab7838ecef3ed91ac\" id:\"1326b2cc7d661cead8fa0ba05bf4487aa190b9118a81c00ab7838ecef3ed91ac\" pid:3548 exited_at:{seconds:1747179705 nanos:301437134}" May 13 23:41:45.321453 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1326b2cc7d661cead8fa0ba05bf4487aa190b9118a81c00ab7838ecef3ed91ac-rootfs.mount: Deactivated successfully. May 13 23:41:45.778985 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount306799850.mount: Deactivated successfully. May 13 23:41:45.964015 kubelet[1822]: E0513 23:41:45.963906 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:46.079545 containerd[1493]: time="2025-05-13T23:41:46.079411388Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:41:46.080223 containerd[1493]: time="2025-05-13T23:41:46.080160633Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" May 13 23:41:46.081075 containerd[1493]: time="2025-05-13T23:41:46.081036358Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:41:46.082440 containerd[1493]: time="2025-05-13T23:41:46.082406167Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.529071408s" May 13 23:41:46.082480 containerd[1493]: time="2025-05-13T23:41:46.082444207Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 13 23:41:46.086718 containerd[1493]: time="2025-05-13T23:41:46.085214786Z" level=info msg="CreateContainer within sandbox \"4e291b23f4d2d529ffdf15476c5fae813e4a3c60050558fae2100061ccc6eab2\" for container 
&ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 13 23:41:46.093875 containerd[1493]: time="2025-05-13T23:41:46.093821042Z" level=info msg="Container d9ca0a6ad4b2d63e7e04e8ec99f51a923af8c914e008b5c41f8fb863ff91aa7d: CDI devices from CRI Config.CDIDevices: []" May 13 23:41:46.102047 containerd[1493]: time="2025-05-13T23:41:46.101996615Z" level=info msg="CreateContainer within sandbox \"4e291b23f4d2d529ffdf15476c5fae813e4a3c60050558fae2100061ccc6eab2\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d9ca0a6ad4b2d63e7e04e8ec99f51a923af8c914e008b5c41f8fb863ff91aa7d\"" May 13 23:41:46.104756 containerd[1493]: time="2025-05-13T23:41:46.102517778Z" level=info msg="StartContainer for \"d9ca0a6ad4b2d63e7e04e8ec99f51a923af8c914e008b5c41f8fb863ff91aa7d\"" May 13 23:41:46.104756 containerd[1493]: time="2025-05-13T23:41:46.103420504Z" level=info msg="connecting to shim d9ca0a6ad4b2d63e7e04e8ec99f51a923af8c914e008b5c41f8fb863ff91aa7d" address="unix:///run/containerd/s/284632fda97cfe82630f9581c638c13d192bce844f2d5c3250598f1ac1052db4" protocol=ttrpc version=3 May 13 23:41:46.130919 systemd[1]: Started cri-containerd-d9ca0a6ad4b2d63e7e04e8ec99f51a923af8c914e008b5c41f8fb863ff91aa7d.scope - libcontainer container d9ca0a6ad4b2d63e7e04e8ec99f51a923af8c914e008b5c41f8fb863ff91aa7d. May 13 23:41:46.157022 containerd[1493]: time="2025-05-13T23:41:46.156969773Z" level=info msg="StartContainer for \"d9ca0a6ad4b2d63e7e04e8ec99f51a923af8c914e008b5c41f8fb863ff91aa7d\" returns successfully" May 13 23:41:46.204368 containerd[1493]: time="2025-05-13T23:41:46.204327842Z" level=info msg="CreateContainer within sandbox \"edebb9d88665748ef05edece1e7a9c343327b8d7f06cec210881cc9391fed95b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 13 23:41:46.234956 containerd[1493]: time="2025-05-13T23:41:46.234905121Z" level=info msg="Container db33fbf0c6cb6329984064e7680a35ff01cc3c9e7475ea87ff1e4f476302c396: CDI devices from CRI Config.CDIDevices: []" May 13 23:41:46.243966 containerd[1493]: time="2025-05-13T23:41:46.243912700Z" level=info msg="CreateContainer within sandbox \"edebb9d88665748ef05edece1e7a9c343327b8d7f06cec210881cc9391fed95b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"db33fbf0c6cb6329984064e7680a35ff01cc3c9e7475ea87ff1e4f476302c396\"" May 13 23:41:46.244880 containerd[1493]: time="2025-05-13T23:41:46.244848146Z" level=info msg="StartContainer for \"db33fbf0c6cb6329984064e7680a35ff01cc3c9e7475ea87ff1e4f476302c396\"" May 13 23:41:46.246294 containerd[1493]: time="2025-05-13T23:41:46.246247955Z" level=info msg="connecting to shim db33fbf0c6cb6329984064e7680a35ff01cc3c9e7475ea87ff1e4f476302c396" address="unix:///run/containerd/s/fc08212b5d08f78f448683b0261bca6a3a69d73581563eafbf17996c0486bc9f" protocol=ttrpc version=3 May 13 23:41:46.265890 systemd[1]: Started cri-containerd-db33fbf0c6cb6329984064e7680a35ff01cc3c9e7475ea87ff1e4f476302c396.scope - libcontainer container db33fbf0c6cb6329984064e7680a35ff01cc3c9e7475ea87ff1e4f476302c396. May 13 23:41:46.322611 systemd[1]: cri-containerd-db33fbf0c6cb6329984064e7680a35ff01cc3c9e7475ea87ff1e4f476302c396.scope: Deactivated successfully. 
May 13 23:41:46.325864 containerd[1493]: time="2025-05-13T23:41:46.325823033Z" level=info msg="received exit event container_id:\"db33fbf0c6cb6329984064e7680a35ff01cc3c9e7475ea87ff1e4f476302c396\" id:\"db33fbf0c6cb6329984064e7680a35ff01cc3c9e7475ea87ff1e4f476302c396\" pid:3641 exited_at:{seconds:1747179706 nanos:325467911}" May 13 23:41:46.325972 containerd[1493]: time="2025-05-13T23:41:46.325893994Z" level=info msg="TaskExit event in podsandbox handler container_id:\"db33fbf0c6cb6329984064e7680a35ff01cc3c9e7475ea87ff1e4f476302c396\" id:\"db33fbf0c6cb6329984064e7680a35ff01cc3c9e7475ea87ff1e4f476302c396\" pid:3641 exited_at:{seconds:1747179706 nanos:325467911}" May 13 23:41:46.328058 containerd[1493]: time="2025-05-13T23:41:46.328024608Z" level=info msg="StartContainer for \"db33fbf0c6cb6329984064e7680a35ff01cc3c9e7475ea87ff1e4f476302c396\" returns successfully" May 13 23:41:46.346992 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db33fbf0c6cb6329984064e7680a35ff01cc3c9e7475ea87ff1e4f476302c396-rootfs.mount: Deactivated successfully. May 13 23:41:46.964375 kubelet[1822]: E0513 23:41:46.964315 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:47.211639 containerd[1493]: time="2025-05-13T23:41:47.211597640Z" level=info msg="CreateContainer within sandbox \"edebb9d88665748ef05edece1e7a9c343327b8d7f06cec210881cc9391fed95b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 13 23:41:47.225432 kubelet[1822]: I0513 23:41:47.225110 1822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-z2m4z" podStartSLOduration=1.694435864 podStartE2EDuration="3.225094803s" podCreationTimestamp="2025-05-13 23:41:44 +0000 UTC" firstStartedPulling="2025-05-13 23:41:44.552668674 +0000 UTC m=+51.189855891" lastFinishedPulling="2025-05-13 23:41:46.083327573 +0000 UTC m=+52.720514830" observedRunningTime="2025-05-13 23:41:46.232616786 +0000 UTC m=+52.869804003" watchObservedRunningTime="2025-05-13 23:41:47.225094803 +0000 UTC m=+53.862282060" May 13 23:41:47.228364 containerd[1493]: time="2025-05-13T23:41:47.228317863Z" level=info msg="Container ea3ac548a76d973e975ff26e793b03c8749f89b5b8c429f176f98be4c1ae5d32: CDI devices from CRI Config.CDIDevices: []" May 13 23:41:47.233135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4067992857.mount: Deactivated successfully. May 13 23:41:47.234997 containerd[1493]: time="2025-05-13T23:41:47.234884823Z" level=info msg="CreateContainer within sandbox \"edebb9d88665748ef05edece1e7a9c343327b8d7f06cec210881cc9391fed95b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ea3ac548a76d973e975ff26e793b03c8749f89b5b8c429f176f98be4c1ae5d32\"" May 13 23:41:47.235611 containerd[1493]: time="2025-05-13T23:41:47.235583787Z" level=info msg="StartContainer for \"ea3ac548a76d973e975ff26e793b03c8749f89b5b8c429f176f98be4c1ae5d32\"" May 13 23:41:47.236477 containerd[1493]: time="2025-05-13T23:41:47.236451512Z" level=info msg="connecting to shim ea3ac548a76d973e975ff26e793b03c8749f89b5b8c429f176f98be4c1ae5d32" address="unix:///run/containerd/s/fc08212b5d08f78f448683b0261bca6a3a69d73581563eafbf17996c0486bc9f" protocol=ttrpc version=3 May 13 23:41:47.263928 systemd[1]: Started cri-containerd-ea3ac548a76d973e975ff26e793b03c8749f89b5b8c429f176f98be4c1ae5d32.scope - libcontainer container ea3ac548a76d973e975ff26e793b03c8749f89b5b8c429f176f98be4c1ae5d32. 
May 13 23:41:47.288615 systemd[1]: cri-containerd-ea3ac548a76d973e975ff26e793b03c8749f89b5b8c429f176f98be4c1ae5d32.scope: Deactivated successfully. May 13 23:41:47.290305 containerd[1493]: time="2025-05-13T23:41:47.290248641Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ea3ac548a76d973e975ff26e793b03c8749f89b5b8c429f176f98be4c1ae5d32\" id:\"ea3ac548a76d973e975ff26e793b03c8749f89b5b8c429f176f98be4c1ae5d32\" pid:3681 exited_at:{seconds:1747179707 nanos:289827598}" May 13 23:41:47.290680 containerd[1493]: time="2025-05-13T23:41:47.290541643Z" level=info msg="received exit event container_id:\"ea3ac548a76d973e975ff26e793b03c8749f89b5b8c429f176f98be4c1ae5d32\" id:\"ea3ac548a76d973e975ff26e793b03c8749f89b5b8c429f176f98be4c1ae5d32\" pid:3681 exited_at:{seconds:1747179707 nanos:289827598}" May 13 23:41:47.293428 containerd[1493]: time="2025-05-13T23:41:47.293398940Z" level=info msg="StartContainer for \"ea3ac548a76d973e975ff26e793b03c8749f89b5b8c429f176f98be4c1ae5d32\" returns successfully" May 13 23:41:47.310578 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea3ac548a76d973e975ff26e793b03c8749f89b5b8c429f176f98be4c1ae5d32-rootfs.mount: Deactivated successfully. May 13 23:41:47.965268 kubelet[1822]: E0513 23:41:47.965204 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:48.217524 containerd[1493]: time="2025-05-13T23:41:48.217419344Z" level=info msg="CreateContainer within sandbox \"edebb9d88665748ef05edece1e7a9c343327b8d7f06cec210881cc9391fed95b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 13 23:41:48.226213 containerd[1493]: time="2025-05-13T23:41:48.226164194Z" level=info msg="Container 73ca95827650638248fc5068051a1d7fc511667ff25523537a48a04d40ace517: CDI devices from CRI Config.CDIDevices: []" May 13 23:41:48.234783 containerd[1493]: time="2025-05-13T23:41:48.234741483Z" level=info msg="CreateContainer within sandbox \"edebb9d88665748ef05edece1e7a9c343327b8d7f06cec210881cc9391fed95b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"73ca95827650638248fc5068051a1d7fc511667ff25523537a48a04d40ace517\"" May 13 23:41:48.235269 containerd[1493]: time="2025-05-13T23:41:48.235228406Z" level=info msg="StartContainer for \"73ca95827650638248fc5068051a1d7fc511667ff25523537a48a04d40ace517\"" May 13 23:41:48.236290 containerd[1493]: time="2025-05-13T23:41:48.236260212Z" level=info msg="connecting to shim 73ca95827650638248fc5068051a1d7fc511667ff25523537a48a04d40ace517" address="unix:///run/containerd/s/fc08212b5d08f78f448683b0261bca6a3a69d73581563eafbf17996c0486bc9f" protocol=ttrpc version=3 May 13 23:41:48.257905 systemd[1]: Started cri-containerd-73ca95827650638248fc5068051a1d7fc511667ff25523537a48a04d40ace517.scope - libcontainer container 73ca95827650638248fc5068051a1d7fc511667ff25523537a48a04d40ace517. 
May 13 23:41:48.292476 containerd[1493]: time="2025-05-13T23:41:48.292433173Z" level=info msg="StartContainer for \"73ca95827650638248fc5068051a1d7fc511667ff25523537a48a04d40ace517\" returns successfully" May 13 23:41:48.352291 containerd[1493]: time="2025-05-13T23:41:48.352251276Z" level=info msg="TaskExit event in podsandbox handler container_id:\"73ca95827650638248fc5068051a1d7fc511667ff25523537a48a04d40ace517\" id:\"814f1517f1a97d19efd6668c01c73004fd5977260aed65a1030a02adbe4101f0\" pid:3747 exited_at:{seconds:1747179708 nanos:351945834}" May 13 23:41:48.568731 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) May 13 23:41:48.966012 kubelet[1822]: E0513 23:41:48.965958 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:49.967603 kubelet[1822]: E0513 23:41:49.967521 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:50.823662 containerd[1493]: time="2025-05-13T23:41:50.823559863Z" level=info msg="TaskExit event in podsandbox handler container_id:\"73ca95827650638248fc5068051a1d7fc511667ff25523537a48a04d40ace517\" id:\"b2af326e2e515bfd373bf77c7d11ec59f422986c5d21ba6bf5e454d28b678cda\" pid:4028 exit_status:1 exited_at:{seconds:1747179710 nanos:823071861}" May 13 23:41:50.967931 kubelet[1822]: E0513 23:41:50.967874 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:51.724022 systemd-networkd[1410]: lxc_health: Link UP May 13 23:41:51.724248 systemd-networkd[1410]: lxc_health: Gained carrier May 13 23:41:51.968191 kubelet[1822]: E0513 23:41:51.968132 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:52.417529 kubelet[1822]: I0513 23:41:52.417387 1822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9w25w" podStartSLOduration=8.417371517 podStartE2EDuration="8.417371517s" podCreationTimestamp="2025-05-13 23:41:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:41:49.241020882 +0000 UTC m=+55.878208139" watchObservedRunningTime="2025-05-13 23:41:52.417371517 +0000 UTC m=+59.054558774" May 13 23:41:52.969201 kubelet[1822]: E0513 23:41:52.969150 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:52.974123 containerd[1493]: time="2025-05-13T23:41:52.974078220Z" level=info msg="TaskExit event in podsandbox handler container_id:\"73ca95827650638248fc5068051a1d7fc511667ff25523537a48a04d40ace517\" id:\"b98c1155562eee88708152c0b07906bdbfc824c5ad994adb7499ab3a923a0cb8\" pid:4288 exited_at:{seconds:1747179712 nanos:973796259}" May 13 23:41:53.402837 systemd-networkd[1410]: lxc_health: Gained IPv6LL May 13 23:41:53.931811 kubelet[1822]: E0513 23:41:53.931762 1822 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:53.951188 containerd[1493]: time="2025-05-13T23:41:53.951144760Z" level=info msg="StopPodSandbox for \"467d5872a81711982eb9e7b9b434da5181a96c73442d5aad350281e41d70c7b9\"" May 13 23:41:53.951322 containerd[1493]: time="2025-05-13T23:41:53.951279681Z" level=info msg="TearDown network for sandbox 
\"467d5872a81711982eb9e7b9b434da5181a96c73442d5aad350281e41d70c7b9\" successfully" May 13 23:41:53.951322 containerd[1493]: time="2025-05-13T23:41:53.951293601Z" level=info msg="StopPodSandbox for \"467d5872a81711982eb9e7b9b434da5181a96c73442d5aad350281e41d70c7b9\" returns successfully" May 13 23:41:53.953731 containerd[1493]: time="2025-05-13T23:41:53.951639202Z" level=info msg="RemovePodSandbox for \"467d5872a81711982eb9e7b9b434da5181a96c73442d5aad350281e41d70c7b9\"" May 13 23:41:53.953731 containerd[1493]: time="2025-05-13T23:41:53.951688923Z" level=info msg="Forcibly stopping sandbox \"467d5872a81711982eb9e7b9b434da5181a96c73442d5aad350281e41d70c7b9\"" May 13 23:41:53.953731 containerd[1493]: time="2025-05-13T23:41:53.951781203Z" level=info msg="TearDown network for sandbox \"467d5872a81711982eb9e7b9b434da5181a96c73442d5aad350281e41d70c7b9\" successfully" May 13 23:41:53.953731 containerd[1493]: time="2025-05-13T23:41:53.952767007Z" level=info msg="Ensure that sandbox 467d5872a81711982eb9e7b9b434da5181a96c73442d5aad350281e41d70c7b9 in task-service has been cleanup successfully" May 13 23:41:53.958332 containerd[1493]: time="2025-05-13T23:41:53.958276150Z" level=info msg="RemovePodSandbox \"467d5872a81711982eb9e7b9b434da5181a96c73442d5aad350281e41d70c7b9\" returns successfully" May 13 23:41:53.970374 kubelet[1822]: E0513 23:41:53.970319 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:54.971253 kubelet[1822]: E0513 23:41:54.971200 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:55.159151 containerd[1493]: time="2025-05-13T23:41:55.158748870Z" level=info msg="TaskExit event in podsandbox handler container_id:\"73ca95827650638248fc5068051a1d7fc511667ff25523537a48a04d40ace517\" id:\"6d883667fd5bdaf6ba093ad96d8ef1b711e57dfd4ac7b2524549be06e89d4213\" pid:4319 exited_at:{seconds:1747179715 nanos:157938908}" May 13 23:41:55.972507 kubelet[1822]: E0513 23:41:55.972443 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:56.973349 kubelet[1822]: E0513 23:41:56.973308 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:57.326741 containerd[1493]: time="2025-05-13T23:41:57.325839927Z" level=info msg="TaskExit event in podsandbox handler container_id:\"73ca95827650638248fc5068051a1d7fc511667ff25523537a48a04d40ace517\" id:\"63b33fc5c8d3de363a8d14a5be05f534f8ced4c0d89555a5004bc186ba39b9a1\" pid:4354 exited_at:{seconds:1747179717 nanos:324442202}" May 13 23:41:57.974837 kubelet[1822]: E0513 23:41:57.974775 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:58.974923 kubelet[1822]: E0513 23:41:58.974875 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:41:59.466315 containerd[1493]: time="2025-05-13T23:41:59.466263726Z" level=info msg="TaskExit event in podsandbox handler container_id:\"73ca95827650638248fc5068051a1d7fc511667ff25523537a48a04d40ace517\" id:\"31947ca219f0994cf6b732a507fd6fa5eaa7a4638cac808f2ecea30ff0f2a202\" pid:4380 exited_at:{seconds:1747179719 nanos:465640525}" May 13 23:41:59.975291 kubelet[1822]: E0513 23:41:59.975220 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" May 13 23:42:00.976407 kubelet[1822]: E0513 23:42:00.976353 1822 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"