May 13 23:39:47.915002 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 13 23:39:47.915024 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue May 13 22:16:18 -00 2025
May 13 23:39:47.915033 kernel: KASLR enabled
May 13 23:39:47.915039 kernel: efi: EFI v2.7 by EDK II
May 13 23:39:47.915044 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb4ff018 ACPI 2.0=0xd93ef018 RNG=0xd93efa18 MEMRESERVE=0xd91e1f18
May 13 23:39:47.915050 kernel: random: crng init done
May 13 23:39:47.915057 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
May 13 23:39:47.915062 kernel: secureboot: Secure boot enabled
May 13 23:39:47.915068 kernel: ACPI: Early table checksum verification disabled
May 13 23:39:47.915074 kernel: ACPI: RSDP 0x00000000D93EF018 000024 (v02 BOCHS )
May 13 23:39:47.915082 kernel: ACPI: XSDT 0x00000000D93EFF18 000064 (v01 BOCHS BXPC 00000001 01000013)
May 13 23:39:47.915088 kernel: ACPI: FACP 0x00000000D93EFB18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:39:47.915093 kernel: ACPI: DSDT 0x00000000D93ED018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:39:47.915099 kernel: ACPI: APIC 0x00000000D93EFC98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:39:47.915107 kernel: ACPI: PPTT 0x00000000D93EF098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:39:47.915114 kernel: ACPI: GTDT 0x00000000D93EF818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:39:47.915120 kernel: ACPI: MCFG 0x00000000D93EFA98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:39:47.915127 kernel: ACPI: SPCR 0x00000000D93EF918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:39:47.915133 kernel: ACPI: DBG2 0x00000000D93EF998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:39:47.915139 kernel: ACPI: IORT 0x00000000D93EF198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:39:47.915146 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 13 23:39:47.915152 kernel: NUMA: Failed to initialise from firmware
May 13 23:39:47.915158 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 13 23:39:47.915164 kernel: NUMA: NODE_DATA [mem 0xdc729800-0xdc72efff]
May 13 23:39:47.915170 kernel: Zone ranges:
May 13 23:39:47.915178 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 13 23:39:47.915184 kernel: DMA32 empty
May 13 23:39:47.915190 kernel: Normal empty
May 13 23:39:47.915197 kernel: Movable zone start for each node
May 13 23:39:47.915209 kernel: Early memory node ranges
May 13 23:39:47.915217 kernel: node 0: [mem 0x0000000040000000-0x00000000d93effff]
May 13 23:39:47.915223 kernel: node 0: [mem 0x00000000d93f0000-0x00000000d972ffff]
May 13 23:39:47.915229 kernel: node 0: [mem 0x00000000d9730000-0x00000000dcbfffff]
May 13 23:39:47.915235 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff]
May 13 23:39:47.915241 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 13 23:39:47.915247 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 13 23:39:47.915253 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 13 23:39:47.915261 kernel: psci: probing for conduit method from ACPI.
May 13 23:39:47.915267 kernel: psci: PSCIv1.1 detected in firmware.
May 13 23:39:47.915273 kernel: psci: Using standard PSCI v0.2 function IDs
May 13 23:39:47.915282 kernel: psci: Trusted OS migration not required
May 13 23:39:47.915288 kernel: psci: SMC Calling Convention v1.1
May 13 23:39:47.915295 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 13 23:39:47.915301 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 13 23:39:47.915309 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 13 23:39:47.915316 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 13 23:39:47.915322 kernel: Detected PIPT I-cache on CPU0
May 13 23:39:47.915329 kernel: CPU features: detected: GIC system register CPU interface
May 13 23:39:47.915335 kernel: CPU features: detected: Hardware dirty bit management
May 13 23:39:47.915341 kernel: CPU features: detected: Spectre-v4
May 13 23:39:47.915348 kernel: CPU features: detected: Spectre-BHB
May 13 23:39:47.915354 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 13 23:39:47.915361 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 13 23:39:47.915368 kernel: CPU features: detected: ARM erratum 1418040
May 13 23:39:47.915376 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 13 23:39:47.915382 kernel: alternatives: applying boot alternatives
May 13 23:39:47.915390 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=3174b2682629aa8ad4069807ed6fd62c10f62266ee1e150a1104f2a2fb6489b5
May 13 23:39:47.915397 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 13 23:39:47.915404 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 13 23:39:47.915410 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 13 23:39:47.915416 kernel: Fallback order for Node 0: 0
May 13 23:39:47.915423 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 13 23:39:47.915429 kernel: Policy zone: DMA
May 13 23:39:47.915436 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 13 23:39:47.915444 kernel: software IO TLB: area num 4.
May 13 23:39:47.915450 kernel: software IO TLB: mapped [mem 0x00000000d2800000-0x00000000d6800000] (64MB)
May 13 23:39:47.915457 kernel: Memory: 2385752K/2572288K available (10368K kernel code, 2186K rwdata, 8100K rodata, 38464K init, 897K bss, 186536K reserved, 0K cma-reserved)
May 13 23:39:47.915464 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 13 23:39:47.915470 kernel: rcu: Preemptible hierarchical RCU implementation.
May 13 23:39:47.915477 kernel: rcu: RCU event tracing is enabled.
May 13 23:39:47.915483 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 13 23:39:47.915490 kernel: Trampoline variant of Tasks RCU enabled.
May 13 23:39:47.915497 kernel: Tracing variant of Tasks RCU enabled.
May 13 23:39:47.915503 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 13 23:39:47.915510 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 13 23:39:47.915516 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 13 23:39:47.915524 kernel: GICv3: 256 SPIs implemented
May 13 23:39:47.915530 kernel: GICv3: 0 Extended SPIs implemented
May 13 23:39:47.915537 kernel: Root IRQ handler: gic_handle_irq
May 13 23:39:47.915543 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 13 23:39:47.915550 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 13 23:39:47.915556 kernel: ITS [mem 0x08080000-0x0809ffff]
May 13 23:39:47.915562 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
May 13 23:39:47.915569 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
May 13 23:39:47.915575 kernel: GICv3: using LPI property table @0x00000000400f0000
May 13 23:39:47.915582 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
May 13 23:39:47.915588 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 13 23:39:47.915596 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 23:39:47.915603 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 13 23:39:47.915609 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 13 23:39:47.915616 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 13 23:39:47.915623 kernel: arm-pv: using stolen time PV
May 13 23:39:47.915629 kernel: Console: colour dummy device 80x25
May 13 23:39:47.915636 kernel: ACPI: Core revision 20230628
May 13 23:39:47.915643 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 13 23:39:47.915649 kernel: pid_max: default: 32768 minimum: 301
May 13 23:39:47.915656 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 13 23:39:47.915664 kernel: landlock: Up and running.
May 13 23:39:47.915671 kernel: SELinux: Initializing.
May 13 23:39:47.915677 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 23:39:47.915684 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 23:39:47.915691 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 13 23:39:47.915697 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 23:39:47.915704 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 23:39:47.915711 kernel: rcu: Hierarchical SRCU implementation.
May 13 23:39:47.915717 kernel: rcu: Max phase no-delay instances is 400.
May 13 23:39:47.915725 kernel: Platform MSI: ITS@0x8080000 domain created
May 13 23:39:47.915732 kernel: PCI/MSI: ITS@0x8080000 domain created
May 13 23:39:47.915738 kernel: Remapping and enabling EFI services.
May 13 23:39:47.915745 kernel: smp: Bringing up secondary CPUs ...
May 13 23:39:47.915752 kernel: Detected PIPT I-cache on CPU1
May 13 23:39:47.915758 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 13 23:39:47.915765 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
May 13 23:39:47.915772 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 23:39:47.915778 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 13 23:39:47.915785 kernel: Detected PIPT I-cache on CPU2
May 13 23:39:47.915793 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 13 23:39:47.915800 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
May 13 23:39:47.915812 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 23:39:47.915820 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 13 23:39:47.915827 kernel: Detected PIPT I-cache on CPU3
May 13 23:39:47.915834 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 13 23:39:47.915842 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
May 13 23:39:47.915849 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 23:39:47.915856 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 13 23:39:47.915863 kernel: smp: Brought up 1 node, 4 CPUs
May 13 23:39:47.915871 kernel: SMP: Total of 4 processors activated.
May 13 23:39:47.915879 kernel: CPU features: detected: 32-bit EL0 Support
May 13 23:39:47.915886 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 13 23:39:47.915893 kernel: CPU features: detected: Common not Private translations
May 13 23:39:47.915900 kernel: CPU features: detected: CRC32 instructions
May 13 23:39:47.915907 kernel: CPU features: detected: Enhanced Virtualization Traps
May 13 23:39:47.915914 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 13 23:39:47.915922 kernel: CPU features: detected: LSE atomic instructions
May 13 23:39:47.915929 kernel: CPU features: detected: Privileged Access Never
May 13 23:39:47.915936 kernel: CPU features: detected: RAS Extension Support
May 13 23:39:47.915943 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 13 23:39:47.915975 kernel: CPU: All CPU(s) started at EL1
May 13 23:39:47.915983 kernel: alternatives: applying system-wide alternatives
May 13 23:39:47.915990 kernel: devtmpfs: initialized
May 13 23:39:47.915997 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 13 23:39:47.916004 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 13 23:39:47.916015 kernel: pinctrl core: initialized pinctrl subsystem
May 13 23:39:47.916022 kernel: SMBIOS 3.0.0 present.
May 13 23:39:47.916030 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 13 23:39:47.916037 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 13 23:39:47.916044 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 13 23:39:47.916052 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 13 23:39:47.916059 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 13 23:39:47.916066 kernel: audit: initializing netlink subsys (disabled)
May 13 23:39:47.916074 kernel: audit: type=2000 audit(0.022:1): state=initialized audit_enabled=0 res=1
May 13 23:39:47.916083 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 13 23:39:47.916090 kernel: cpuidle: using governor menu
May 13 23:39:47.916097 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 13 23:39:47.916104 kernel: ASID allocator initialised with 32768 entries
May 13 23:39:47.916111 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 13 23:39:47.916118 kernel: Serial: AMBA PL011 UART driver
May 13 23:39:47.916125 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 13 23:39:47.916132 kernel: Modules: 0 pages in range for non-PLT usage
May 13 23:39:47.916139 kernel: Modules: 509232 pages in range for PLT usage
May 13 23:39:47.916148 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 13 23:39:47.916155 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 13 23:39:47.916162 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 13 23:39:47.916169 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 13 23:39:47.916176 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 13 23:39:47.916183 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 13 23:39:47.916190 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 13 23:39:47.916197 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 13 23:39:47.916209 kernel: ACPI: Added _OSI(Module Device)
May 13 23:39:47.916219 kernel: ACPI: Added _OSI(Processor Device)
May 13 23:39:47.916226 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 13 23:39:47.916232 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 13 23:39:47.916240 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 13 23:39:47.916246 kernel: ACPI: Interpreter enabled
May 13 23:39:47.916253 kernel: ACPI: Using GIC for interrupt routing
May 13 23:39:47.916260 kernel: ACPI: MCFG table detected, 1 entries
May 13 23:39:47.916267 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 13 23:39:47.916274 kernel: printk: console [ttyAMA0] enabled
May 13 23:39:47.916282 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 13 23:39:47.916422 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 13 23:39:47.916497 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 13 23:39:47.916561 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 13 23:39:47.916624 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 13 23:39:47.916685 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 13 23:39:47.916695 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 13 23:39:47.916704 kernel: PCI host bridge to bus 0000:00
May 13 23:39:47.916777 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 13 23:39:47.916835 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 13 23:39:47.916893 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 13 23:39:47.916960 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 13 23:39:47.917046 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 13 23:39:47.917127 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 13 23:39:47.917196 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 13 23:39:47.917275 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 13 23:39:47.917341 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 13 23:39:47.917408 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 13 23:39:47.917474 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 13 23:39:47.917541 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 13 23:39:47.917606 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 13 23:39:47.917667 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 13 23:39:47.917726 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 13 23:39:47.917736 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 13 23:39:47.917743 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 13 23:39:47.917750 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 13 23:39:47.917757 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 13 23:39:47.917764 kernel: iommu: Default domain type: Translated
May 13 23:39:47.917773 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 13 23:39:47.917780 kernel: efivars: Registered efivars operations
May 13 23:39:47.917788 kernel: vgaarb: loaded
May 13 23:39:47.917795 kernel: clocksource: Switched to clocksource arch_sys_counter
May 13 23:39:47.917802 kernel: VFS: Disk quotas dquot_6.6.0
May 13 23:39:47.917809 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 13 23:39:47.917816 kernel: pnp: PnP ACPI init
May 13 23:39:47.917889 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 13 23:39:47.917899 kernel: pnp: PnP ACPI: found 1 devices
May 13 23:39:47.917908 kernel: NET: Registered PF_INET protocol family
May 13 23:39:47.917916 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 13 23:39:47.917923 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 13 23:39:47.917930 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 13 23:39:47.917937 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 13 23:39:47.917944 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 13 23:39:47.917971 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 13 23:39:47.917979 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 23:39:47.917986 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 23:39:47.917995 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 13 23:39:47.918003 kernel: PCI: CLS 0 bytes, default 64
May 13 23:39:47.918010 kernel: kvm [1]: HYP mode not available
May 13 23:39:47.918017 kernel: Initialise system trusted keyrings
May 13 23:39:47.918024 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 13 23:39:47.918031 kernel: Key type asymmetric registered
May 13 23:39:47.918038 kernel: Asymmetric key parser 'x509' registered
May 13 23:39:47.918046 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 13 23:39:47.918053 kernel: io scheduler mq-deadline registered
May 13 23:39:47.918063 kernel: io scheduler kyber registered
May 13 23:39:47.918070 kernel: io scheduler bfq registered
May 13 23:39:47.918077 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 13 23:39:47.918084 kernel: ACPI: button: Power Button [PWRB]
May 13 23:39:47.918091 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 13 23:39:47.918162 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 13 23:39:47.918172 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 13 23:39:47.918179 kernel: thunder_xcv, ver 1.0
May 13 23:39:47.918186 kernel: thunder_bgx, ver 1.0
May 13 23:39:47.918195 kernel: nicpf, ver 1.0
May 13 23:39:47.918207 kernel: nicvf, ver 1.0
May 13 23:39:47.918291 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 13 23:39:47.918357 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-13T23:39:47 UTC (1747179587)
May 13 23:39:47.918366 kernel: hid: raw HID events driver (C) Jiri Kosina
May 13 23:39:47.918373 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 13 23:39:47.918380 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 13 23:39:47.918387 kernel: watchdog: Hard watchdog permanently disabled
May 13 23:39:47.918397 kernel: NET: Registered PF_INET6 protocol family
May 13 23:39:47.918404 kernel: Segment Routing with IPv6
May 13 23:39:47.918411 kernel: In-situ OAM (IOAM) with IPv6
May 13 23:39:47.918418 kernel: NET: Registered PF_PACKET protocol family
May 13 23:39:47.918425 kernel: Key type dns_resolver registered
May 13 23:39:47.918432 kernel: registered taskstats version 1
May 13 23:39:47.918439 kernel: Loading compiled-in X.509 certificates
May 13 23:39:47.918446 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 568a15bbab977599d8f910f319ba50c03c8a57bd'
May 13 23:39:47.918453 kernel: Key type .fscrypt registered
May 13 23:39:47.918461 kernel: Key type fscrypt-provisioning registered
May 13 23:39:47.918468 kernel: ima: No TPM chip found, activating TPM-bypass!
May 13 23:39:47.918475 kernel: ima: Allocated hash algorithm: sha1
May 13 23:39:47.918482 kernel: ima: No architecture policies found
May 13 23:39:47.918489 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 13 23:39:47.918496 kernel: clk: Disabling unused clocks
May 13 23:39:47.918503 kernel: Freeing unused kernel memory: 38464K
May 13 23:39:47.918510 kernel: Run /init as init process
May 13 23:39:47.918517 kernel: with arguments:
May 13 23:39:47.918526 kernel: /init
May 13 23:39:47.918533 kernel: with environment:
May 13 23:39:47.918541 kernel: HOME=/
May 13 23:39:47.918548 kernel: TERM=linux
May 13 23:39:47.918555 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 13 23:39:47.918564 systemd[1]: Successfully made /usr/ read-only.
May 13 23:39:47.918574 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 13 23:39:47.918584 systemd[1]: Detected virtualization kvm.
May 13 23:39:47.918592 systemd[1]: Detected architecture arm64.
May 13 23:39:47.918599 systemd[1]: Running in initrd.
May 13 23:39:47.918606 systemd[1]: No hostname configured, using default hostname.
May 13 23:39:47.918614 systemd[1]: Hostname set to .
May 13 23:39:47.918622 systemd[1]: Initializing machine ID from VM UUID.
May 13 23:39:47.918629 systemd[1]: Queued start job for default target initrd.target.
May 13 23:39:47.918637 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 23:39:47.918645 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 23:39:47.918655 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 13 23:39:47.918663 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 23:39:47.918671 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 13 23:39:47.918679 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 13 23:39:47.918688 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 13 23:39:47.918696 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 13 23:39:47.918705 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 23:39:47.918713 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 23:39:47.918721 systemd[1]: Reached target paths.target - Path Units.
May 13 23:39:47.918729 systemd[1]: Reached target slices.target - Slice Units.
May 13 23:39:47.918736 systemd[1]: Reached target swap.target - Swaps.
May 13 23:39:47.918744 systemd[1]: Reached target timers.target - Timer Units.
May 13 23:39:47.918752 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 13 23:39:47.918760 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 23:39:47.918768 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 13 23:39:47.918778 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 13 23:39:47.918785 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 23:39:47.918793 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 23:39:47.918801 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 23:39:47.918808 systemd[1]: Reached target sockets.target - Socket Units.
May 13 23:39:47.918816 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 13 23:39:47.918824 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 23:39:47.918832 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 13 23:39:47.918841 systemd[1]: Starting systemd-fsck-usr.service...
May 13 23:39:47.918849 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 23:39:47.918857 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 23:39:47.918865 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:39:47.918873 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 13 23:39:47.918881 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 23:39:47.918891 systemd[1]: Finished systemd-fsck-usr.service.
May 13 23:39:47.918899 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 13 23:39:47.918923 systemd-journald[236]: Collecting audit messages is disabled.
May 13 23:39:47.918944 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:39:47.918963 systemd-journald[236]: Journal started
May 13 23:39:47.918981 systemd-journald[236]: Runtime Journal (/run/log/journal/451d231832674e6f91ee9f0bec4b0c31) is 5.9M, max 47.3M, 41.4M free.
May 13 23:39:47.909582 systemd-modules-load[239]: Inserted module 'overlay'
May 13 23:39:47.925966 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 13 23:39:47.926002 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 23:39:47.928087 systemd-modules-load[239]: Inserted module 'br_netfilter'
May 13 23:39:47.930829 kernel: Bridge firewalling registered
May 13 23:39:47.930850 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 23:39:47.930438 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 23:39:47.933254 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 23:39:47.940160 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 23:39:47.942058 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 23:39:47.946121 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 23:39:47.952315 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:39:47.955075 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 13 23:39:47.961048 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 23:39:47.962768 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 23:39:47.969156 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 23:39:47.972995 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 23:39:47.979619 dracut-cmdline[271]: dracut-dracut-053
May 13 23:39:47.985037 dracut-cmdline[271]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=3174b2682629aa8ad4069807ed6fd62c10f62266ee1e150a1104f2a2fb6489b5
May 13 23:39:48.021233 systemd-resolved[281]: Positive Trust Anchors:
May 13 23:39:48.023621 systemd-resolved[281]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 23:39:48.023656 systemd-resolved[281]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 23:39:48.029813 systemd-resolved[281]: Defaulting to hostname 'linux'.
May 13 23:39:48.030820 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 23:39:48.033020 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 23:39:48.071977 kernel: SCSI subsystem initialized
May 13 23:39:48.077966 kernel: Loading iSCSI transport class v2.0-870.
May 13 23:39:48.084975 kernel: iscsi: registered transport (tcp)
May 13 23:39:48.099980 kernel: iscsi: registered transport (qla4xxx)
May 13 23:39:48.100046 kernel: QLogic iSCSI HBA Driver
May 13 23:39:48.142439 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 13 23:39:48.144776 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 13 23:39:48.171995 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 13 23:39:48.172071 kernel: device-mapper: uevent: version 1.0.3
May 13 23:39:48.172083 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 13 23:39:48.221976 kernel: raid6: neonx8 gen() 15735 MB/s
May 13 23:39:48.238969 kernel: raid6: neonx4 gen() 15791 MB/s
May 13 23:39:48.255963 kernel: raid6: neonx2 gen() 13195 MB/s
May 13 23:39:48.272964 kernel: raid6: neonx1 gen() 10485 MB/s
May 13 23:39:48.289964 kernel: raid6: int64x8 gen() 6783 MB/s
May 13 23:39:48.306964 kernel: raid6: int64x4 gen() 7340 MB/s
May 13 23:39:48.323963 kernel: raid6: int64x2 gen() 6102 MB/s
May 13 23:39:48.341186 kernel: raid6: int64x1 gen() 5047 MB/s
May 13 23:39:48.341209 kernel: raid6: using algorithm neonx4 gen() 15791 MB/s
May 13 23:39:48.359073 kernel: raid6: .... xor() 12428 MB/s, rmw enabled
May 13 23:39:48.359086 kernel: raid6: using neon recovery algorithm
May 13 23:39:48.364339 kernel: xor: measuring software checksum speed
May 13 23:39:48.364352 kernel: 8regs : 21573 MB/sec
May 13 23:39:48.365029 kernel: 32regs : 21641 MB/sec
May 13 23:39:48.366289 kernel: arm64_neon : 27841 MB/sec
May 13 23:39:48.366308 kernel: xor: using function: arm64_neon (27841 MB/sec)
May 13 23:39:48.418976 kernel: Btrfs loaded, zoned=no, fsverity=no
May 13 23:39:48.430407 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 13 23:39:48.432909 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 23:39:48.458009 systemd-udevd[462]: Using default interface naming scheme 'v255'.
May 13 23:39:48.462496 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 23:39:48.465648 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 13 23:39:48.488829 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation
May 13 23:39:48.514032 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 23:39:48.516299 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 23:39:48.570041 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 23:39:48.574484 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 13 23:39:48.596379 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 13 23:39:48.597936 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 23:39:48.599772 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 23:39:48.602293 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 23:39:48.605748 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 13 23:39:48.622813 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 13 23:39:48.622989 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 13 23:39:48.622754 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 13 23:39:48.637095 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 13 23:39:48.637137 kernel: GPT:9289727 != 19775487
May 13 23:39:48.637154 kernel: GPT:Alternate GPT header not at the end of the disk.
May 13 23:39:48.638180 kernel: GPT:9289727 != 19775487
May 13 23:39:48.638221 kernel: GPT: Use GNU Parted to correct GPT errors.
May 13 23:39:48.638965 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 23:39:48.639067 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 23:39:48.639176 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:39:48.642379 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 23:39:48.643534 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 23:39:48.643712 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:39:48.646850 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:39:48.650442 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:39:48.661976 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (523)
May 13 23:39:48.666987 kernel: BTRFS: device fsid ee830c17-a93d-4109-bd12-3fec8ef6763d devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (522)
May 13 23:39:48.673243 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 13 23:39:48.675701 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:39:48.687894 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 13 23:39:48.699717 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 23:39:48.706019 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 13 23:39:48.707275 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 13 23:39:48.711165 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 13 23:39:48.730549 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 23:39:48.737729 disk-uuid[550]: Primary Header is updated.
May 13 23:39:48.737729 disk-uuid[550]: Secondary Entries is updated.
May 13 23:39:48.737729 disk-uuid[550]: Secondary Header is updated.
May 13 23:39:48.743984 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 23:39:48.760914 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:39:49.751972 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 23:39:49.753115 disk-uuid[551]: The operation has completed successfully.
May 13 23:39:49.780073 systemd[1]: disk-uuid.service: Deactivated successfully.
May 13 23:39:49.780188 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 13 23:39:49.806734 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 13 23:39:49.826928 sh[572]: Success
May 13 23:39:49.852989 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 13 23:39:49.891426 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 13 23:39:49.894533 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 13 23:39:49.911577 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 13 23:39:49.921167 kernel: BTRFS info (device dm-0): first mount of filesystem ee830c17-a93d-4109-bd12-3fec8ef6763d
May 13 23:39:49.921218 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 13 23:39:49.922439 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 13 23:39:49.922462 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 13 23:39:49.923382 kernel: BTRFS info (device dm-0): using free space tree
May 13 23:39:49.927913 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 13 23:39:49.929466 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 13 23:39:49.930361 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 13 23:39:49.933542 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 13 23:39:49.953046 kernel: BTRFS info (device vda6): first mount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db
May 13 23:39:49.953116 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 23:39:49.953127 kernel: BTRFS info (device vda6): using free space tree
May 13 23:39:49.955982 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 23:39:49.962040 kernel: BTRFS info (device vda6): last unmount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db
May 13 23:39:49.966022 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 13 23:39:49.968216 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 13 23:39:50.066018 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 23:39:50.072144 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 23:39:50.126276 ignition[664]: Ignition 2.20.0
May 13 23:39:50.126287 ignition[664]: Stage: fetch-offline
May 13 23:39:50.126324 ignition[664]: no configs at "/usr/lib/ignition/base.d"
May 13 23:39:50.126333 ignition[664]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 23:39:50.126539 ignition[664]: parsed url from cmdline: ""
May 13 23:39:50.126543 ignition[664]: no config URL provided
May 13 23:39:50.126548 ignition[664]: reading system config file "/usr/lib/ignition/user.ign"
May 13 23:39:50.126555 ignition[664]: no config at "/usr/lib/ignition/user.ign"
May 13 23:39:50.126580 ignition[664]: op(1): [started] loading QEMU firmware config module
May 13 23:39:50.126585 ignition[664]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 13 23:39:50.134614 systemd-networkd[762]: lo: Link UP
May 13 23:39:50.134618 systemd-networkd[762]: lo: Gained carrier
May 13 23:39:50.135525 systemd-networkd[762]: Enumeration completed
May 13 23:39:50.136020 systemd-networkd[762]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 23:39:50.136023 systemd-networkd[762]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 23:39:50.136316 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 23:39:50.142305 ignition[664]: op(1): [finished] loading QEMU firmware config module
May 13 23:39:50.136852 systemd-networkd[762]: eth0: Link UP
May 13 23:39:50.136855 systemd-networkd[762]: eth0: Gained carrier
May 13 23:39:50.136862 systemd-networkd[762]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 23:39:50.138875 systemd[1]: Reached target network.target - Network.
May 13 23:39:50.161003 systemd-networkd[762]: eth0: DHCPv4 address 10.0.0.42/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 23:39:50.185709 ignition[664]: parsing config with SHA512: 9e4f41f308ebc9dbb9e04aac52fa0e4961eb9979fb224ecb7390e970182d901d364aa36b9f370df6d9fa17391da3f16b0f86c39b072933b60b4be97c696b8994
May 13 23:39:50.192852 unknown[664]: fetched base config from "system"
May 13 23:39:50.192864 unknown[664]: fetched user config from "qemu"
May 13 23:39:50.193477 ignition[664]: fetch-offline: fetch-offline passed
May 13 23:39:50.195798 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 23:39:50.193554 ignition[664]: Ignition finished successfully
May 13 23:39:50.197185 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 13 23:39:50.198051 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 13 23:39:50.226052 ignition[769]: Ignition 2.20.0
May 13 23:39:50.226063 ignition[769]: Stage: kargs
May 13 23:39:50.226241 ignition[769]: no configs at "/usr/lib/ignition/base.d"
May 13 23:39:50.226251 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 23:39:50.230735 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 13 23:39:50.227222 ignition[769]: kargs: kargs passed
May 13 23:39:50.227271 ignition[769]: Ignition finished successfully
May 13 23:39:50.234657 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 13 23:39:50.260219 ignition[778]: Ignition 2.20.0
May 13 23:39:50.260230 ignition[778]: Stage: disks
May 13 23:39:50.260389 ignition[778]: no configs at "/usr/lib/ignition/base.d"
May 13 23:39:50.263335 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 13 23:39:50.260400 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 23:39:50.264626 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 13 23:39:50.261319 ignition[778]: disks: disks passed
May 13 23:39:50.266754 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 13 23:39:50.261369 ignition[778]: Ignition finished successfully
May 13 23:39:50.268929 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 23:39:50.270984 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 23:39:50.272606 systemd[1]: Reached target basic.target - Basic System.
May 13 23:39:50.275531 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 13 23:39:50.308303 systemd-fsck[788]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 13 23:39:50.312623 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 13 23:39:50.315526 systemd[1]: Mounting sysroot.mount - /sysroot...
May 13 23:39:50.374980 kernel: EXT4-fs (vda9): mounted filesystem 9f8d74e6-c079-469f-823a-18a62077a2c7 r/w with ordered data mode. Quota mode: none.
May 13 23:39:50.374990 systemd[1]: Mounted sysroot.mount - /sysroot.
May 13 23:39:50.376396 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 13 23:39:50.378975 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 23:39:50.380839 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 13 23:39:50.381875 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 13 23:39:50.381920 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 13 23:39:50.381967 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 23:39:50.392633 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 13 23:39:50.395975 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 13 23:39:50.399060 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (796)
May 13 23:39:50.401180 kernel: BTRFS info (device vda6): first mount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db
May 13 23:39:50.401212 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 23:39:50.401233 kernel: BTRFS info (device vda6): using free space tree
May 13 23:39:50.405175 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 23:39:50.406122 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 23:39:50.457445 initrd-setup-root[821]: cut: /sysroot/etc/passwd: No such file or directory
May 13 23:39:50.462383 initrd-setup-root[828]: cut: /sysroot/etc/group: No such file or directory
May 13 23:39:50.466298 initrd-setup-root[835]: cut: /sysroot/etc/shadow: No such file or directory
May 13 23:39:50.469788 initrd-setup-root[842]: cut: /sysroot/etc/gshadow: No such file or directory
May 13 23:39:50.547276 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 13 23:39:50.549672 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 13 23:39:50.551471 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 13 23:39:50.572975 kernel: BTRFS info (device vda6): last unmount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db
May 13 23:39:50.591210 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 13 23:39:50.603042 ignition[911]: INFO : Ignition 2.20.0
May 13 23:39:50.603042 ignition[911]: INFO : Stage: mount
May 13 23:39:50.604735 ignition[911]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 23:39:50.604735 ignition[911]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 23:39:50.604735 ignition[911]: INFO : mount: mount passed
May 13 23:39:50.604735 ignition[911]: INFO : Ignition finished successfully
May 13 23:39:50.607981 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 13 23:39:50.610447 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 13 23:39:50.919542 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 13 23:39:50.921006 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 23:39:50.939999 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (923)
May 13 23:39:50.940048 kernel: BTRFS info (device vda6): first mount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db
May 13 23:39:50.940059 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 23:39:50.941555 kernel: BTRFS info (device vda6): using free space tree
May 13 23:39:50.943966 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 23:39:50.945152 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 23:39:50.965925 ignition[940]: INFO : Ignition 2.20.0
May 13 23:39:50.965925 ignition[940]: INFO : Stage: files
May 13 23:39:50.967665 ignition[940]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 23:39:50.967665 ignition[940]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 23:39:50.967665 ignition[940]: DEBUG : files: compiled without relabeling support, skipping
May 13 23:39:50.971253 ignition[940]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 13 23:39:50.971253 ignition[940]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 13 23:39:50.974761 ignition[940]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 13 23:39:50.976182 ignition[940]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 13 23:39:50.976182 ignition[940]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 13 23:39:50.975371 unknown[940]: wrote ssh authorized keys file for user: core
May 13 23:39:50.980237 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 13 23:39:50.980237 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 13 23:39:51.088975 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 13 23:39:51.219092 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 13 23:39:51.219092 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 13 23:39:51.224088 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
May 13 23:39:51.545216 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 13 23:39:51.612270 systemd-networkd[762]: eth0: Gained IPv6LL
May 13 23:39:51.623377 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 13 23:39:51.625378 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 13 23:39:51.625378 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 13 23:39:51.625378 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 13 23:39:51.625378 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 13 23:39:51.625378 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 23:39:51.625378 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 23:39:51.625378 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 23:39:51.625378 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 23:39:51.625378 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 13 23:39:51.625378 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 13 23:39:51.625378 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 13 23:39:51.625378 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 13 23:39:51.625378 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 13 23:39:51.625378 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
May 13 23:39:51.851664 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 13 23:39:52.371962 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 13 23:39:52.371962 ignition[940]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 13 23:39:52.375753 ignition[940]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 23:39:52.375753 ignition[940]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 23:39:52.375753 ignition[940]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 13 23:39:52.375753 ignition[940]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 13 23:39:52.375753 ignition[940]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 23:39:52.375753 ignition[940]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 23:39:52.375753 ignition[940]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 13 23:39:52.375753 ignition[940]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
May 13 23:39:52.400642 ignition[940]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 13 23:39:52.404529 ignition[940]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 13 23:39:52.404529 ignition[940]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
May 13 23:39:52.404529 ignition[940]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
May 13 23:39:52.404529 ignition[940]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
May 13 23:39:52.404529 ignition[940]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
May 13 23:39:52.416472 ignition[940]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 13 23:39:52.416472 ignition[940]: INFO : files: files passed
May 13 23:39:52.416472 ignition[940]: INFO : Ignition finished successfully
May 13 23:39:52.409923 systemd[1]: Finished ignition-files.service - Ignition (files).
May 13 23:39:52.415708 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 13 23:39:52.418131 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 13 23:39:52.425907 systemd[1]: ignition-quench.service: Deactivated successfully.
May 13 23:39:52.426066 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 13 23:39:52.429459 initrd-setup-root-after-ignition[969]: grep: /sysroot/oem/oem-release: No such file or directory
May 13 23:39:52.431282 initrd-setup-root-after-ignition[971]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 23:39:52.431282 initrd-setup-root-after-ignition[971]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 13 23:39:52.434509 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 23:39:52.433019 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 23:39:52.435844 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 13 23:39:52.438877 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 13 23:39:52.475006 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 13 23:39:52.475142 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 13 23:39:52.477408 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 13 23:39:52.479360 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 13 23:39:52.481155 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 13 23:39:52.482168 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 13 23:39:52.506024 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 23:39:52.511342 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 13 23:39:52.536311 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 13 23:39:52.537646 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 23:39:52.539871 systemd[1]: Stopped target timers.target - Timer Units.
May 13 23:39:52.542072 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 13 23:39:52.542232 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 23:39:52.544940 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 13 23:39:52.547274 systemd[1]: Stopped target basic.target - Basic System.
May 13 23:39:52.549257 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 13 23:39:52.551215 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 23:39:52.553278 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 13 23:39:52.555451 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 13 23:39:52.557421 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 23:39:52.559404 systemd[1]: Stopped target sysinit.target - System Initialization.
May 13 23:39:52.561518 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 13 23:39:52.563472 systemd[1]: Stopped target swap.target - Swaps.
May 13 23:39:52.565327 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 13 23:39:52.565470 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 13 23:39:52.568483 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 13 23:39:52.571097 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 23:39:52.573047 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 13 23:39:52.574046 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 23:39:52.575633 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 13 23:39:52.575763 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 13 23:39:52.578830 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 13 23:39:52.578976 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 23:39:52.581216 systemd[1]: Stopped target paths.target - Path Units.
May 13 23:39:52.583111 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 13 23:39:52.584025 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 23:39:52.585431 systemd[1]: Stopped target slices.target - Slice Units.
May 13 23:39:52.587433 systemd[1]: Stopped target sockets.target - Socket Units.
May 13 23:39:52.588993 systemd[1]: iscsid.socket: Deactivated successfully.
May 13 23:39:52.589083 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 13 23:39:52.591156 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 13 23:39:52.591235 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 23:39:52.593412 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 13 23:39:52.593525 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 23:39:52.595246 systemd[1]: ignition-files.service: Deactivated successfully.
May 13 23:39:52.595479 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 13 23:39:52.598274 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 13 23:39:52.600190 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 13 23:39:52.600327 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 23:39:52.614605 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 13 23:39:52.615525 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 13 23:39:52.615660 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 23:39:52.617795 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 13 23:39:52.617905 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 23:39:52.624323 ignition[995]: INFO : Ignition 2.20.0
May 13 23:39:52.624323 ignition[995]: INFO : Stage: umount
May 13 23:39:52.627401 ignition[995]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 23:39:52.627401 ignition[995]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 23:39:52.627401 ignition[995]: INFO : umount: umount passed
May 13 23:39:52.627401 ignition[995]: INFO : Ignition finished successfully
May 13 23:39:52.625090 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 13 23:39:52.625181 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 13 23:39:52.627695 systemd[1]: ignition-mount.service: Deactivated successfully.
May 13 23:39:52.627802 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 13 23:39:52.630246 systemd[1]: Stopped target network.target - Network.
May 13 23:39:52.632628 systemd[1]: ignition-disks.service: Deactivated successfully.
May 13 23:39:52.632692 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 13 23:39:52.635560 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 13 23:39:52.635613 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 13 23:39:52.637374 systemd[1]: ignition-setup.service: Deactivated successfully.
May 13 23:39:52.637423 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 13 23:39:52.639414 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 13 23:39:52.639455 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 13 23:39:52.640904 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 13 23:39:52.642853 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 13 23:39:52.645895 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 13 23:39:52.646621 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 13 23:39:52.646894 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 13 23:39:52.651469 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 13 23:39:52.651837 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 13 23:39:52.652097 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 13 23:39:52.655692 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 13 23:39:52.655902 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 13 23:39:52.656014 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 13 23:39:52.659216 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 13 23:39:52.659264 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 13 23:39:52.660999 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 13 23:39:52.661052 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 13 23:39:52.663657 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 13 23:39:52.664790 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 13 23:39:52.664853 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 23:39:52.667484 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 13 23:39:52.667541 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 13 23:39:52.670275 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 13 23:39:52.670322 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 13 23:39:52.672443 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 13 23:39:52.672490 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 23:39:52.675114 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 23:39:52.678971 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 13 23:39:52.679034 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 13 23:39:52.697429 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 13 23:39:52.697578 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 23:39:52.700028 systemd[1]: network-cleanup.service: Deactivated successfully.
May 13 23:39:52.700195 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 13 23:39:52.701702 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 13 23:39:52.701761 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 13 23:39:52.703388 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 13 23:39:52.703421 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 23:39:52.705181 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 13 23:39:52.705233 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 13 23:39:52.707829 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 13 23:39:52.707879 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 13 23:39:52.710706 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 23:39:52.710758 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:39:52.714537 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 13 23:39:52.715718 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 13 23:39:52.715776 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 23:39:52.718545 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 23:39:52.718590 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:39:52.722527 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 13 23:39:52.722582 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 13 23:39:52.730587 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 13 23:39:52.730691 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 13 23:39:52.733244 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 13 23:39:52.736075 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 13 23:39:52.763453 systemd[1]: Switching root.
May 13 23:39:52.795923 systemd-journald[236]: Journal stopped
May 13 23:39:53.574570 systemd-journald[236]: Received SIGTERM from PID 1 (systemd).
May 13 23:39:53.574629 kernel: SELinux: policy capability network_peer_controls=1
May 13 23:39:53.574642 kernel: SELinux: policy capability open_perms=1
May 13 23:39:53.574652 kernel: SELinux: policy capability extended_socket_class=1
May 13 23:39:53.574661 kernel: SELinux: policy capability always_check_network=0
May 13 23:39:53.574671 kernel: SELinux: policy capability cgroup_seclabel=1
May 13 23:39:53.574684 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 13 23:39:53.574694 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 13 23:39:53.574703 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 13 23:39:53.574713 kernel: audit: type=1403 audit(1747179592.951:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 13 23:39:53.574723 systemd[1]: Successfully loaded SELinux policy in 33.959ms.
May 13 23:39:53.574746 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.074ms.
May 13 23:39:53.574758 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 13 23:39:53.574771 systemd[1]: Detected virtualization kvm.
May 13 23:39:53.574784 systemd[1]: Detected architecture arm64.
May 13 23:39:53.574794 systemd[1]: Detected first boot.
May 13 23:39:53.574805 systemd[1]: Initializing machine ID from VM UUID.
May 13 23:39:53.574815 zram_generator::config[1043]: No configuration found.
May 13 23:39:53.574827 kernel: NET: Registered PF_VSOCK protocol family
May 13 23:39:53.574837 systemd[1]: Populated /etc with preset unit settings.
May 13 23:39:53.574848 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 13 23:39:53.574858 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 13 23:39:53.574869 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 13 23:39:53.574881 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 13 23:39:53.574892 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 13 23:39:53.574902 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 13 23:39:53.574916 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 13 23:39:53.574927 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 13 23:39:53.574938 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 13 23:39:53.574960 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 13 23:39:53.574972 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 13 23:39:53.574985 systemd[1]: Created slice user.slice - User and Session Slice.
May 13 23:39:53.574995 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 23:39:53.575014 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 23:39:53.575027 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 13 23:39:53.575039 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 13 23:39:53.575050 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 13 23:39:53.575065 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 23:39:53.575079 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 13 23:39:53.575090 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 23:39:53.575102 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 13 23:39:53.575113 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 13 23:39:53.575124 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 13 23:39:53.575137 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 13 23:39:53.575148 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 23:39:53.575159 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 23:39:53.575169 systemd[1]: Reached target slices.target - Slice Units.
May 13 23:39:53.575180 systemd[1]: Reached target swap.target - Swaps.
May 13 23:39:53.575192 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 13 23:39:53.575203 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 13 23:39:53.575213 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 13 23:39:53.575224 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 23:39:53.575235 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 23:39:53.575246 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 23:39:53.575256 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 13 23:39:53.575267 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 13 23:39:53.575278 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 13 23:39:53.575290 systemd[1]: Mounting media.mount - External Media Directory...
May 13 23:39:53.575301 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 13 23:39:53.575312 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 13 23:39:53.575322 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 13 23:39:53.575333 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 13 23:39:53.575343 systemd[1]: Reached target machines.target - Containers.
May 13 23:39:53.575354 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 13 23:39:53.575365 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 23:39:53.575377 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 23:39:53.575388 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 13 23:39:53.575398 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 23:39:53.575408 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 23:39:53.575419 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 23:39:53.575430 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 13 23:39:53.575440 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 23:39:53.575451 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 13 23:39:53.575462 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 13 23:39:53.575474 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 13 23:39:53.575484 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 13 23:39:53.575494 kernel: fuse: init (API version 7.39)
May 13 23:39:53.575505 systemd[1]: Stopped systemd-fsck-usr.service.
May 13 23:39:53.575516 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 23:39:53.575527 kernel: loop: module loaded
May 13 23:39:53.575537 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 23:39:53.575547 kernel: ACPI: bus type drm_connector registered
May 13 23:39:53.575559 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 23:39:53.575569 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 13 23:39:53.575580 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 13 23:39:53.575590 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 13 23:39:53.575601 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 23:39:53.575632 systemd-journald[1111]: Collecting audit messages is disabled.
May 13 23:39:53.575654 systemd[1]: verity-setup.service: Deactivated successfully.
May 13 23:39:53.575664 systemd[1]: Stopped verity-setup.service.
May 13 23:39:53.575677 systemd-journald[1111]: Journal started
May 13 23:39:53.575698 systemd-journald[1111]: Runtime Journal (/run/log/journal/451d231832674e6f91ee9f0bec4b0c31) is 5.9M, max 47.3M, 41.4M free.
May 13 23:39:53.361983 systemd[1]: Queued start job for default target multi-user.target.
May 13 23:39:53.377754 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 13 23:39:53.378160 systemd[1]: systemd-journald.service: Deactivated successfully.
May 13 23:39:53.579732 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 23:39:53.580425 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 13 23:39:53.581600 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 13 23:39:53.582805 systemd[1]: Mounted media.mount - External Media Directory.
May 13 23:39:53.583910 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 13 23:39:53.585259 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 13 23:39:53.586473 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 13 23:39:53.587749 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 13 23:39:53.589248 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 23:39:53.590746 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 13 23:39:53.590906 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 13 23:39:53.592313 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 23:39:53.592476 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 23:39:53.595441 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 23:39:53.595614 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 23:39:53.596865 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 23:39:53.597037 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 23:39:53.598648 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 13 23:39:53.598817 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 13 23:39:53.600145 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 23:39:53.600299 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 23:39:53.601671 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 23:39:53.603074 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 13 23:39:53.604807 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 13 23:39:53.606376 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 13 23:39:53.619455 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 13 23:39:53.621884 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 13 23:39:53.623971 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 13 23:39:53.625229 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 13 23:39:53.625268 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 23:39:53.627134 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 13 23:39:53.638095 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 13 23:39:53.640329 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 13 23:39:53.641506 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 23:39:53.642837 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 13 23:39:53.644890 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 13 23:39:53.646178 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 23:39:53.647039 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 13 23:39:53.648471 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 23:39:53.653721 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 23:39:53.654183 systemd-journald[1111]: Time spent on flushing to /var/log/journal/451d231832674e6f91ee9f0bec4b0c31 is 27.203ms for 868 entries.
May 13 23:39:53.654183 systemd-journald[1111]: System Journal (/var/log/journal/451d231832674e6f91ee9f0bec4b0c31) is 8M, max 195.6M, 187.6M free.
May 13 23:39:53.690374 systemd-journald[1111]: Received client request to flush runtime journal.
May 13 23:39:53.690411 kernel: loop0: detected capacity change from 0 to 103832
May 13 23:39:53.656749 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 13 23:39:53.660209 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 13 23:39:53.663356 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 23:39:53.666309 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 13 23:39:53.667682 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 13 23:39:53.669251 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 13 23:39:53.676763 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 13 23:39:53.692819 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 13 23:39:53.694719 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 13 23:39:53.696850 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 13 23:39:53.701162 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 13 23:39:53.704981 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 13 23:39:53.713097 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 23:39:53.721321 udevadm[1167]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 13 23:39:53.728863 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 13 23:39:53.733494 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 23:39:53.736386 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 13 23:39:53.740000 kernel: loop1: detected capacity change from 0 to 194096
May 13 23:39:53.769465 systemd-tmpfiles[1179]: ACLs are not supported, ignoring.
May 13 23:39:53.769479 systemd-tmpfiles[1179]: ACLs are not supported, ignoring.
May 13 23:39:53.773614 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 23:39:53.786967 kernel: loop2: detected capacity change from 0 to 126448
May 13 23:39:53.838983 kernel: loop3: detected capacity change from 0 to 103832
May 13 23:39:53.844116 kernel: loop4: detected capacity change from 0 to 194096
May 13 23:39:53.850846 kernel: loop5: detected capacity change from 0 to 126448
May 13 23:39:53.854445 (sd-merge)[1186]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 13 23:39:53.854830 (sd-merge)[1186]: Merged extensions into '/usr'.
May 13 23:39:53.859793 systemd[1]: Reload requested from client PID 1160 ('systemd-sysext') (unit systemd-sysext.service)...
May 13 23:39:53.859807 systemd[1]: Reloading...
May 13 23:39:53.923980 zram_generator::config[1211]: No configuration found.
May 13 23:39:53.953798 ldconfig[1155]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 13 23:39:54.009118 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 23:39:54.058411 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 13 23:39:54.058699 systemd[1]: Reloading finished in 198 ms.
May 13 23:39:54.078631 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 13 23:39:54.080081 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 13 23:39:54.094419 systemd[1]: Starting ensure-sysext.service...
May 13 23:39:54.096250 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 23:39:54.109087 systemd[1]: Reload requested from client PID 1248 ('systemctl') (unit ensure-sysext.service)...
May 13 23:39:54.109101 systemd[1]: Reloading...
May 13 23:39:54.118754 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 13 23:39:54.119008 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 13 23:39:54.119730 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 13 23:39:54.120382 systemd-tmpfiles[1249]: ACLs are not supported, ignoring.
May 13 23:39:54.120507 systemd-tmpfiles[1249]: ACLs are not supported, ignoring.
May 13 23:39:54.123644 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot.
May 13 23:39:54.123764 systemd-tmpfiles[1249]: Skipping /boot
May 13 23:39:54.133062 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot.
May 13 23:39:54.135117 systemd-tmpfiles[1249]: Skipping /boot
May 13 23:39:54.154994 zram_generator::config[1279]: No configuration found.
May 13 23:39:54.237786 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 23:39:54.287113 systemd[1]: Reloading finished in 177 ms.
May 13 23:39:54.300978 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 13 23:39:54.318120 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 23:39:54.325913 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 13 23:39:54.328186 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 13 23:39:54.332236 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 13 23:39:54.335899 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 23:39:54.338402 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 23:39:54.342258 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 13 23:39:54.346621 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 23:39:54.348404 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 23:39:54.356055 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 23:39:54.359555 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 23:39:54.360823 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 23:39:54.360960 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 23:39:54.364837 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 23:39:54.366988 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 23:39:54.369034 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 13 23:39:54.370920 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 23:39:54.379185 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 23:39:54.383262 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 13 23:39:54.385626 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 23:39:54.385787 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 23:39:54.386765 systemd-udevd[1322]: Using default interface naming scheme 'v255'.
May 13 23:39:54.397443 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 13 23:39:54.404494 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 23:39:54.406262 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 23:39:54.411122 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 23:39:54.416885 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 23:39:54.421810 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 23:39:54.423581 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 23:39:54.423711 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 23:39:54.432722 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 13 23:39:54.435832 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 13 23:39:54.436008 augenrules[1372]: No rules
May 13 23:39:54.436944 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 13 23:39:54.438165 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 23:39:54.449417 systemd[1]: audit-rules.service: Deactivated successfully.
May 13 23:39:54.451003 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 13 23:39:54.452705 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 23:39:54.452890 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 23:39:54.455686 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 23:39:54.455859 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 23:39:54.457569 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 23:39:54.457825 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 23:39:54.460965 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 23:39:54.461147 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 23:39:54.463638 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 13 23:39:54.467897 systemd[1]: Finished ensure-sysext.service.
May 13 23:39:54.478474 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
May 13 23:39:54.481058 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 23:39:54.482062 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 23:39:54.482131 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 23:39:54.484156 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 13 23:39:54.504021 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1348)
May 13 23:39:54.505717 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 13 23:39:54.541070 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 23:39:54.544821 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 13 23:39:54.579615 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 13 23:39:54.588909 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 13 23:39:54.590233 systemd[1]: Reached target time-set.target - System Time Set.
May 13 23:39:54.618572 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:39:54.619295 systemd-resolved[1318]: Positive Trust Anchors:
May 13 23:39:54.622772 systemd-resolved[1318]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 23:39:54.622811 systemd-resolved[1318]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 23:39:54.629864 systemd-networkd[1390]: lo: Link UP
May 13 23:39:54.629879 systemd-networkd[1390]: lo: Gained carrier
May 13 23:39:54.630638 systemd-resolved[1318]: Defaulting to hostname 'linux'.
May 13 23:39:54.630911 systemd-networkd[1390]: Enumeration completed
May 13 23:39:54.631161 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 23:39:54.632416 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 23:39:54.634614 systemd[1]: Reached target network.target - Network.
May 13 23:39:54.635561 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 23:39:54.637938 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 13 23:39:54.640171 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 13 23:39:54.643335 systemd-networkd[1390]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 23:39:54.643346 systemd-networkd[1390]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 23:39:54.645612 systemd-networkd[1390]: eth0: Link UP
May 13 23:39:54.645621 systemd-networkd[1390]: eth0: Gained carrier
May 13 23:39:54.645635 systemd-networkd[1390]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 23:39:54.650908 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 13 23:39:54.653486 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 13 23:39:54.662017 systemd-networkd[1390]: eth0: DHCPv4 address 10.0.0.42/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 23:39:54.666131 systemd-timesyncd[1391]: Network configuration changed, trying to establish connection.
May 13 23:39:55.094332 systemd-timesyncd[1391]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 13 23:39:55.094369 systemd-resolved[1318]: Clock change detected. Flushing caches.
May 13 23:39:55.094383 systemd-timesyncd[1391]: Initial clock synchronization to Tue 2025-05-13 23:39:55.094241 UTC.
May 13 23:39:55.094518 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 13 23:39:55.101143 lvm[1413]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 13 23:39:55.107133 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:39:55.126865 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 13 23:39:55.128365 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 23:39:55.129564 systemd[1]: Reached target sysinit.target - System Initialization. May 13 23:39:55.130695 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 13 23:39:55.131921 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 13 23:39:55.133296 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 13 23:39:55.134635 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 13 23:39:55.135881 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 13 23:39:55.137124 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 13 23:39:55.137160 systemd[1]: Reached target paths.target - Path Units. May 13 23:39:55.138084 systemd[1]: Reached target timers.target - Timer Units. May 13 23:39:55.139921 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 13 23:39:55.142222 systemd[1]: Starting docker.socket - Docker Socket for the API... May 13 23:39:55.145309 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 13 23:39:55.146685 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 13 23:39:55.147859 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 13 23:39:55.153280 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 13 23:39:55.154653 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 13 23:39:55.156899 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 13 23:39:55.158443 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
May 13 23:39:55.159534 systemd[1]: Reached target sockets.target - Socket Units. May 13 23:39:55.160385 systemd[1]: Reached target basic.target - Basic System. May 13 23:39:55.161290 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 13 23:39:55.161323 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 13 23:39:55.162169 systemd[1]: Starting containerd.service - containerd container runtime... May 13 23:39:55.164928 lvm[1421]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 23:39:55.164069 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 13 23:39:55.166159 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 13 23:39:55.170185 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 13 23:39:55.171488 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 13 23:39:55.173116 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 13 23:39:55.179277 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 13 23:39:55.181278 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 13 23:39:55.183382 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 13 23:39:55.186232 jq[1424]: false May 13 23:39:55.190566 systemd[1]: Starting systemd-logind.service - User Login Management... May 13 23:39:55.192502 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 13 23:39:55.192904 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
May 13 23:39:55.193442 systemd[1]: Starting update-engine.service - Update Engine... May 13 23:39:55.196005 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 13 23:39:55.198922 extend-filesystems[1425]: Found loop3 May 13 23:39:55.205431 extend-filesystems[1425]: Found loop4 May 13 23:39:55.205431 extend-filesystems[1425]: Found loop5 May 13 23:39:55.205431 extend-filesystems[1425]: Found vda May 13 23:39:55.205431 extend-filesystems[1425]: Found vda1 May 13 23:39:55.205431 extend-filesystems[1425]: Found vda2 May 13 23:39:55.205431 extend-filesystems[1425]: Found vda3 May 13 23:39:55.205431 extend-filesystems[1425]: Found usr May 13 23:39:55.205431 extend-filesystems[1425]: Found vda4 May 13 23:39:55.205431 extend-filesystems[1425]: Found vda6 May 13 23:39:55.205431 extend-filesystems[1425]: Found vda7 May 13 23:39:55.205431 extend-filesystems[1425]: Found vda9 May 13 23:39:55.205431 extend-filesystems[1425]: Checking size of /dev/vda9 May 13 23:39:55.199428 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 13 23:39:55.204322 dbus-daemon[1423]: [system] SELinux support is enabled May 13 23:39:55.235101 extend-filesystems[1425]: Resized partition /dev/vda9 May 13 23:39:55.207760 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 13 23:39:55.240128 jq[1438]: true May 13 23:39:55.210637 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 13 23:39:55.210818 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 13 23:39:55.211059 systemd[1]: motdgen.service: Deactivated successfully. May 13 23:39:55.240545 jq[1446]: true May 13 23:39:55.211209 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 13 23:39:55.213934 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
May 13 23:39:55.214106 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 13 23:39:55.222318 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 13 23:39:55.222346 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 13 23:39:55.226563 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 13 23:39:55.226582 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 13 23:39:55.241072 (ntainerd)[1456]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 13 23:39:55.247986 extend-filesystems[1455]: resize2fs 1.47.2 (1-Jan-2025) May 13 23:39:55.256942 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1350) May 13 23:39:55.257060 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 13 23:39:55.262619 tar[1445]: linux-arm64/helm May 13 23:39:55.281402 update_engine[1436]: I20250513 23:39:55.280900 1436 main.cc:92] Flatcar Update Engine starting May 13 23:39:55.286093 systemd[1]: Started update-engine.service - Update Engine. May 13 23:39:55.287265 update_engine[1436]: I20250513 23:39:55.286329 1436 update_check_scheduler.cc:74] Next update check in 10m36s May 13 23:39:55.289661 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
May 13 23:39:55.298446 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 13 23:39:55.314152 extend-filesystems[1455]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 13 23:39:55.314152 extend-filesystems[1455]: old_desc_blocks = 1, new_desc_blocks = 1 May 13 23:39:55.314152 extend-filesystems[1455]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 13 23:39:55.334977 extend-filesystems[1425]: Resized filesystem in /dev/vda9 May 13 23:39:55.336660 bash[1477]: Updated "/home/core/.ssh/authorized_keys" May 13 23:39:55.315943 systemd-logind[1433]: Watching system buttons on /dev/input/event0 (Power Button) May 13 23:39:55.316735 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 23:39:55.316922 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 13 23:39:55.318176 systemd-logind[1433]: New seat seat0. May 13 23:39:55.326294 systemd[1]: Started systemd-logind.service - User Login Management. May 13 23:39:55.331848 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 13 23:39:55.335743 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
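The online resize recorded above grew the ext4 filesystem on /dev/vda9 from 553472 to 1864699 blocks at 4 KiB per block. The arithmetic can be sanity-checked directly from the logged numbers:

```python
# Block counts taken from the EXT4-fs / resize2fs messages in the log.
BLOCK_SIZE = 4096  # "(4k) blocks" per the resize2fs output
old_blocks = 553_472
new_blocks = 1_864_699

old_bytes = old_blocks * BLOCK_SIZE
new_bytes = new_blocks * BLOCK_SIZE

print(old_bytes // 2**20)  # 2162 MiB before the resize
print(new_bytes // 2**30)  # 7 (just over 7 GiB after)
```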
May 13 23:39:55.342730 locksmithd[1470]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 23:39:55.486572 containerd[1456]: time="2025-05-13T23:39:55Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 13 23:39:55.487309 containerd[1456]: time="2025-05-13T23:39:55.487265229Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 May 13 23:39:55.498090 containerd[1456]: time="2025-05-13T23:39:55.498036989Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.76µs" May 13 23:39:55.498090 containerd[1456]: time="2025-05-13T23:39:55.498080789Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 13 23:39:55.498184 containerd[1456]: time="2025-05-13T23:39:55.498108349Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 13 23:39:55.498296 containerd[1456]: time="2025-05-13T23:39:55.498273029Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 13 23:39:55.498327 containerd[1456]: time="2025-05-13T23:39:55.498299389Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 13 23:39:55.498346 containerd[1456]: time="2025-05-13T23:39:55.498330349Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 13 23:39:55.498428 containerd[1456]: time="2025-05-13T23:39:55.498386949Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 13 23:39:55.498428 containerd[1456]: time="2025-05-13T23:39:55.498421349Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 13 23:39:55.498765 containerd[1456]: time="2025-05-13T23:39:55.498736309Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 13 23:39:55.498765 containerd[1456]: time="2025-05-13T23:39:55.498759709Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 13 23:39:55.498817 containerd[1456]: time="2025-05-13T23:39:55.498776189Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 13 23:39:55.498817 containerd[1456]: time="2025-05-13T23:39:55.498788749Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 13 23:39:55.498885 containerd[1456]: time="2025-05-13T23:39:55.498865989Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 13 23:39:55.499096 containerd[1456]: time="2025-05-13T23:39:55.499072109Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 13 23:39:55.499132 containerd[1456]: time="2025-05-13T23:39:55.499114109Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 13 23:39:55.499132 containerd[1456]: time="2025-05-13T23:39:55.499126149Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 13 23:39:55.499174 containerd[1456]: time="2025-05-13T23:39:55.499158309Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 13 23:39:55.499765 containerd[1456]: time="2025-05-13T23:39:55.499729829Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 13 23:39:55.499844 containerd[1456]: time="2025-05-13T23:39:55.499825989Z" level=info msg="metadata content store policy set" policy=shared May 13 23:39:55.520466 containerd[1456]: time="2025-05-13T23:39:55.520079109Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 13 23:39:55.520466 containerd[1456]: time="2025-05-13T23:39:55.520160149Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 13 23:39:55.520466 containerd[1456]: time="2025-05-13T23:39:55.520175949Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 13 23:39:55.520466 containerd[1456]: time="2025-05-13T23:39:55.520193629Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 13 23:39:55.520466 containerd[1456]: time="2025-05-13T23:39:55.520206029Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 13 23:39:55.520466 containerd[1456]: time="2025-05-13T23:39:55.520217469Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 13 23:39:55.520466 containerd[1456]: time="2025-05-13T23:39:55.520233189Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 13 23:39:55.520466 containerd[1456]: time="2025-05-13T23:39:55.520245909Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 13 23:39:55.520466 containerd[1456]: time="2025-05-13T23:39:55.520257229Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 13 23:39:55.520466 containerd[1456]: time="2025-05-13T23:39:55.520269309Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 13 23:39:55.520466 containerd[1456]: time="2025-05-13T23:39:55.520279189Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 13 23:39:55.520466 containerd[1456]: time="2025-05-13T23:39:55.520290189Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 13 23:39:55.520740 containerd[1456]: time="2025-05-13T23:39:55.520641349Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 13 23:39:55.520740 containerd[1456]: time="2025-05-13T23:39:55.520672029Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 13 23:39:55.520740 containerd[1456]: time="2025-05-13T23:39:55.520693509Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 13 23:39:55.520740 containerd[1456]: time="2025-05-13T23:39:55.520717829Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 13 23:39:55.520740 containerd[1456]: time="2025-05-13T23:39:55.520730949Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 13 23:39:55.520824 containerd[1456]: time="2025-05-13T23:39:55.520742229Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 13 23:39:55.520824 containerd[1456]: time="2025-05-13T23:39:55.520754109Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 13 23:39:55.520824 containerd[1456]: time="2025-05-13T23:39:55.520765629Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 13 
23:39:55.520824 containerd[1456]: time="2025-05-13T23:39:55.520779509Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 13 23:39:55.520824 containerd[1456]: time="2025-05-13T23:39:55.520793629Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 13 23:39:55.520824 containerd[1456]: time="2025-05-13T23:39:55.520806429Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 13 23:39:55.522102 containerd[1456]: time="2025-05-13T23:39:55.521189829Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 13 23:39:55.522102 containerd[1456]: time="2025-05-13T23:39:55.521220869Z" level=info msg="Start snapshots syncer" May 13 23:39:55.522102 containerd[1456]: time="2025-05-13T23:39:55.521255589Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 13 23:39:55.522242 containerd[1456]: time="2025-05-13T23:39:55.521800989Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 13 23:39:55.522242 containerd[1456]: time="2025-05-13T23:39:55.521864589Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 13 23:39:55.522242 containerd[1456]: time="2025-05-13T23:39:55.522099709Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 13 23:39:55.522364 containerd[1456]: time="2025-05-13T23:39:55.522312509Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 13 23:39:55.522364 containerd[1456]: time="2025-05-13T23:39:55.522340989Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 13 23:39:55.522364 containerd[1456]: time="2025-05-13T23:39:55.522353309Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 13 23:39:55.522433 containerd[1456]: time="2025-05-13T23:39:55.522363989Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 13 23:39:55.522433 containerd[1456]: time="2025-05-13T23:39:55.522377989Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 13 23:39:55.522433 containerd[1456]: time="2025-05-13T23:39:55.522389389Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 13 23:39:55.522433 containerd[1456]: time="2025-05-13T23:39:55.522400069Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 13 23:39:55.522518 containerd[1456]: time="2025-05-13T23:39:55.522435909Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 13 23:39:55.522518 containerd[1456]: time="2025-05-13T23:39:55.522451189Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 13 23:39:55.522518 containerd[1456]: time="2025-05-13T23:39:55.522462869Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 13 23:39:55.522518 containerd[1456]: time="2025-05-13T23:39:55.522510189Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 13 23:39:55.522583 containerd[1456]: time="2025-05-13T23:39:55.522527189Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 13 23:39:55.522583 containerd[1456]: time="2025-05-13T23:39:55.522537069Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 13 23:39:55.522583 containerd[1456]: time="2025-05-13T23:39:55.522546629Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 13 23:39:55.522583 containerd[1456]: time="2025-05-13T23:39:55.522554709Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 13 23:39:55.522583 containerd[1456]: time="2025-05-13T23:39:55.522565189Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 13 23:39:55.522583 containerd[1456]: time="2025-05-13T23:39:55.522577709Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 13 23:39:55.522682 containerd[1456]: time="2025-05-13T23:39:55.522663149Z" level=info msg="runtime interface created" May 13 23:39:55.522682 containerd[1456]: time="2025-05-13T23:39:55.522669189Z" level=info msg="created NRI interface" May 13 23:39:55.522715 containerd[1456]: time="2025-05-13T23:39:55.522681349Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 13 23:39:55.522715 containerd[1456]: time="2025-05-13T23:39:55.522694789Z" level=info msg="Connect containerd service" May 13 23:39:55.522750 containerd[1456]: time="2025-05-13T23:39:55.522722229Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 13 23:39:55.523944 
containerd[1456]: time="2025-05-13T23:39:55.523906589Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 23:39:55.634047 containerd[1456]: time="2025-05-13T23:39:55.633973869Z" level=info msg="Start subscribing containerd event" May 13 23:39:55.634163 containerd[1456]: time="2025-05-13T23:39:55.634104269Z" level=info msg="Start recovering state" May 13 23:39:55.634473 containerd[1456]: time="2025-05-13T23:39:55.634405389Z" level=info msg="Start event monitor" May 13 23:39:55.634996 containerd[1456]: time="2025-05-13T23:39:55.634474109Z" level=info msg="Start cni network conf syncer for default" May 13 23:39:55.635048 containerd[1456]: time="2025-05-13T23:39:55.634999309Z" level=info msg="Start streaming server" May 13 23:39:55.635048 containerd[1456]: time="2025-05-13T23:39:55.635019429Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 13 23:39:55.635048 containerd[1456]: time="2025-05-13T23:39:55.635027949Z" level=info msg="runtime interface starting up..." May 13 23:39:55.635048 containerd[1456]: time="2025-05-13T23:39:55.635034269Z" level=info msg="starting plugins..." May 13 23:39:55.635114 containerd[1456]: time="2025-05-13T23:39:55.635066709Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 13 23:39:55.635658 containerd[1456]: time="2025-05-13T23:39:55.635625389Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 23:39:55.635769 containerd[1456]: time="2025-05-13T23:39:55.635752789Z" level=info msg=serving... address=/run/containerd/containerd.sock May 13 23:39:55.637462 containerd[1456]: time="2025-05-13T23:39:55.635917909Z" level=info msg="containerd successfully booted in 0.150824s" May 13 23:39:55.636021 systemd[1]: Started containerd.service - containerd container runtime. 
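The `error` entry above ("no network config found in /etc/cni/net.d") is expected on first boot before any CNI plugin has been installed; the CRI plugin retries once a config appears. A minimal conflist that would satisfy the loader might look like the following (hedged sketch; the network name and subnet are placeholders):

```json
{
  "cniVersion": "1.0.0",
  "name": "example-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.88.0.0/16" }]]
      }
    }
  ]
}
```

Dropped in as e.g. `/etc/cni/net.d/10-example.conflist`, this would be picked up by the "cni network conf syncer" started in the entries that follow.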
May 13 23:39:55.637828 tar[1445]: linux-arm64/LICENSE May 13 23:39:55.637907 tar[1445]: linux-arm64/README.md May 13 23:39:55.661995 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 13 23:39:56.582533 systemd-networkd[1390]: eth0: Gained IPv6LL May 13 23:39:56.587376 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 13 23:39:56.589930 systemd[1]: Reached target network-online.target - Network is Online. May 13 23:39:56.592626 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 13 23:39:56.595518 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:39:56.607685 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 13 23:39:56.624465 systemd[1]: coreos-metadata.service: Deactivated successfully. May 13 23:39:56.624720 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 13 23:39:56.626320 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 13 23:39:56.630510 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 13 23:39:57.089260 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:39:57.100745 (kubelet)[1535]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:39:57.454783 sshd_keygen[1443]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 23:39:57.474636 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 13 23:39:57.477826 systemd[1]: Starting issuegen.service - Generate /run/issue... May 13 23:39:57.504270 systemd[1]: issuegen.service: Deactivated successfully. May 13 23:39:57.504558 systemd[1]: Finished issuegen.service - Generate /run/issue. May 13 23:39:57.508178 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
May 13 23:39:57.532346 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 13 23:39:57.536243 systemd[1]: Started getty@tty1.service - Getty on tty1. May 13 23:39:57.539672 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 13 23:39:57.541339 systemd[1]: Reached target getty.target - Login Prompts. May 13 23:39:57.542462 systemd[1]: Reached target multi-user.target - Multi-User System. May 13 23:39:57.544473 systemd[1]: Startup finished in 564ms (kernel) + 5.252s (initrd) + 4.200s (userspace) = 10.018s. May 13 23:39:57.606112 kubelet[1535]: E0513 23:39:57.606071 1535 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:39:57.608806 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:39:57.608949 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:39:57.610564 systemd[1]: kubelet.service: Consumed 854ms CPU time, 244M memory peak. May 13 23:40:00.797169 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 13 23:40:00.798329 systemd[1]: Started sshd@0-10.0.0.42:22-10.0.0.1:51648.service - OpenSSH per-connection server daemon (10.0.0.1:51648). May 13 23:40:00.874721 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 51648 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:40:00.876560 sshd-session[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:40:00.884249 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 13 23:40:00.885209 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
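The kubelet exit above (open /var/lib/kubelet/config.yaml: no such file or directory) is the normal pre-bootstrap state: that file is written by `kubeadm init`/`kubeadm join`, and the unit restarts until it exists. A hedged sketch of the minimal shape of the file the unit is waiting for (values are illustrative, not this node's eventual config):

```yaml
# /var/lib/kubelet/config.yaml -- normally generated by kubeadm; shown only
# to illustrate the file whose absence causes the exit logged above.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
```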
May 13 23:40:00.890522 systemd-logind[1433]: New session 1 of user core. May 13 23:40:00.912177 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 13 23:40:00.914606 systemd[1]: Starting user@500.service - User Manager for UID 500... May 13 23:40:00.935559 (systemd)[1570]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 23:40:00.937767 systemd-logind[1433]: New session c1 of user core. May 13 23:40:01.047001 systemd[1570]: Queued start job for default target default.target. May 13 23:40:01.063470 systemd[1570]: Created slice app.slice - User Application Slice. May 13 23:40:01.063496 systemd[1570]: Reached target paths.target - Paths. May 13 23:40:01.063535 systemd[1570]: Reached target timers.target - Timers. May 13 23:40:01.064803 systemd[1570]: Starting dbus.socket - D-Bus User Message Bus Socket... May 13 23:40:01.073949 systemd[1570]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 13 23:40:01.074010 systemd[1570]: Reached target sockets.target - Sockets. May 13 23:40:01.074046 systemd[1570]: Reached target basic.target - Basic System. May 13 23:40:01.074074 systemd[1570]: Reached target default.target - Main User Target. May 13 23:40:01.074098 systemd[1570]: Startup finished in 130ms. May 13 23:40:01.074306 systemd[1]: Started user@500.service - User Manager for UID 500. May 13 23:40:01.075753 systemd[1]: Started session-1.scope - Session 1 of User core. May 13 23:40:01.138791 systemd[1]: Started sshd@1-10.0.0.42:22-10.0.0.1:51658.service - OpenSSH per-connection server daemon (10.0.0.1:51658). May 13 23:40:01.200129 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 51658 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:40:01.201230 sshd-session[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:40:01.205465 systemd-logind[1433]: New session 2 of user core. 
May 13 23:40:01.216638 systemd[1]: Started session-2.scope - Session 2 of User core. May 13 23:40:01.268505 sshd[1583]: Connection closed by 10.0.0.1 port 51658 May 13 23:40:01.268334 sshd-session[1581]: pam_unix(sshd:session): session closed for user core May 13 23:40:01.279423 systemd[1]: sshd@1-10.0.0.42:22-10.0.0.1:51658.service: Deactivated successfully. May 13 23:40:01.280867 systemd[1]: session-2.scope: Deactivated successfully. May 13 23:40:01.282897 systemd-logind[1433]: Session 2 logged out. Waiting for processes to exit. May 13 23:40:01.284697 systemd[1]: Started sshd@2-10.0.0.42:22-10.0.0.1:51670.service - OpenSSH per-connection server daemon (10.0.0.1:51670). May 13 23:40:01.285470 systemd-logind[1433]: Removed session 2. May 13 23:40:01.335810 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 51670 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:40:01.337160 sshd-session[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:40:01.341263 systemd-logind[1433]: New session 3 of user core. May 13 23:40:01.352563 systemd[1]: Started session-3.scope - Session 3 of User core. May 13 23:40:01.399894 sshd[1591]: Connection closed by 10.0.0.1 port 51670 May 13 23:40:01.400401 sshd-session[1588]: pam_unix(sshd:session): session closed for user core May 13 23:40:01.420592 systemd[1]: sshd@2-10.0.0.42:22-10.0.0.1:51670.service: Deactivated successfully. May 13 23:40:01.421903 systemd[1]: session-3.scope: Deactivated successfully. May 13 23:40:01.422614 systemd-logind[1433]: Session 3 logged out. Waiting for processes to exit. May 13 23:40:01.424284 systemd[1]: Started sshd@3-10.0.0.42:22-10.0.0.1:51684.service - OpenSSH per-connection server daemon (10.0.0.1:51684). May 13 23:40:01.425090 systemd-logind[1433]: Removed session 3. 
May 13 23:40:01.476892 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 51684 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 13 23:40:01.478048 sshd-session[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:40:01.482341 systemd-logind[1433]: New session 4 of user core.
May 13 23:40:01.492551 systemd[1]: Started session-4.scope - Session 4 of User core.
May 13 23:40:01.543055 sshd[1599]: Connection closed by 10.0.0.1 port 51684
May 13 23:40:01.543497 sshd-session[1596]: pam_unix(sshd:session): session closed for user core
May 13 23:40:01.552313 systemd[1]: sshd@3-10.0.0.42:22-10.0.0.1:51684.service: Deactivated successfully.
May 13 23:40:01.553703 systemd[1]: session-4.scope: Deactivated successfully.
May 13 23:40:01.556560 systemd-logind[1433]: Session 4 logged out. Waiting for processes to exit.
May 13 23:40:01.557639 systemd[1]: Started sshd@4-10.0.0.42:22-10.0.0.1:51694.service - OpenSSH per-connection server daemon (10.0.0.1:51694).
May 13 23:40:01.558349 systemd-logind[1433]: Removed session 4.
May 13 23:40:01.605479 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 51694 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 13 23:40:01.606700 sshd-session[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:40:01.610672 systemd-logind[1433]: New session 5 of user core.
May 13 23:40:01.621560 systemd[1]: Started session-5.scope - Session 5 of User core.
May 13 23:40:01.677164 sudo[1608]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 13 23:40:01.677469 sudo[1608]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 23:40:01.691170 sudo[1608]: pam_unix(sudo:session): session closed for user root
May 13 23:40:01.692695 sshd[1607]: Connection closed by 10.0.0.1 port 51694
May 13 23:40:01.693169 sshd-session[1604]: pam_unix(sshd:session): session closed for user core
May 13 23:40:01.702776 systemd[1]: sshd@4-10.0.0.42:22-10.0.0.1:51694.service: Deactivated successfully.
May 13 23:40:01.704179 systemd[1]: session-5.scope: Deactivated successfully.
May 13 23:40:01.704875 systemd-logind[1433]: Session 5 logged out. Waiting for processes to exit.
May 13 23:40:01.706589 systemd[1]: Started sshd@5-10.0.0.42:22-10.0.0.1:51698.service - OpenSSH per-connection server daemon (10.0.0.1:51698).
May 13 23:40:01.707720 systemd-logind[1433]: Removed session 5.
May 13 23:40:01.763817 sshd[1613]: Accepted publickey for core from 10.0.0.1 port 51698 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 13 23:40:01.765060 sshd-session[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:40:01.768801 systemd-logind[1433]: New session 6 of user core.
May 13 23:40:01.779548 systemd[1]: Started session-6.scope - Session 6 of User core.
May 13 23:40:01.832322 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 13 23:40:01.832653 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 23:40:01.836090 sudo[1618]: pam_unix(sudo:session): session closed for user root
May 13 23:40:01.841204 sudo[1617]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 13 23:40:01.841537 sudo[1617]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 23:40:01.850823 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 13 23:40:01.883850 augenrules[1640]: No rules
May 13 23:40:01.885006 systemd[1]: audit-rules.service: Deactivated successfully.
May 13 23:40:01.885224 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 13 23:40:01.886082 sudo[1617]: pam_unix(sudo:session): session closed for user root
May 13 23:40:01.887295 sshd[1616]: Connection closed by 10.0.0.1 port 51698
May 13 23:40:01.888630 sshd-session[1613]: pam_unix(sshd:session): session closed for user core
May 13 23:40:01.898523 systemd[1]: sshd@5-10.0.0.42:22-10.0.0.1:51698.service: Deactivated successfully.
May 13 23:40:01.899971 systemd[1]: session-6.scope: Deactivated successfully.
May 13 23:40:01.900660 systemd-logind[1433]: Session 6 logged out. Waiting for processes to exit.
May 13 23:40:01.902329 systemd[1]: Started sshd@6-10.0.0.42:22-10.0.0.1:51712.service - OpenSSH per-connection server daemon (10.0.0.1:51712).
May 13 23:40:01.903109 systemd-logind[1433]: Removed session 6.
May 13 23:40:01.960065 sshd[1648]: Accepted publickey for core from 10.0.0.1 port 51712 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 13 23:40:01.961202 sshd-session[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:40:01.965442 systemd-logind[1433]: New session 7 of user core.
May 13 23:40:01.979565 systemd[1]: Started session-7.scope - Session 7 of User core.
May 13 23:40:02.031820 sudo[1652]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 13 23:40:02.032101 sudo[1652]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 23:40:02.362228 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 13 23:40:02.375696 (dockerd)[1673]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 13 23:40:02.620787 dockerd[1673]: time="2025-05-13T23:40:02.620665069Z" level=info msg="Starting up"
May 13 23:40:02.623417 dockerd[1673]: time="2025-05-13T23:40:02.623377509Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 13 23:40:02.721594 dockerd[1673]: time="2025-05-13T23:40:02.721419189Z" level=info msg="Loading containers: start."
May 13 23:40:02.873449 kernel: Initializing XFRM netlink socket
May 13 23:40:02.934285 systemd-networkd[1390]: docker0: Link UP
May 13 23:40:02.994774 dockerd[1673]: time="2025-05-13T23:40:02.994680149Z" level=info msg="Loading containers: done."
May 13 23:40:03.006959 dockerd[1673]: time="2025-05-13T23:40:03.006900909Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 13 23:40:03.007112 dockerd[1673]: time="2025-05-13T23:40:03.006990549Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1
May 13 23:40:03.007177 dockerd[1673]: time="2025-05-13T23:40:03.007159189Z" level=info msg="Daemon has completed initialization"
May 13 23:40:03.035822 dockerd[1673]: time="2025-05-13T23:40:03.035762869Z" level=info msg="API listen on /run/docker.sock"
May 13 23:40:03.036523 systemd[1]: Started docker.service - Docker Application Container Engine.
May 13 23:40:03.734394 containerd[1456]: time="2025-05-13T23:40:03.734340149Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\""
May 13 23:40:04.386205 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount783641264.mount: Deactivated successfully.
May 13 23:40:05.765602 containerd[1456]: time="2025-05-13T23:40:05.765527789Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:40:05.765959 containerd[1456]: time="2025-05-13T23:40:05.765788269Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794152"
May 13 23:40:05.766745 containerd[1456]: time="2025-05-13T23:40:05.766713949Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:40:05.769492 containerd[1456]: time="2025-05-13T23:40:05.769458829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:40:05.771085 containerd[1456]: time="2025-05-13T23:40:05.771042629Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 2.03665984s"
May 13 23:40:05.771122 containerd[1456]: time="2025-05-13T23:40:05.771090189Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\""
May 13 23:40:05.786681 containerd[1456]: time="2025-05-13T23:40:05.786646389Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
May 13 23:40:07.615792 containerd[1456]: time="2025-05-13T23:40:07.615735829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:40:07.616808 containerd[1456]: time="2025-05-13T23:40:07.616751669Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855552"
May 13 23:40:07.617440 containerd[1456]: time="2025-05-13T23:40:07.617398949Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:40:07.621273 containerd[1456]: time="2025-05-13T23:40:07.621230069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:40:07.623093 containerd[1456]: time="2025-05-13T23:40:07.622860069Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 1.83617396s"
May 13 23:40:07.623093 containerd[1456]: time="2025-05-13T23:40:07.622901389Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\""
May 13 23:40:07.639779 containerd[1456]: time="2025-05-13T23:40:07.639743349Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
May 13 23:40:07.859351 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 13 23:40:07.861172 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:40:07.986568 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:40:07.990620 (kubelet)[1970]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 23:40:08.097471 kubelet[1970]: E0513 23:40:08.097391 1970 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 23:40:08.100789 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 23:40:08.100933 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 23:40:08.101430 systemd[1]: kubelet.service: Consumed 155ms CPU time, 95.8M memory peak.
May 13 23:40:08.750073 containerd[1456]: time="2025-05-13T23:40:08.750029349Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:40:08.751459 containerd[1456]: time="2025-05-13T23:40:08.750664709Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263947"
May 13 23:40:08.752069 containerd[1456]: time="2025-05-13T23:40:08.752014309Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:40:08.754857 containerd[1456]: time="2025-05-13T23:40:08.754808709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:40:08.755857 containerd[1456]: time="2025-05-13T23:40:08.755809589Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 1.11601648s"
May 13 23:40:08.755924 containerd[1456]: time="2025-05-13T23:40:08.755859749Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\""
May 13 23:40:08.771140 containerd[1456]: time="2025-05-13T23:40:08.771107309Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
May 13 23:40:09.956616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3620164298.mount: Deactivated successfully.
May 13 23:40:10.169899 containerd[1456]: time="2025-05-13T23:40:10.169680749Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:40:10.172439 containerd[1456]: time="2025-05-13T23:40:10.172363589Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775707"
May 13 23:40:10.173336 containerd[1456]: time="2025-05-13T23:40:10.173302909Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:40:10.175115 containerd[1456]: time="2025-05-13T23:40:10.175064269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:40:10.175801 containerd[1456]: time="2025-05-13T23:40:10.175629669Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.40448268s"
May 13 23:40:10.175801 containerd[1456]: time="2025-05-13T23:40:10.175671309Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\""
May 13 23:40:10.191047 containerd[1456]: time="2025-05-13T23:40:10.191010509Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 13 23:40:10.812601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1150820492.mount: Deactivated successfully.
May 13 23:40:11.534632 containerd[1456]: time="2025-05-13T23:40:11.534577109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:40:11.535142 containerd[1456]: time="2025-05-13T23:40:11.535079829Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
May 13 23:40:11.536005 containerd[1456]: time="2025-05-13T23:40:11.535950429Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:40:11.538837 containerd[1456]: time="2025-05-13T23:40:11.538799229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:40:11.539986 containerd[1456]: time="2025-05-13T23:40:11.539850149Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.34879868s"
May 13 23:40:11.539986 containerd[1456]: time="2025-05-13T23:40:11.539881749Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
May 13 23:40:11.555399 containerd[1456]: time="2025-05-13T23:40:11.555310469Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
May 13 23:40:11.961692 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2174464148.mount: Deactivated successfully.
May 13 23:40:11.971187 containerd[1456]: time="2025-05-13T23:40:11.971126909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:40:11.971908 containerd[1456]: time="2025-05-13T23:40:11.971861429Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
May 13 23:40:11.972926 containerd[1456]: time="2025-05-13T23:40:11.972881029Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:40:11.974580 containerd[1456]: time="2025-05-13T23:40:11.974552669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:40:11.975236 containerd[1456]: time="2025-05-13T23:40:11.975176709Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 419.832ms"
May 13 23:40:11.975236 containerd[1456]: time="2025-05-13T23:40:11.975209109Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
May 13 23:40:11.990734 containerd[1456]: time="2025-05-13T23:40:11.990699949Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
May 13 23:40:12.499321 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount268153136.mount: Deactivated successfully.
May 13 23:40:14.712437 containerd[1456]: time="2025-05-13T23:40:14.712336189Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:40:14.713444 containerd[1456]: time="2025-05-13T23:40:14.713144029Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474"
May 13 23:40:14.714250 containerd[1456]: time="2025-05-13T23:40:14.714215789Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:40:14.716862 containerd[1456]: time="2025-05-13T23:40:14.716815949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:40:14.718034 containerd[1456]: time="2025-05-13T23:40:14.717991749Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.72725216s"
May 13 23:40:14.718104 containerd[1456]: time="2025-05-13T23:40:14.718036749Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
May 13 23:40:18.351340 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 13 23:40:18.353255 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:40:18.465858 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:40:18.469627 (kubelet)[2218]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 23:40:18.508671 kubelet[2218]: E0513 23:40:18.508619 2218 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 23:40:18.511377 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 23:40:18.511663 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 23:40:18.512237 systemd[1]: kubelet.service: Consumed 134ms CPU time, 94.9M memory peak.
May 13 23:40:19.270861 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:40:19.271252 systemd[1]: kubelet.service: Consumed 134ms CPU time, 94.9M memory peak.
May 13 23:40:19.273564 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:40:19.297973 systemd[1]: Reload requested from client PID 2233 ('systemctl') (unit session-7.scope)...
May 13 23:40:19.297989 systemd[1]: Reloading...
May 13 23:40:19.363536 zram_generator::config[2280]: No configuration found.
May 13 23:40:19.476941 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 23:40:19.555035 systemd[1]: Reloading finished in 256 ms.
May 13 23:40:19.608894 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:40:19.611085 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:40:19.612703 systemd[1]: kubelet.service: Deactivated successfully.
May 13 23:40:19.612913 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:40:19.612952 systemd[1]: kubelet.service: Consumed 90ms CPU time, 82.3M memory peak.
May 13 23:40:19.614405 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:40:19.739931 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:40:19.743854 (kubelet)[2324]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 13 23:40:19.801966 kubelet[2324]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 23:40:19.801966 kubelet[2324]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 13 23:40:19.801966 kubelet[2324]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 23:40:19.802354 kubelet[2324]: I0513 23:40:19.802008 2324 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 13 23:40:20.553708 kubelet[2324]: I0513 23:40:20.553663 2324 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
May 13 23:40:20.553708 kubelet[2324]: I0513 23:40:20.553697 2324 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 13 23:40:20.553935 kubelet[2324]: I0513 23:40:20.553919 2324 server.go:927] "Client rotation is on, will bootstrap in background"
May 13 23:40:20.590855 kubelet[2324]: I0513 23:40:20.590771 2324 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 13 23:40:20.591077 kubelet[2324]: E0513 23:40:20.591056 2324 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.42:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.42:6443: connect: connection refused
May 13 23:40:20.598554 kubelet[2324]: I0513 23:40:20.598517 2324 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 13 23:40:20.602443 kubelet[2324]: I0513 23:40:20.602184 2324 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 13 23:40:20.602647 kubelet[2324]: I0513 23:40:20.602444 2324 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
May 13 23:40:20.602778 kubelet[2324]: I0513 23:40:20.602758 2324 topology_manager.go:138] "Creating topology manager with none policy"
May 13 23:40:20.602778 kubelet[2324]: I0513 23:40:20.602772 2324 container_manager_linux.go:301] "Creating device plugin manager"
May 13 23:40:20.603228 kubelet[2324]: I0513 23:40:20.603205 2324 state_mem.go:36] "Initialized new in-memory state store"
May 13 23:40:20.605374 kubelet[2324]: I0513 23:40:20.605349 2324 kubelet.go:400] "Attempting to sync node with API server"
May 13 23:40:20.605399 kubelet[2324]: I0513 23:40:20.605375 2324 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
May 13 23:40:20.606019 kubelet[2324]: I0513 23:40:20.606004 2324 kubelet.go:312] "Adding apiserver pod source"
May 13 23:40:20.606545 kubelet[2324]: I0513 23:40:20.606387 2324 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 13 23:40:20.606545 kubelet[2324]: W0513 23:40:20.606468 2324 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused
May 13 23:40:20.606545 kubelet[2324]: E0513 23:40:20.606523 2324 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused
May 13 23:40:20.606822 kubelet[2324]: W0513 23:40:20.606773 2324 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.42:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused
May 13 23:40:20.606877 kubelet[2324]: E0513 23:40:20.606823 2324 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.42:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused
May 13 23:40:20.608139 kubelet[2324]: I0513 23:40:20.608121 2324 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1"
May 13 23:40:20.608513 kubelet[2324]: I0513 23:40:20.608499 2324 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 13 23:40:20.608721 kubelet[2324]: W0513 23:40:20.608711 2324 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 13 23:40:20.609629 kubelet[2324]: I0513 23:40:20.609598 2324 server.go:1264] "Started kubelet"
May 13 23:40:20.611402 kubelet[2324]: I0513 23:40:20.611222 2324 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 13 23:40:20.612910 kubelet[2324]: I0513 23:40:20.612861 2324 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 13 23:40:20.613298 kubelet[2324]: E0513 23:40:20.612975 2324 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.42:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.42:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f3a9ba53c434d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 23:40:20.609573709 +0000 UTC m=+0.862557961,LastTimestamp:2025-05-13 23:40:20.609573709 +0000 UTC m=+0.862557961,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 13 23:40:20.613393 kubelet[2324]: I0513 23:40:20.613343 2324 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 13 23:40:20.613749 kubelet[2324]: I0513 23:40:20.613630 2324 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 13 23:40:20.613972 kubelet[2324]: I0513 23:40:20.613950 2324 volume_manager.go:291] "Starting Kubelet Volume Manager"
May 13 23:40:20.614514 kubelet[2324]: I0513 23:40:20.614007 2324 server.go:455] "Adding debug handlers to kubelet server"
May 13 23:40:20.614514 kubelet[2324]: I0513 23:40:20.614038 2324 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 13 23:40:20.615049 kubelet[2324]: I0513 23:40:20.615001 2324 reconciler.go:26] "Reconciler: start to sync state"
May 13 23:40:20.615503 kubelet[2324]: W0513 23:40:20.615445 2324 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused
May 13 23:40:20.615503 kubelet[2324]: E0513 23:40:20.615507 2324 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused
May 13 23:40:20.618059 kubelet[2324]: E0513 23:40:20.617987 2324 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.42:6443: connect: connection refused" interval="200ms"
May 13 23:40:20.618493 kubelet[2324]: E0513 23:40:20.618461 2324 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 13 23:40:20.618601 kubelet[2324]: I0513 23:40:20.618565 2324 factory.go:221] Registration of the systemd container factory successfully
May 13 23:40:20.618747 kubelet[2324]: I0513 23:40:20.618699 2324 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 13 23:40:20.619950 kubelet[2324]: I0513 23:40:20.619902 2324 factory.go:221] Registration of the containerd container factory successfully
May 13 23:40:20.628073 kubelet[2324]: I0513 23:40:20.627929 2324 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 13 23:40:20.629326 kubelet[2324]: I0513 23:40:20.629294 2324 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 13 23:40:20.629507 kubelet[2324]: I0513 23:40:20.629478 2324 status_manager.go:217] "Starting to sync pod status with apiserver"
May 13 23:40:20.629507 kubelet[2324]: I0513 23:40:20.629503 2324 kubelet.go:2337] "Starting kubelet main sync loop"
May 13 23:40:20.629581 kubelet[2324]: E0513 23:40:20.629545 2324 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 13 23:40:20.631181 kubelet[2324]: W0513 23:40:20.631112 2324 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused
May 13 23:40:20.631181 kubelet[2324]: E0513 23:40:20.631175 2324 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 
10.0.0.42:6443: connect: connection refused May 13 23:40:20.632611 kubelet[2324]: I0513 23:40:20.632113 2324 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 23:40:20.632611 kubelet[2324]: I0513 23:40:20.632136 2324 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 23:40:20.632611 kubelet[2324]: I0513 23:40:20.632156 2324 state_mem.go:36] "Initialized new in-memory state store" May 13 23:40:20.715237 kubelet[2324]: I0513 23:40:20.715186 2324 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 23:40:20.715610 kubelet[2324]: E0513 23:40:20.715581 2324 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.42:6443/api/v1/nodes\": dial tcp 10.0.0.42:6443: connect: connection refused" node="localhost" May 13 23:40:20.729909 kubelet[2324]: E0513 23:40:20.729872 2324 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 13 23:40:20.737647 kubelet[2324]: I0513 23:40:20.737607 2324 policy_none.go:49] "None policy: Start" May 13 23:40:20.738348 kubelet[2324]: I0513 23:40:20.738316 2324 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 23:40:20.738453 kubelet[2324]: I0513 23:40:20.738438 2324 state_mem.go:35] "Initializing new in-memory state store" May 13 23:40:20.744246 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 13 23:40:20.764522 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 13 23:40:20.767823 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 13 23:40:20.778432 kubelet[2324]: I0513 23:40:20.778360 2324 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 23:40:20.778672 kubelet[2324]: I0513 23:40:20.778631 2324 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 23:40:20.778873 kubelet[2324]: I0513 23:40:20.778755 2324 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 23:40:20.780995 kubelet[2324]: E0513 23:40:20.780915 2324 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 13 23:40:20.819192 kubelet[2324]: E0513 23:40:20.819085 2324 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.42:6443: connect: connection refused" interval="400ms" May 13 23:40:20.917378 kubelet[2324]: I0513 23:40:20.917324 2324 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 23:40:20.917711 kubelet[2324]: E0513 23:40:20.917683 2324 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.42:6443/api/v1/nodes\": dial tcp 10.0.0.42:6443: connect: connection refused" node="localhost" May 13 23:40:20.931011 kubelet[2324]: I0513 23:40:20.930876 2324 topology_manager.go:215] "Topology Admit Handler" podUID="4495f0f57a9cfb49c4b97c7cc8abc723" podNamespace="kube-system" podName="kube-apiserver-localhost" May 13 23:40:20.932209 kubelet[2324]: I0513 23:40:20.932184 2324 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 13 23:40:20.933247 kubelet[2324]: I0513 23:40:20.933183 2324 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" 
podNamespace="kube-system" podName="kube-scheduler-localhost" May 13 23:40:20.939639 systemd[1]: Created slice kubepods-burstable-pod4495f0f57a9cfb49c4b97c7cc8abc723.slice - libcontainer container kubepods-burstable-pod4495f0f57a9cfb49c4b97c7cc8abc723.slice. May 13 23:40:20.962350 systemd[1]: Created slice kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice - libcontainer container kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice. May 13 23:40:20.973214 systemd[1]: Created slice kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice - libcontainer container kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice. May 13 23:40:21.017157 kubelet[2324]: I0513 23:40:21.017104 2324 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4495f0f57a9cfb49c4b97c7cc8abc723-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4495f0f57a9cfb49c4b97c7cc8abc723\") " pod="kube-system/kube-apiserver-localhost" May 13 23:40:21.017157 kubelet[2324]: I0513 23:40:21.017147 2324 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:40:21.017157 kubelet[2324]: I0513 23:40:21.017168 2324 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:40:21.017323 kubelet[2324]: I0513 23:40:21.017184 2324 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 13 23:40:21.017323 kubelet[2324]: I0513 23:40:21.017199 2324 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4495f0f57a9cfb49c4b97c7cc8abc723-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4495f0f57a9cfb49c4b97c7cc8abc723\") " pod="kube-system/kube-apiserver-localhost" May 13 23:40:21.017323 kubelet[2324]: I0513 23:40:21.017215 2324 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4495f0f57a9cfb49c4b97c7cc8abc723-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4495f0f57a9cfb49c4b97c7cc8abc723\") " pod="kube-system/kube-apiserver-localhost" May 13 23:40:21.017323 kubelet[2324]: I0513 23:40:21.017231 2324 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:40:21.017323 kubelet[2324]: I0513 23:40:21.017247 2324 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:40:21.017462 kubelet[2324]: I0513 23:40:21.017274 2324 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:40:21.219794 kubelet[2324]: E0513 23:40:21.219686 2324 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.42:6443: connect: connection refused" interval="800ms" May 13 23:40:21.260719 containerd[1456]: time="2025-05-13T23:40:21.260671749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4495f0f57a9cfb49c4b97c7cc8abc723,Namespace:kube-system,Attempt:0,}" May 13 23:40:21.272131 containerd[1456]: time="2025-05-13T23:40:21.271830629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 13 23:40:21.276533 containerd[1456]: time="2025-05-13T23:40:21.276427829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 13 23:40:21.321423 kubelet[2324]: I0513 23:40:21.321361 2324 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 23:40:21.321762 kubelet[2324]: E0513 23:40:21.321737 2324 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.42:6443/api/v1/nodes\": dial tcp 10.0.0.42:6443: connect: connection refused" node="localhost" May 13 23:40:21.479913 kubelet[2324]: W0513 23:40:21.479737 2324 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.0.0.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused May 13 23:40:21.479913 kubelet[2324]: E0513 23:40:21.479810 2324 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused May 13 23:40:21.550531 kubelet[2324]: W0513 23:40:21.550492 2324 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused May 13 23:40:21.550531 kubelet[2324]: E0513 23:40:21.550534 2324 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused May 13 23:40:21.745762 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1214859162.mount: Deactivated successfully. 
May 13 23:40:21.751935 containerd[1456]: time="2025-05-13T23:40:21.751806349Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:40:21.753726 containerd[1456]: time="2025-05-13T23:40:21.753668749Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:40:21.757482 containerd[1456]: time="2025-05-13T23:40:21.757425389Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" May 13 23:40:21.757716 kubelet[2324]: W0513 23:40:21.757648 2324 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused May 13 23:40:21.757716 kubelet[2324]: E0513 23:40:21.757712 2324 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused May 13 23:40:21.758393 containerd[1456]: time="2025-05-13T23:40:21.758236749Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 13 23:40:21.760230 containerd[1456]: time="2025-05-13T23:40:21.760199269Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:40:21.761658 containerd[1456]: time="2025-05-13T23:40:21.761588629Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, 
bytes read=0" May 13 23:40:21.764433 containerd[1456]: time="2025-05-13T23:40:21.764007029Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 500.99848ms" May 13 23:40:21.765149 containerd[1456]: time="2025-05-13T23:40:21.765103429Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:40:21.767651 containerd[1456]: time="2025-05-13T23:40:21.767606309Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 489.2914ms" May 13 23:40:21.769166 containerd[1456]: time="2025-05-13T23:40:21.769126509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:40:21.773562 containerd[1456]: time="2025-05-13T23:40:21.773513989Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 499.91228ms" May 13 23:40:21.797963 containerd[1456]: time="2025-05-13T23:40:21.797922309Z" level=info msg="connecting to shim 
f8638b9158d89df9d005263a2d1a777b7e74454b3661a7e2d22ab29a0c319d2f" address="unix:///run/containerd/s/dda0b0e3ad6d001db9b458f0203f4ac4566ab8a1ff5ca1fc56df67ac678ab1f1" namespace=k8s.io protocol=ttrpc version=3 May 13 23:40:21.799328 containerd[1456]: time="2025-05-13T23:40:21.798934989Z" level=info msg="connecting to shim 7c2180e8327579f51ef3bf21db42e4f9568d7de30076764f110412e43e933aa1" address="unix:///run/containerd/s/96a899307d118c74ea0009319a86e0d68763015f3e466db4cd7cf5349d1ccf4f" namespace=k8s.io protocol=ttrpc version=3 May 13 23:40:21.802123 containerd[1456]: time="2025-05-13T23:40:21.802086629Z" level=info msg="connecting to shim 6adf3a4a3725d901421ef6a04e7a20b3d534814f32bde928466cacaacb5bcfc0" address="unix:///run/containerd/s/742272feb4e5154af68165a04867ff5e816b5083e877e98c81e7cdf62636f116" namespace=k8s.io protocol=ttrpc version=3 May 13 23:40:21.826631 systemd[1]: Started cri-containerd-f8638b9158d89df9d005263a2d1a777b7e74454b3661a7e2d22ab29a0c319d2f.scope - libcontainer container f8638b9158d89df9d005263a2d1a777b7e74454b3661a7e2d22ab29a0c319d2f. May 13 23:40:21.830724 systemd[1]: Started cri-containerd-6adf3a4a3725d901421ef6a04e7a20b3d534814f32bde928466cacaacb5bcfc0.scope - libcontainer container 6adf3a4a3725d901421ef6a04e7a20b3d534814f32bde928466cacaacb5bcfc0. May 13 23:40:21.831856 systemd[1]: Started cri-containerd-7c2180e8327579f51ef3bf21db42e4f9568d7de30076764f110412e43e933aa1.scope - libcontainer container 7c2180e8327579f51ef3bf21db42e4f9568d7de30076764f110412e43e933aa1. 
May 13 23:40:21.867212 containerd[1456]: time="2025-05-13T23:40:21.867127869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"f8638b9158d89df9d005263a2d1a777b7e74454b3661a7e2d22ab29a0c319d2f\"" May 13 23:40:21.870398 containerd[1456]: time="2025-05-13T23:40:21.870093549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4495f0f57a9cfb49c4b97c7cc8abc723,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c2180e8327579f51ef3bf21db42e4f9568d7de30076764f110412e43e933aa1\"" May 13 23:40:21.874073 containerd[1456]: time="2025-05-13T23:40:21.873577829Z" level=info msg="CreateContainer within sandbox \"7c2180e8327579f51ef3bf21db42e4f9568d7de30076764f110412e43e933aa1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 23:40:21.874073 containerd[1456]: time="2025-05-13T23:40:21.873618789Z" level=info msg="CreateContainer within sandbox \"f8638b9158d89df9d005263a2d1a777b7e74454b3661a7e2d22ab29a0c319d2f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 23:40:21.877573 containerd[1456]: time="2025-05-13T23:40:21.877539549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"6adf3a4a3725d901421ef6a04e7a20b3d534814f32bde928466cacaacb5bcfc0\"" May 13 23:40:21.881076 containerd[1456]: time="2025-05-13T23:40:21.881004069Z" level=info msg="CreateContainer within sandbox \"6adf3a4a3725d901421ef6a04e7a20b3d534814f32bde928466cacaacb5bcfc0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 23:40:21.881884 containerd[1456]: time="2025-05-13T23:40:21.881828189Z" level=info msg="Container 469221281b1bd1ce7f5497be61555e2b98beae4efa5d37d7910df21a7719ff84: CDI devices from CRI Config.CDIDevices: []" May 13 
23:40:21.882921 containerd[1456]: time="2025-05-13T23:40:21.882857349Z" level=info msg="Container d938d2b4bb2d86ed23d6c4244aed70184602a150edf9a7b9be60f9f97f01ca49: CDI devices from CRI Config.CDIDevices: []" May 13 23:40:21.891256 containerd[1456]: time="2025-05-13T23:40:21.891209909Z" level=info msg="CreateContainer within sandbox \"f8638b9158d89df9d005263a2d1a777b7e74454b3661a7e2d22ab29a0c319d2f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d938d2b4bb2d86ed23d6c4244aed70184602a150edf9a7b9be60f9f97f01ca49\"" May 13 23:40:21.892045 containerd[1456]: time="2025-05-13T23:40:21.891951389Z" level=info msg="StartContainer for \"d938d2b4bb2d86ed23d6c4244aed70184602a150edf9a7b9be60f9f97f01ca49\"" May 13 23:40:21.892952 containerd[1456]: time="2025-05-13T23:40:21.892902949Z" level=info msg="Container 6bd5d2cf685161a5c415603160d6ab03f3d05b61aa49fd66592231e22de27174: CDI devices from CRI Config.CDIDevices: []" May 13 23:40:21.893181 containerd[1456]: time="2025-05-13T23:40:21.893129429Z" level=info msg="connecting to shim d938d2b4bb2d86ed23d6c4244aed70184602a150edf9a7b9be60f9f97f01ca49" address="unix:///run/containerd/s/dda0b0e3ad6d001db9b458f0203f4ac4566ab8a1ff5ca1fc56df67ac678ab1f1" protocol=ttrpc version=3 May 13 23:40:21.896211 containerd[1456]: time="2025-05-13T23:40:21.896068389Z" level=info msg="CreateContainer within sandbox \"7c2180e8327579f51ef3bf21db42e4f9568d7de30076764f110412e43e933aa1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"469221281b1bd1ce7f5497be61555e2b98beae4efa5d37d7910df21a7719ff84\"" May 13 23:40:21.896774 containerd[1456]: time="2025-05-13T23:40:21.896743909Z" level=info msg="StartContainer for \"469221281b1bd1ce7f5497be61555e2b98beae4efa5d37d7910df21a7719ff84\"" May 13 23:40:21.898015 containerd[1456]: time="2025-05-13T23:40:21.897983829Z" level=info msg="connecting to shim 469221281b1bd1ce7f5497be61555e2b98beae4efa5d37d7910df21a7719ff84" 
address="unix:///run/containerd/s/96a899307d118c74ea0009319a86e0d68763015f3e466db4cd7cf5349d1ccf4f" protocol=ttrpc version=3 May 13 23:40:21.899192 containerd[1456]: time="2025-05-13T23:40:21.899159309Z" level=info msg="CreateContainer within sandbox \"6adf3a4a3725d901421ef6a04e7a20b3d534814f32bde928466cacaacb5bcfc0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6bd5d2cf685161a5c415603160d6ab03f3d05b61aa49fd66592231e22de27174\"" May 13 23:40:21.899946 containerd[1456]: time="2025-05-13T23:40:21.899916749Z" level=info msg="StartContainer for \"6bd5d2cf685161a5c415603160d6ab03f3d05b61aa49fd66592231e22de27174\"" May 13 23:40:21.900995 containerd[1456]: time="2025-05-13T23:40:21.900969589Z" level=info msg="connecting to shim 6bd5d2cf685161a5c415603160d6ab03f3d05b61aa49fd66592231e22de27174" address="unix:///run/containerd/s/742272feb4e5154af68165a04867ff5e816b5083e877e98c81e7cdf62636f116" protocol=ttrpc version=3 May 13 23:40:21.912591 systemd[1]: Started cri-containerd-d938d2b4bb2d86ed23d6c4244aed70184602a150edf9a7b9be60f9f97f01ca49.scope - libcontainer container d938d2b4bb2d86ed23d6c4244aed70184602a150edf9a7b9be60f9f97f01ca49. May 13 23:40:21.916926 systemd[1]: Started cri-containerd-469221281b1bd1ce7f5497be61555e2b98beae4efa5d37d7910df21a7719ff84.scope - libcontainer container 469221281b1bd1ce7f5497be61555e2b98beae4efa5d37d7910df21a7719ff84. May 13 23:40:21.918287 systemd[1]: Started cri-containerd-6bd5d2cf685161a5c415603160d6ab03f3d05b61aa49fd66592231e22de27174.scope - libcontainer container 6bd5d2cf685161a5c415603160d6ab03f3d05b61aa49fd66592231e22de27174. 
May 13 23:40:22.017892 containerd[1456]: time="2025-05-13T23:40:22.012642509Z" level=info msg="StartContainer for \"d938d2b4bb2d86ed23d6c4244aed70184602a150edf9a7b9be60f9f97f01ca49\" returns successfully" May 13 23:40:22.024845 kubelet[2324]: E0513 23:40:22.020204 2324 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.42:6443: connect: connection refused" interval="1.6s" May 13 23:40:22.025153 containerd[1456]: time="2025-05-13T23:40:22.022802469Z" level=info msg="StartContainer for \"469221281b1bd1ce7f5497be61555e2b98beae4efa5d37d7910df21a7719ff84\" returns successfully" May 13 23:40:22.026695 containerd[1456]: time="2025-05-13T23:40:22.026557869Z" level=info msg="StartContainer for \"6bd5d2cf685161a5c415603160d6ab03f3d05b61aa49fd66592231e22de27174\" returns successfully" May 13 23:40:22.123326 kubelet[2324]: I0513 23:40:22.123290 2324 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 23:40:22.126458 kubelet[2324]: E0513 23:40:22.123733 2324 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.42:6443/api/v1/nodes\": dial tcp 10.0.0.42:6443: connect: connection refused" node="localhost" May 13 23:40:23.544722 kubelet[2324]: E0513 23:40:23.544669 2324 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found May 13 23:40:23.624404 kubelet[2324]: E0513 23:40:23.624358 2324 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 13 23:40:23.729169 kubelet[2324]: I0513 23:40:23.728364 2324 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 23:40:23.742511 kubelet[2324]: I0513 23:40:23.742320 2324 kubelet_node_status.go:76] 
"Successfully registered node" node="localhost" May 13 23:40:24.608066 kubelet[2324]: I0513 23:40:24.608027 2324 apiserver.go:52] "Watching apiserver" May 13 23:40:24.615010 kubelet[2324]: I0513 23:40:24.614975 2324 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 23:40:25.267663 systemd[1]: Reload requested from client PID 2606 ('systemctl') (unit session-7.scope)... May 13 23:40:25.267681 systemd[1]: Reloading... May 13 23:40:25.331557 zram_generator::config[2653]: No configuration found. May 13 23:40:25.438018 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:40:25.523310 systemd[1]: Reloading finished in 255 ms. May 13 23:40:25.547808 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:40:25.560786 systemd[1]: kubelet.service: Deactivated successfully. May 13 23:40:25.560990 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:40:25.561036 systemd[1]: kubelet.service: Consumed 1.250s CPU time, 116.5M memory peak. May 13 23:40:25.563475 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:40:25.693007 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:40:25.698849 (kubelet)[2692]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 23:40:25.743657 kubelet[2692]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:40:25.743657 kubelet[2692]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. May 13 23:40:25.743657 kubelet[2692]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:40:25.744031 kubelet[2692]: I0513 23:40:25.743712 2692 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 23:40:25.749169 kubelet[2692]: I0513 23:40:25.749118 2692 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 13 23:40:25.749169 kubelet[2692]: I0513 23:40:25.749146 2692 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 23:40:25.749329 kubelet[2692]: I0513 23:40:25.749316 2692 server.go:927] "Client rotation is on, will bootstrap in background" May 13 23:40:25.750655 kubelet[2692]: I0513 23:40:25.750625 2692 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 13 23:40:25.751911 kubelet[2692]: I0513 23:40:25.751880 2692 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 23:40:25.760023 kubelet[2692]: I0513 23:40:25.759995 2692 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 23:40:25.760683 kubelet[2692]: I0513 23:40:25.760319 2692 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 23:40:25.760683 kubelet[2692]: I0513 23:40:25.760348 2692 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 13 23:40:25.760683 kubelet[2692]: I0513 23:40:25.760532 2692 topology_manager.go:138] "Creating topology manager with none policy" May 13 
23:40:25.760683 kubelet[2692]: I0513 23:40:25.760541 2692 container_manager_linux.go:301] "Creating device plugin manager" May 13 23:40:25.760683 kubelet[2692]: I0513 23:40:25.760576 2692 state_mem.go:36] "Initialized new in-memory state store" May 13 23:40:25.760988 kubelet[2692]: I0513 23:40:25.760673 2692 kubelet.go:400] "Attempting to sync node with API server" May 13 23:40:25.760988 kubelet[2692]: I0513 23:40:25.760685 2692 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 23:40:25.760988 kubelet[2692]: I0513 23:40:25.760707 2692 kubelet.go:312] "Adding apiserver pod source" May 13 23:40:25.760988 kubelet[2692]: I0513 23:40:25.760719 2692 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 23:40:25.762037 kubelet[2692]: I0513 23:40:25.761918 2692 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 13 23:40:25.762109 kubelet[2692]: I0513 23:40:25.762080 2692 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 23:40:25.762488 kubelet[2692]: I0513 23:40:25.762461 2692 server.go:1264] "Started kubelet" May 13 23:40:25.762887 kubelet[2692]: I0513 23:40:25.762607 2692 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 23:40:25.762887 kubelet[2692]: I0513 23:40:25.762753 2692 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 23:40:25.766484 kubelet[2692]: I0513 23:40:25.766457 2692 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 23:40:25.767839 kubelet[2692]: I0513 23:40:25.767631 2692 server.go:455] "Adding debug handlers to kubelet server" May 13 23:40:25.772332 kubelet[2692]: I0513 23:40:25.772305 2692 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 23:40:25.776577 kubelet[2692]: I0513 23:40:25.776102 2692 
volume_manager.go:291] "Starting Kubelet Volume Manager" May 13 23:40:25.776577 kubelet[2692]: I0513 23:40:25.776320 2692 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 23:40:25.776577 kubelet[2692]: I0513 23:40:25.776490 2692 reconciler.go:26] "Reconciler: start to sync state" May 13 23:40:25.780444 kubelet[2692]: E0513 23:40:25.780399 2692 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 23:40:25.780444 kubelet[2692]: I0513 23:40:25.780610 2692 factory.go:221] Registration of the systemd container factory successfully May 13 23:40:25.780444 kubelet[2692]: I0513 23:40:25.780690 2692 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 23:40:25.783289 kubelet[2692]: I0513 23:40:25.783260 2692 factory.go:221] Registration of the containerd container factory successfully May 13 23:40:25.786272 kubelet[2692]: I0513 23:40:25.786216 2692 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 23:40:25.787355 kubelet[2692]: I0513 23:40:25.787308 2692 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 13 23:40:25.787355 kubelet[2692]: I0513 23:40:25.787355 2692 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 23:40:25.787485 kubelet[2692]: I0513 23:40:25.787373 2692 kubelet.go:2337] "Starting kubelet main sync loop" May 13 23:40:25.787510 kubelet[2692]: E0513 23:40:25.787491 2692 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 23:40:25.814061 kubelet[2692]: I0513 23:40:25.814035 2692 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 23:40:25.814061 kubelet[2692]: I0513 23:40:25.814053 2692 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 23:40:25.814202 kubelet[2692]: I0513 23:40:25.814073 2692 state_mem.go:36] "Initialized new in-memory state store" May 13 23:40:25.814255 kubelet[2692]: I0513 23:40:25.814239 2692 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 13 23:40:25.814291 kubelet[2692]: I0513 23:40:25.814254 2692 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 13 23:40:25.814291 kubelet[2692]: I0513 23:40:25.814272 2692 policy_none.go:49] "None policy: Start" May 13 23:40:25.814928 kubelet[2692]: I0513 23:40:25.814906 2692 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 23:40:25.814994 kubelet[2692]: I0513 23:40:25.814965 2692 state_mem.go:35] "Initializing new in-memory state store" May 13 23:40:25.815269 kubelet[2692]: I0513 23:40:25.815228 2692 state_mem.go:75] "Updated machine memory state" May 13 23:40:25.819872 kubelet[2692]: I0513 23:40:25.819796 2692 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 23:40:25.820099 kubelet[2692]: I0513 23:40:25.820006 2692 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 23:40:25.820153 kubelet[2692]: I0513 23:40:25.820117 2692 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 23:40:25.877936 kubelet[2692]: I0513 23:40:25.877903 2692 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 23:40:25.888668 kubelet[2692]: I0513 23:40:25.888609 2692 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 13 23:40:25.888803 kubelet[2692]: I0513 23:40:25.888732 2692 topology_manager.go:215] "Topology Admit Handler" podUID="4495f0f57a9cfb49c4b97c7cc8abc723" podNamespace="kube-system" podName="kube-apiserver-localhost" May 13 23:40:25.888803 kubelet[2692]: I0513 23:40:25.888779 2692 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 13 23:40:25.897978 kubelet[2692]: I0513 23:40:25.897935 2692 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 13 23:40:25.898105 kubelet[2692]: I0513 23:40:25.898027 2692 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 13 23:40:25.898711 kubelet[2692]: E0513 23:40:25.898677 2692 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 13 23:40:25.898796 kubelet[2692]: E0513 23:40:25.898769 2692 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 13 23:40:25.977245 kubelet[2692]: I0513 23:40:25.977174 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 13 23:40:25.977245 
kubelet[2692]: I0513 23:40:25.977237 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4495f0f57a9cfb49c4b97c7cc8abc723-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4495f0f57a9cfb49c4b97c7cc8abc723\") " pod="kube-system/kube-apiserver-localhost" May 13 23:40:25.977395 kubelet[2692]: I0513 23:40:25.977260 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:40:25.977395 kubelet[2692]: I0513 23:40:25.977276 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:40:25.977395 kubelet[2692]: I0513 23:40:25.977294 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:40:25.977395 kubelet[2692]: I0513 23:40:25.977307 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4495f0f57a9cfb49c4b97c7cc8abc723-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4495f0f57a9cfb49c4b97c7cc8abc723\") " pod="kube-system/kube-apiserver-localhost" May 
13 23:40:25.977395 kubelet[2692]: I0513 23:40:25.977322 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4495f0f57a9cfb49c4b97c7cc8abc723-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4495f0f57a9cfb49c4b97c7cc8abc723\") " pod="kube-system/kube-apiserver-localhost" May 13 23:40:25.977528 kubelet[2692]: I0513 23:40:25.977336 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:40:25.977528 kubelet[2692]: I0513 23:40:25.977350 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:40:26.273693 sudo[2725]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 13 23:40:26.273995 sudo[2725]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 13 23:40:26.699129 sudo[2725]: pam_unix(sudo:session): session closed for user root May 13 23:40:26.761651 kubelet[2692]: I0513 23:40:26.761560 2692 apiserver.go:52] "Watching apiserver" May 13 23:40:26.777080 kubelet[2692]: I0513 23:40:26.777042 2692 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 23:40:26.835446 kubelet[2692]: I0513 23:40:26.835353 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.835338829 
podStartE2EDuration="2.835338829s" podCreationTimestamp="2025-05-13 23:40:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:40:26.834501669 +0000 UTC m=+1.132561921" watchObservedRunningTime="2025-05-13 23:40:26.835338829 +0000 UTC m=+1.133399081" May 13 23:40:26.854522 kubelet[2692]: I0513 23:40:26.854121 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.854104949 podStartE2EDuration="2.854104949s" podCreationTimestamp="2025-05-13 23:40:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:40:26.854083909 +0000 UTC m=+1.152144161" watchObservedRunningTime="2025-05-13 23:40:26.854104949 +0000 UTC m=+1.152165201" May 13 23:40:26.854522 kubelet[2692]: I0513 23:40:26.854257 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.8542517090000001 podStartE2EDuration="1.854251709s" podCreationTimestamp="2025-05-13 23:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:40:26.845064549 +0000 UTC m=+1.143124801" watchObservedRunningTime="2025-05-13 23:40:26.854251709 +0000 UTC m=+1.152311921" May 13 23:40:29.522828 sudo[1652]: pam_unix(sudo:session): session closed for user root May 13 23:40:29.524371 sshd[1651]: Connection closed by 10.0.0.1 port 51712 May 13 23:40:29.525073 sshd-session[1648]: pam_unix(sshd:session): session closed for user core May 13 23:40:29.529757 systemd-logind[1433]: Session 7 logged out. Waiting for processes to exit. May 13 23:40:29.531524 systemd[1]: sshd@6-10.0.0.42:22-10.0.0.1:51712.service: Deactivated successfully. 
May 13 23:40:29.533756 systemd[1]: session-7.scope: Deactivated successfully. May 13 23:40:29.533996 systemd[1]: session-7.scope: Consumed 7.923s CPU time, 278.4M memory peak. May 13 23:40:29.535256 systemd-logind[1433]: Removed session 7. May 13 23:40:40.535854 update_engine[1436]: I20250513 23:40:40.535776 1436 update_attempter.cc:509] Updating boot flags... May 13 23:40:40.561444 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2774) May 13 23:40:40.591846 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2775) May 13 23:40:41.334347 kubelet[2692]: I0513 23:40:41.334285 2692 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 23:40:41.339166 containerd[1456]: time="2025-05-13T23:40:41.339117605Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 13 23:40:41.339395 kubelet[2692]: I0513 23:40:41.339348 2692 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 23:40:42.279663 kubelet[2692]: I0513 23:40:42.279302 2692 topology_manager.go:215] "Topology Admit Handler" podUID="99e4d087-f831-4f93-9d36-6cac12f8db9e" podNamespace="kube-system" podName="kube-proxy-57l8v" May 13 23:40:42.279798 kubelet[2692]: I0513 23:40:42.279750 2692 topology_manager.go:215] "Topology Admit Handler" podUID="084b6c39-b361-4fe0-96e2-0ecbf480d1de" podNamespace="kube-system" podName="cilium-65j5x" May 13 23:40:42.292918 kubelet[2692]: I0513 23:40:42.292889 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/084b6c39-b361-4fe0-96e2-0ecbf480d1de-hubble-tls\") pod \"cilium-65j5x\" (UID: \"084b6c39-b361-4fe0-96e2-0ecbf480d1de\") " pod="kube-system/cilium-65j5x" May 13 23:40:42.292918 kubelet[2692]: I0513 23:40:42.292920 2692 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/99e4d087-f831-4f93-9d36-6cac12f8db9e-lib-modules\") pod \"kube-proxy-57l8v\" (UID: \"99e4d087-f831-4f93-9d36-6cac12f8db9e\") " pod="kube-system/kube-proxy-57l8v" May 13 23:40:42.293051 kubelet[2692]: I0513 23:40:42.292949 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/084b6c39-b361-4fe0-96e2-0ecbf480d1de-etc-cni-netd\") pod \"cilium-65j5x\" (UID: \"084b6c39-b361-4fe0-96e2-0ecbf480d1de\") " pod="kube-system/cilium-65j5x" May 13 23:40:42.293051 kubelet[2692]: I0513 23:40:42.292967 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/084b6c39-b361-4fe0-96e2-0ecbf480d1de-cni-path\") pod \"cilium-65j5x\" (UID: \"084b6c39-b361-4fe0-96e2-0ecbf480d1de\") " pod="kube-system/cilium-65j5x" May 13 23:40:42.293051 kubelet[2692]: I0513 23:40:42.293016 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgqzp\" (UniqueName: \"kubernetes.io/projected/084b6c39-b361-4fe0-96e2-0ecbf480d1de-kube-api-access-bgqzp\") pod \"cilium-65j5x\" (UID: \"084b6c39-b361-4fe0-96e2-0ecbf480d1de\") " pod="kube-system/cilium-65j5x" May 13 23:40:42.293168 kubelet[2692]: I0513 23:40:42.293062 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/99e4d087-f831-4f93-9d36-6cac12f8db9e-kube-proxy\") pod \"kube-proxy-57l8v\" (UID: \"99e4d087-f831-4f93-9d36-6cac12f8db9e\") " pod="kube-system/kube-proxy-57l8v" May 13 23:40:42.293168 kubelet[2692]: I0513 23:40:42.293081 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/084b6c39-b361-4fe0-96e2-0ecbf480d1de-xtables-lock\") pod \"cilium-65j5x\" (UID: \"084b6c39-b361-4fe0-96e2-0ecbf480d1de\") " pod="kube-system/cilium-65j5x" May 13 23:40:42.293168 kubelet[2692]: I0513 23:40:42.293096 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/084b6c39-b361-4fe0-96e2-0ecbf480d1de-host-proc-sys-net\") pod \"cilium-65j5x\" (UID: \"084b6c39-b361-4fe0-96e2-0ecbf480d1de\") " pod="kube-system/cilium-65j5x" May 13 23:40:42.293168 kubelet[2692]: I0513 23:40:42.293112 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/084b6c39-b361-4fe0-96e2-0ecbf480d1de-host-proc-sys-kernel\") pod \"cilium-65j5x\" (UID: \"084b6c39-b361-4fe0-96e2-0ecbf480d1de\") " pod="kube-system/cilium-65j5x" May 13 23:40:42.293168 kubelet[2692]: I0513 23:40:42.293127 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/99e4d087-f831-4f93-9d36-6cac12f8db9e-xtables-lock\") pod \"kube-proxy-57l8v\" (UID: \"99e4d087-f831-4f93-9d36-6cac12f8db9e\") " pod="kube-system/kube-proxy-57l8v" May 13 23:40:42.293290 kubelet[2692]: I0513 23:40:42.293140 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t2lw\" (UniqueName: \"kubernetes.io/projected/99e4d087-f831-4f93-9d36-6cac12f8db9e-kube-api-access-4t2lw\") pod \"kube-proxy-57l8v\" (UID: \"99e4d087-f831-4f93-9d36-6cac12f8db9e\") " pod="kube-system/kube-proxy-57l8v" May 13 23:40:42.293290 kubelet[2692]: I0513 23:40:42.293156 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/084b6c39-b361-4fe0-96e2-0ecbf480d1de-bpf-maps\") pod 
\"cilium-65j5x\" (UID: \"084b6c39-b361-4fe0-96e2-0ecbf480d1de\") " pod="kube-system/cilium-65j5x" May 13 23:40:42.293290 kubelet[2692]: I0513 23:40:42.293170 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/084b6c39-b361-4fe0-96e2-0ecbf480d1de-hostproc\") pod \"cilium-65j5x\" (UID: \"084b6c39-b361-4fe0-96e2-0ecbf480d1de\") " pod="kube-system/cilium-65j5x" May 13 23:40:42.293290 kubelet[2692]: I0513 23:40:42.293186 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/084b6c39-b361-4fe0-96e2-0ecbf480d1de-cilium-run\") pod \"cilium-65j5x\" (UID: \"084b6c39-b361-4fe0-96e2-0ecbf480d1de\") " pod="kube-system/cilium-65j5x" May 13 23:40:42.293290 kubelet[2692]: I0513 23:40:42.293200 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/084b6c39-b361-4fe0-96e2-0ecbf480d1de-cilium-cgroup\") pod \"cilium-65j5x\" (UID: \"084b6c39-b361-4fe0-96e2-0ecbf480d1de\") " pod="kube-system/cilium-65j5x" May 13 23:40:42.293290 kubelet[2692]: I0513 23:40:42.293213 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/084b6c39-b361-4fe0-96e2-0ecbf480d1de-lib-modules\") pod \"cilium-65j5x\" (UID: \"084b6c39-b361-4fe0-96e2-0ecbf480d1de\") " pod="kube-system/cilium-65j5x" May 13 23:40:42.293644 kubelet[2692]: I0513 23:40:42.293228 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/084b6c39-b361-4fe0-96e2-0ecbf480d1de-clustermesh-secrets\") pod \"cilium-65j5x\" (UID: \"084b6c39-b361-4fe0-96e2-0ecbf480d1de\") " pod="kube-system/cilium-65j5x" May 13 23:40:42.293644 kubelet[2692]: I0513 
23:40:42.293244 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/084b6c39-b361-4fe0-96e2-0ecbf480d1de-cilium-config-path\") pod \"cilium-65j5x\" (UID: \"084b6c39-b361-4fe0-96e2-0ecbf480d1de\") " pod="kube-system/cilium-65j5x" May 13 23:40:42.294766 systemd[1]: Created slice kubepods-besteffort-pod99e4d087_f831_4f93_9d36_6cac12f8db9e.slice - libcontainer container kubepods-besteffort-pod99e4d087_f831_4f93_9d36_6cac12f8db9e.slice. May 13 23:40:42.314539 systemd[1]: Created slice kubepods-burstable-pod084b6c39_b361_4fe0_96e2_0ecbf480d1de.slice - libcontainer container kubepods-burstable-pod084b6c39_b361_4fe0_96e2_0ecbf480d1de.slice. May 13 23:40:42.426346 kubelet[2692]: I0513 23:40:42.426306 2692 topology_manager.go:215] "Topology Admit Handler" podUID="e69cf628-0e94-4cf2-a53c-bd993019423b" podNamespace="kube-system" podName="cilium-operator-599987898-l5fx9" May 13 23:40:42.433815 systemd[1]: Created slice kubepods-besteffort-pode69cf628_0e94_4cf2_a53c_bd993019423b.slice - libcontainer container kubepods-besteffort-pode69cf628_0e94_4cf2_a53c_bd993019423b.slice. 
May 13 23:40:42.495201 kubelet[2692]: I0513 23:40:42.495099 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e69cf628-0e94-4cf2-a53c-bd993019423b-cilium-config-path\") pod \"cilium-operator-599987898-l5fx9\" (UID: \"e69cf628-0e94-4cf2-a53c-bd993019423b\") " pod="kube-system/cilium-operator-599987898-l5fx9" May 13 23:40:42.495201 kubelet[2692]: I0513 23:40:42.495146 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjbf4\" (UniqueName: \"kubernetes.io/projected/e69cf628-0e94-4cf2-a53c-bd993019423b-kube-api-access-rjbf4\") pod \"cilium-operator-599987898-l5fx9\" (UID: \"e69cf628-0e94-4cf2-a53c-bd993019423b\") " pod="kube-system/cilium-operator-599987898-l5fx9" May 13 23:40:42.609003 containerd[1456]: time="2025-05-13T23:40:42.608937855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-57l8v,Uid:99e4d087-f831-4f93-9d36-6cac12f8db9e,Namespace:kube-system,Attempt:0,}" May 13 23:40:42.618674 containerd[1456]: time="2025-05-13T23:40:42.618633887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-65j5x,Uid:084b6c39-b361-4fe0-96e2-0ecbf480d1de,Namespace:kube-system,Attempt:0,}" May 13 23:40:42.636959 containerd[1456]: time="2025-05-13T23:40:42.636915729Z" level=info msg="connecting to shim f2400033eedbdc7200dd705b1504e156066800b451cdc9cad0cde2b797ae15c1" address="unix:///run/containerd/s/41b296f3548d56da676105949ba45953e1d850199137e46fee0f3e41ae58a3ce" namespace=k8s.io protocol=ttrpc version=3 May 13 23:40:42.639071 containerd[1456]: time="2025-05-13T23:40:42.639039811Z" level=info msg="connecting to shim 801ace722aefaaf18777a6fe9b56c2b174db1aa913ebf29f8b989bc0ffd105b7" address="unix:///run/containerd/s/ccfb55a5659f9c4c9c4d93ae8ebd9fb42b0f7ef374605fea480c8d30dab015a7" namespace=k8s.io protocol=ttrpc version=3 May 13 23:40:42.663606 systemd[1]: Started 
cri-containerd-f2400033eedbdc7200dd705b1504e156066800b451cdc9cad0cde2b797ae15c1.scope - libcontainer container f2400033eedbdc7200dd705b1504e156066800b451cdc9cad0cde2b797ae15c1. May 13 23:40:42.666741 systemd[1]: Started cri-containerd-801ace722aefaaf18777a6fe9b56c2b174db1aa913ebf29f8b989bc0ffd105b7.scope - libcontainer container 801ace722aefaaf18777a6fe9b56c2b174db1aa913ebf29f8b989bc0ffd105b7. May 13 23:40:42.693710 containerd[1456]: time="2025-05-13T23:40:42.693650854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-57l8v,Uid:99e4d087-f831-4f93-9d36-6cac12f8db9e,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2400033eedbdc7200dd705b1504e156066800b451cdc9cad0cde2b797ae15c1\"" May 13 23:40:42.695291 containerd[1456]: time="2025-05-13T23:40:42.695253006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-65j5x,Uid:084b6c39-b361-4fe0-96e2-0ecbf480d1de,Namespace:kube-system,Attempt:0,} returns sandbox id \"801ace722aefaaf18777a6fe9b56c2b174db1aa913ebf29f8b989bc0ffd105b7\"" May 13 23:40:42.702193 containerd[1456]: time="2025-05-13T23:40:42.701885177Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 13 23:40:42.702950 containerd[1456]: time="2025-05-13T23:40:42.702917118Z" level=info msg="CreateContainer within sandbox \"f2400033eedbdc7200dd705b1504e156066800b451cdc9cad0cde2b797ae15c1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 23:40:42.712474 containerd[1456]: time="2025-05-13T23:40:42.712429706Z" level=info msg="Container a1305a309a5c45053bcf3273f1540c0ab81f423578f67b3f808dfb518e025661: CDI devices from CRI Config.CDIDevices: []" May 13 23:40:42.721224 containerd[1456]: time="2025-05-13T23:40:42.721184640Z" level=info msg="CreateContainer within sandbox \"f2400033eedbdc7200dd705b1504e156066800b451cdc9cad0cde2b797ae15c1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"a1305a309a5c45053bcf3273f1540c0ab81f423578f67b3f808dfb518e025661\"" May 13 23:40:42.724038 containerd[1456]: time="2025-05-13T23:40:42.723905974Z" level=info msg="StartContainer for \"a1305a309a5c45053bcf3273f1540c0ab81f423578f67b3f808dfb518e025661\"" May 13 23:40:42.725722 containerd[1456]: time="2025-05-13T23:40:42.725685409Z" level=info msg="connecting to shim a1305a309a5c45053bcf3273f1540c0ab81f423578f67b3f808dfb518e025661" address="unix:///run/containerd/s/41b296f3548d56da676105949ba45953e1d850199137e46fee0f3e41ae58a3ce" protocol=ttrpc version=3 May 13 23:40:42.739046 containerd[1456]: time="2025-05-13T23:40:42.738534584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-l5fx9,Uid:e69cf628-0e94-4cf2-a53c-bd993019423b,Namespace:kube-system,Attempt:0,}" May 13 23:40:42.746592 systemd[1]: Started cri-containerd-a1305a309a5c45053bcf3273f1540c0ab81f423578f67b3f808dfb518e025661.scope - libcontainer container a1305a309a5c45053bcf3273f1540c0ab81f423578f67b3f808dfb518e025661. May 13 23:40:42.756769 containerd[1456]: time="2025-05-13T23:40:42.756725945Z" level=info msg="connecting to shim 0fe0aa4e001614010e077c9f39d523530f99ad65a528af2c6a353d9fb9087bd8" address="unix:///run/containerd/s/6023e5b6bf937cef33302492a5585f51c10b9ba85675eabcfa46d6eccae8be5b" namespace=k8s.io protocol=ttrpc version=3 May 13 23:40:42.781611 systemd[1]: Started cri-containerd-0fe0aa4e001614010e077c9f39d523530f99ad65a528af2c6a353d9fb9087bd8.scope - libcontainer container 0fe0aa4e001614010e077c9f39d523530f99ad65a528af2c6a353d9fb9087bd8. 
May 13 23:40:42.811213 containerd[1456]: time="2025-05-13T23:40:42.811120663Z" level=info msg="StartContainer for \"a1305a309a5c45053bcf3273f1540c0ab81f423578f67b3f808dfb518e025661\" returns successfully" May 13 23:40:42.847006 containerd[1456]: time="2025-05-13T23:40:42.846870812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-l5fx9,Uid:e69cf628-0e94-4cf2-a53c-bd993019423b,Namespace:kube-system,Attempt:0,} returns sandbox id \"0fe0aa4e001614010e077c9f39d523530f99ad65a528af2c6a353d9fb9087bd8\"" May 13 23:40:47.737611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1909249025.mount: Deactivated successfully. May 13 23:40:53.110405 containerd[1456]: time="2025-05-13T23:40:53.110345202Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:40:53.111809 containerd[1456]: time="2025-05-13T23:40:53.111715335Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" May 13 23:40:53.112542 containerd[1456]: time="2025-05-13T23:40:53.112510983Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:40:53.114356 containerd[1456]: time="2025-05-13T23:40:53.114314320Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 10.412386822s" May 13 23:40:53.114403 containerd[1456]: time="2025-05-13T23:40:53.114359241Z" 
level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 13 23:40:53.117068 containerd[1456]: time="2025-05-13T23:40:53.117037867Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 13 23:40:53.119273 containerd[1456]: time="2025-05-13T23:40:53.119238168Z" level=info msg="CreateContainer within sandbox \"801ace722aefaaf18777a6fe9b56c2b174db1aa913ebf29f8b989bc0ffd105b7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 23:40:53.128464 containerd[1456]: time="2025-05-13T23:40:53.128395538Z" level=info msg="Container d531e436754679fa936d32ba22f122419ba073c195ba543525fdef71ff54f33c: CDI devices from CRI Config.CDIDevices: []" May 13 23:40:53.149304 containerd[1456]: time="2025-05-13T23:40:53.149175780Z" level=info msg="CreateContainer within sandbox \"801ace722aefaaf18777a6fe9b56c2b174db1aa913ebf29f8b989bc0ffd105b7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d531e436754679fa936d32ba22f122419ba073c195ba543525fdef71ff54f33c\"" May 13 23:40:53.149686 containerd[1456]: time="2025-05-13T23:40:53.149657705Z" level=info msg="StartContainer for \"d531e436754679fa936d32ba22f122419ba073c195ba543525fdef71ff54f33c\"" May 13 23:40:53.150556 containerd[1456]: time="2025-05-13T23:40:53.150527353Z" level=info msg="connecting to shim d531e436754679fa936d32ba22f122419ba073c195ba543525fdef71ff54f33c" address="unix:///run/containerd/s/ccfb55a5659f9c4c9c4d93ae8ebd9fb42b0f7ef374605fea480c8d30dab015a7" protocol=ttrpc version=3 May 13 23:40:53.192620 systemd[1]: Started cri-containerd-d531e436754679fa936d32ba22f122419ba073c195ba543525fdef71ff54f33c.scope - libcontainer container d531e436754679fa936d32ba22f122419ba073c195ba543525fdef71ff54f33c. 
May 13 23:40:53.226587 containerd[1456]: time="2025-05-13T23:40:53.225946689Z" level=info msg="StartContainer for \"d531e436754679fa936d32ba22f122419ba073c195ba543525fdef71ff54f33c\" returns successfully" May 13 23:40:53.274949 systemd[1]: cri-containerd-d531e436754679fa936d32ba22f122419ba073c195ba543525fdef71ff54f33c.scope: Deactivated successfully. May 13 23:40:53.306735 containerd[1456]: time="2025-05-13T23:40:53.306674115Z" level=info msg="received exit event container_id:\"d531e436754679fa936d32ba22f122419ba073c195ba543525fdef71ff54f33c\" id:\"d531e436754679fa936d32ba22f122419ba073c195ba543525fdef71ff54f33c\" pid:3115 exited_at:{seconds:1747179653 nanos:297230663}" May 13 23:40:53.306866 containerd[1456]: time="2025-05-13T23:40:53.306779077Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d531e436754679fa936d32ba22f122419ba073c195ba543525fdef71ff54f33c\" id:\"d531e436754679fa936d32ba22f122419ba073c195ba543525fdef71ff54f33c\" pid:3115 exited_at:{seconds:1747179653 nanos:297230663}" May 13 23:40:53.339175 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d531e436754679fa936d32ba22f122419ba073c195ba543525fdef71ff54f33c-rootfs.mount: Deactivated successfully. 
May 13 23:40:53.863243 containerd[1456]: time="2025-05-13T23:40:53.863180820Z" level=info msg="CreateContainer within sandbox \"801ace722aefaaf18777a6fe9b56c2b174db1aa913ebf29f8b989bc0ffd105b7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 13 23:40:53.870483 containerd[1456]: time="2025-05-13T23:40:53.869813805Z" level=info msg="Container 50619dd689a3ae297a332eab229e342b669a77bf3d99874d99e3cf8f9e644dc4: CDI devices from CRI Config.CDIDevices: []"
May 13 23:40:53.874657 containerd[1456]: time="2025-05-13T23:40:53.874620932Z" level=info msg="CreateContainer within sandbox \"801ace722aefaaf18777a6fe9b56c2b174db1aa913ebf29f8b989bc0ffd105b7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"50619dd689a3ae297a332eab229e342b669a77bf3d99874d99e3cf8f9e644dc4\""
May 13 23:40:53.875394 containerd[1456]: time="2025-05-13T23:40:53.875358979Z" level=info msg="StartContainer for \"50619dd689a3ae297a332eab229e342b669a77bf3d99874d99e3cf8f9e644dc4\""
May 13 23:40:53.880950 kubelet[2692]: I0513 23:40:53.880879 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-57l8v" podStartSLOduration=11.880861712 podStartE2EDuration="11.880861712s" podCreationTimestamp="2025-05-13 23:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:40:42.86694425 +0000 UTC m=+17.165004502" watchObservedRunningTime="2025-05-13 23:40:53.880861712 +0000 UTC m=+28.178921924"
May 13 23:40:53.885320 containerd[1456]: time="2025-05-13T23:40:53.885280476Z" level=info msg="connecting to shim 50619dd689a3ae297a332eab229e342b669a77bf3d99874d99e3cf8f9e644dc4" address="unix:///run/containerd/s/ccfb55a5659f9c4c9c4d93ae8ebd9fb42b0f7ef374605fea480c8d30dab015a7" protocol=ttrpc version=3
May 13 23:40:53.922597 systemd[1]: Started cri-containerd-50619dd689a3ae297a332eab229e342b669a77bf3d99874d99e3cf8f9e644dc4.scope - libcontainer container 50619dd689a3ae297a332eab229e342b669a77bf3d99874d99e3cf8f9e644dc4.
May 13 23:40:53.971736 containerd[1456]: time="2025-05-13T23:40:53.971645877Z" level=info msg="StartContainer for \"50619dd689a3ae297a332eab229e342b669a77bf3d99874d99e3cf8f9e644dc4\" returns successfully"
May 13 23:40:53.985285 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 13 23:40:53.985511 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 13 23:40:53.985665 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
May 13 23:40:53.986922 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 23:40:53.988341 systemd[1]: cri-containerd-50619dd689a3ae297a332eab229e342b669a77bf3d99874d99e3cf8f9e644dc4.scope: Deactivated successfully.
May 13 23:40:53.997448 containerd[1456]: time="2025-05-13T23:40:53.996903884Z" level=info msg="received exit event container_id:\"50619dd689a3ae297a332eab229e342b669a77bf3d99874d99e3cf8f9e644dc4\" id:\"50619dd689a3ae297a332eab229e342b669a77bf3d99874d99e3cf8f9e644dc4\" pid:3160 exited_at:{seconds:1747179653 nanos:996620881}"
May 13 23:40:53.997448 containerd[1456]: time="2025-05-13T23:40:53.997090445Z" level=info msg="TaskExit event in podsandbox handler container_id:\"50619dd689a3ae297a332eab229e342b669a77bf3d99874d99e3cf8f9e644dc4\" id:\"50619dd689a3ae297a332eab229e342b669a77bf3d99874d99e3cf8f9e644dc4\" pid:3160 exited_at:{seconds:1747179653 nanos:996620881}"
May 13 23:40:54.033920 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 23:40:54.189862 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1749628970.mount: Deactivated successfully.
May 13 23:40:54.391755 systemd[1]: Started sshd@7-10.0.0.42:22-10.0.0.1:60502.service - OpenSSH per-connection server daemon (10.0.0.1:60502).
May 13 23:40:54.461935 sshd[3209]: Accepted publickey for core from 10.0.0.1 port 60502 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 13 23:40:54.463924 sshd-session[3209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:40:54.469823 systemd-logind[1433]: New session 8 of user core.
May 13 23:40:54.482616 systemd[1]: Started session-8.scope - Session 8 of User core.
May 13 23:40:54.513130 containerd[1456]: time="2025-05-13T23:40:54.513069763Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:40:54.514699 containerd[1456]: time="2025-05-13T23:40:54.514645818Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
May 13 23:40:54.515658 containerd[1456]: time="2025-05-13T23:40:54.515630267Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:40:54.516841 containerd[1456]: time="2025-05-13T23:40:54.516793397Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.399568728s"
May 13 23:40:54.516841 containerd[1456]: time="2025-05-13T23:40:54.516829558Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
May 13 23:40:54.519290 containerd[1456]: time="2025-05-13T23:40:54.519252500Z" level=info msg="CreateContainer within sandbox \"0fe0aa4e001614010e077c9f39d523530f99ad65a528af2c6a353d9fb9087bd8\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 13 23:40:54.526482 containerd[1456]: time="2025-05-13T23:40:54.526445846Z" level=info msg="Container 85d577688454a4cb11b75e4f77ade12e4277224416d291875353718dae7344c3: CDI devices from CRI Config.CDIDevices: []"
May 13 23:40:54.540349 containerd[1456]: time="2025-05-13T23:40:54.540229492Z" level=info msg="CreateContainer within sandbox \"0fe0aa4e001614010e077c9f39d523530f99ad65a528af2c6a353d9fb9087bd8\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"85d577688454a4cb11b75e4f77ade12e4277224416d291875353718dae7344c3\""
May 13 23:40:54.541118 containerd[1456]: time="2025-05-13T23:40:54.540906458Z" level=info msg="StartContainer for \"85d577688454a4cb11b75e4f77ade12e4277224416d291875353718dae7344c3\""
May 13 23:40:54.542621 containerd[1456]: time="2025-05-13T23:40:54.542584313Z" level=info msg="connecting to shim 85d577688454a4cb11b75e4f77ade12e4277224416d291875353718dae7344c3" address="unix:///run/containerd/s/6023e5b6bf937cef33302492a5585f51c10b9ba85675eabcfa46d6eccae8be5b" protocol=ttrpc version=3
May 13 23:40:54.561877 systemd[1]: Started cri-containerd-85d577688454a4cb11b75e4f77ade12e4277224416d291875353718dae7344c3.scope - libcontainer container 85d577688454a4cb11b75e4f77ade12e4277224416d291875353718dae7344c3.
May 13 23:40:54.597537 containerd[1456]: time="2025-05-13T23:40:54.597501015Z" level=info msg="StartContainer for \"85d577688454a4cb11b75e4f77ade12e4277224416d291875353718dae7344c3\" returns successfully"
May 13 23:40:54.648680 sshd[3211]: Connection closed by 10.0.0.1 port 60502
May 13 23:40:54.648208 sshd-session[3209]: pam_unix(sshd:session): session closed for user core
May 13 23:40:54.652260 systemd[1]: sshd@7-10.0.0.42:22-10.0.0.1:60502.service: Deactivated successfully.
May 13 23:40:54.654857 systemd[1]: session-8.scope: Deactivated successfully.
May 13 23:40:54.656336 systemd-logind[1433]: Session 8 logged out. Waiting for processes to exit.
May 13 23:40:54.657303 systemd-logind[1433]: Removed session 8.
May 13 23:40:54.867615 containerd[1456]: time="2025-05-13T23:40:54.867562963Z" level=info msg="CreateContainer within sandbox \"801ace722aefaaf18777a6fe9b56c2b174db1aa913ebf29f8b989bc0ffd105b7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 13 23:40:54.898930 containerd[1456]: time="2025-05-13T23:40:54.898877489Z" level=info msg="Container e1199efd0ffafa39492c55e4cdef6b023c1f4a83a6ea89b12b4de088e08a57a3: CDI devices from CRI Config.CDIDevices: []"
May 13 23:40:54.902712 kubelet[2692]: I0513 23:40:54.902472 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-l5fx9" podStartSLOduration=1.233286758 podStartE2EDuration="12.902455802s" podCreationTimestamp="2025-05-13 23:40:42 +0000 UTC" firstStartedPulling="2025-05-13 23:40:42.84830028 +0000 UTC m=+17.146360532" lastFinishedPulling="2025-05-13 23:40:54.517469324 +0000 UTC m=+28.815529576" observedRunningTime="2025-05-13 23:40:54.902103798 +0000 UTC m=+29.200164050" watchObservedRunningTime="2025-05-13 23:40:54.902455802 +0000 UTC m=+29.200516054"
May 13 23:40:54.913739 containerd[1456]: time="2025-05-13T23:40:54.913675264Z" level=info msg="CreateContainer within sandbox \"801ace722aefaaf18777a6fe9b56c2b174db1aa913ebf29f8b989bc0ffd105b7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e1199efd0ffafa39492c55e4cdef6b023c1f4a83a6ea89b12b4de088e08a57a3\""
May 13 23:40:54.914961 containerd[1456]: time="2025-05-13T23:40:54.914927756Z" level=info msg="StartContainer for \"e1199efd0ffafa39492c55e4cdef6b023c1f4a83a6ea89b12b4de088e08a57a3\""
May 13 23:40:54.916354 containerd[1456]: time="2025-05-13T23:40:54.916311368Z" level=info msg="connecting to shim e1199efd0ffafa39492c55e4cdef6b023c1f4a83a6ea89b12b4de088e08a57a3" address="unix:///run/containerd/s/ccfb55a5659f9c4c9c4d93ae8ebd9fb42b0f7ef374605fea480c8d30dab015a7" protocol=ttrpc version=3
May 13 23:40:54.940751 systemd[1]: Started cri-containerd-e1199efd0ffafa39492c55e4cdef6b023c1f4a83a6ea89b12b4de088e08a57a3.scope - libcontainer container e1199efd0ffafa39492c55e4cdef6b023c1f4a83a6ea89b12b4de088e08a57a3.
May 13 23:40:55.000255 containerd[1456]: time="2025-05-13T23:40:55.000218535Z" level=info msg="StartContainer for \"e1199efd0ffafa39492c55e4cdef6b023c1f4a83a6ea89b12b4de088e08a57a3\" returns successfully"
May 13 23:40:55.000965 systemd[1]: cri-containerd-e1199efd0ffafa39492c55e4cdef6b023c1f4a83a6ea89b12b4de088e08a57a3.scope: Deactivated successfully.
May 13 23:40:55.001749 containerd[1456]: time="2025-05-13T23:40:55.001662748Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e1199efd0ffafa39492c55e4cdef6b023c1f4a83a6ea89b12b4de088e08a57a3\" id:\"e1199efd0ffafa39492c55e4cdef6b023c1f4a83a6ea89b12b4de088e08a57a3\" pid:3275 exited_at:{seconds:1747179655 nanos:1432386}"
May 13 23:40:55.001749 containerd[1456]: time="2025-05-13T23:40:55.001660508Z" level=info msg="received exit event container_id:\"e1199efd0ffafa39492c55e4cdef6b023c1f4a83a6ea89b12b4de088e08a57a3\" id:\"e1199efd0ffafa39492c55e4cdef6b023c1f4a83a6ea89b12b4de088e08a57a3\" pid:3275 exited_at:{seconds:1747179655 nanos:1432386}"
May 13 23:40:55.878058 containerd[1456]: time="2025-05-13T23:40:55.877351810Z" level=info msg="CreateContainer within sandbox \"801ace722aefaaf18777a6fe9b56c2b174db1aa913ebf29f8b989bc0ffd105b7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 13 23:40:55.887622 containerd[1456]: time="2025-05-13T23:40:55.886869452Z" level=info msg="Container f81df2d58679a3c7037f7f5e038893cbe38aa9020362cd6631308e2dd8563003: CDI devices from CRI Config.CDIDevices: []"
May 13 23:40:55.891848 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount103172880.mount: Deactivated successfully.
May 13 23:40:55.899488 containerd[1456]: time="2025-05-13T23:40:55.899333439Z" level=info msg="CreateContainer within sandbox \"801ace722aefaaf18777a6fe9b56c2b174db1aa913ebf29f8b989bc0ffd105b7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f81df2d58679a3c7037f7f5e038893cbe38aa9020362cd6631308e2dd8563003\""
May 13 23:40:55.901247 containerd[1456]: time="2025-05-13T23:40:55.900162766Z" level=info msg="StartContainer for \"f81df2d58679a3c7037f7f5e038893cbe38aa9020362cd6631308e2dd8563003\""
May 13 23:40:55.901247 containerd[1456]: time="2025-05-13T23:40:55.900936253Z" level=info msg="connecting to shim f81df2d58679a3c7037f7f5e038893cbe38aa9020362cd6631308e2dd8563003" address="unix:///run/containerd/s/ccfb55a5659f9c4c9c4d93ae8ebd9fb42b0f7ef374605fea480c8d30dab015a7" protocol=ttrpc version=3
May 13 23:40:55.929611 systemd[1]: Started cri-containerd-f81df2d58679a3c7037f7f5e038893cbe38aa9020362cd6631308e2dd8563003.scope - libcontainer container f81df2d58679a3c7037f7f5e038893cbe38aa9020362cd6631308e2dd8563003.
May 13 23:40:55.971741 systemd[1]: cri-containerd-f81df2d58679a3c7037f7f5e038893cbe38aa9020362cd6631308e2dd8563003.scope: Deactivated successfully.
May 13 23:40:55.972288 containerd[1456]: time="2025-05-13T23:40:55.972254064Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f81df2d58679a3c7037f7f5e038893cbe38aa9020362cd6631308e2dd8563003\" id:\"f81df2d58679a3c7037f7f5e038893cbe38aa9020362cd6631308e2dd8563003\" pid:3314 exited_at:{seconds:1747179655 nanos:971929981}"
May 13 23:40:55.977632 containerd[1456]: time="2025-05-13T23:40:55.977496428Z" level=info msg="received exit event container_id:\"f81df2d58679a3c7037f7f5e038893cbe38aa9020362cd6631308e2dd8563003\" id:\"f81df2d58679a3c7037f7f5e038893cbe38aa9020362cd6631308e2dd8563003\" pid:3314 exited_at:{seconds:1747179655 nanos:971929981}"
May 13 23:40:55.988182 containerd[1456]: time="2025-05-13T23:40:55.987996118Z" level=info msg="StartContainer for \"f81df2d58679a3c7037f7f5e038893cbe38aa9020362cd6631308e2dd8563003\" returns successfully"
May 13 23:40:55.999873 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f81df2d58679a3c7037f7f5e038893cbe38aa9020362cd6631308e2dd8563003-rootfs.mount: Deactivated successfully.
May 13 23:40:56.885627 containerd[1456]: time="2025-05-13T23:40:56.885579255Z" level=info msg="CreateContainer within sandbox \"801ace722aefaaf18777a6fe9b56c2b174db1aa913ebf29f8b989bc0ffd105b7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 13 23:40:56.936209 containerd[1456]: time="2025-05-13T23:40:56.935055652Z" level=info msg="Container bc6350db48a5347cff0cf7965a57769d06eee976328b91a269775c615a6c8001: CDI devices from CRI Config.CDIDevices: []"
May 13 23:40:56.948839 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3798458126.mount: Deactivated successfully.
May 13 23:40:56.951775 containerd[1456]: time="2025-05-13T23:40:56.951740106Z" level=info msg="CreateContainer within sandbox \"801ace722aefaaf18777a6fe9b56c2b174db1aa913ebf29f8b989bc0ffd105b7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bc6350db48a5347cff0cf7965a57769d06eee976328b91a269775c615a6c8001\""
May 13 23:40:56.952585 containerd[1456]: time="2025-05-13T23:40:56.952555033Z" level=info msg="StartContainer for \"bc6350db48a5347cff0cf7965a57769d06eee976328b91a269775c615a6c8001\""
May 13 23:40:56.953540 containerd[1456]: time="2025-05-13T23:40:56.953505440Z" level=info msg="connecting to shim bc6350db48a5347cff0cf7965a57769d06eee976328b91a269775c615a6c8001" address="unix:///run/containerd/s/ccfb55a5659f9c4c9c4d93ae8ebd9fb42b0f7ef374605fea480c8d30dab015a7" protocol=ttrpc version=3
May 13 23:40:56.992608 systemd[1]: Started cri-containerd-bc6350db48a5347cff0cf7965a57769d06eee976328b91a269775c615a6c8001.scope - libcontainer container bc6350db48a5347cff0cf7965a57769d06eee976328b91a269775c615a6c8001.
May 13 23:40:57.025208 containerd[1456]: time="2025-05-13T23:40:57.025160604Z" level=info msg="StartContainer for \"bc6350db48a5347cff0cf7965a57769d06eee976328b91a269775c615a6c8001\" returns successfully"
May 13 23:40:57.179932 containerd[1456]: time="2025-05-13T23:40:57.179620647Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bc6350db48a5347cff0cf7965a57769d06eee976328b91a269775c615a6c8001\" id:\"ac077af43a4c4ea53cfe61069f220a566e9e4d8b145c12b80dc6b6297b34af4c\" pid:3381 exited_at:{seconds:1747179657 nanos:179048923}"
May 13 23:40:57.249740 kubelet[2692]: I0513 23:40:57.245914 2692 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
May 13 23:40:57.274205 kubelet[2692]: I0513 23:40:57.273066 2692 topology_manager.go:215] "Topology Admit Handler" podUID="faf9ac40-2011-42dc-b4a5-28d6779b5cb6" podNamespace="kube-system" podName="coredns-7db6d8ff4d-ptcmp"
May 13 23:40:57.298246 kubelet[2692]: I0513 23:40:57.290780 2692 topology_manager.go:215] "Topology Admit Handler" podUID="f57a7df7-f547-4ac0-a9dc-04b79f1184bb" podNamespace="kube-system" podName="coredns-7db6d8ff4d-pvgk8"
May 13 23:40:57.303854 systemd[1]: Created slice kubepods-burstable-podfaf9ac40_2011_42dc_b4a5_28d6779b5cb6.slice - libcontainer container kubepods-burstable-podfaf9ac40_2011_42dc_b4a5_28d6779b5cb6.slice.
May 13 23:40:57.319765 systemd[1]: Created slice kubepods-burstable-podf57a7df7_f547_4ac0_a9dc_04b79f1184bb.slice - libcontainer container kubepods-burstable-podf57a7df7_f547_4ac0_a9dc_04b79f1184bb.slice.
May 13 23:40:57.411802 kubelet[2692]: I0513 23:40:57.411756 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px5mq\" (UniqueName: \"kubernetes.io/projected/f57a7df7-f547-4ac0-a9dc-04b79f1184bb-kube-api-access-px5mq\") pod \"coredns-7db6d8ff4d-pvgk8\" (UID: \"f57a7df7-f547-4ac0-a9dc-04b79f1184bb\") " pod="kube-system/coredns-7db6d8ff4d-pvgk8"
May 13 23:40:57.411802 kubelet[2692]: I0513 23:40:57.411805 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/faf9ac40-2011-42dc-b4a5-28d6779b5cb6-config-volume\") pod \"coredns-7db6d8ff4d-ptcmp\" (UID: \"faf9ac40-2011-42dc-b4a5-28d6779b5cb6\") " pod="kube-system/coredns-7db6d8ff4d-ptcmp"
May 13 23:40:57.411971 kubelet[2692]: I0513 23:40:57.411832 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljntz\" (UniqueName: \"kubernetes.io/projected/faf9ac40-2011-42dc-b4a5-28d6779b5cb6-kube-api-access-ljntz\") pod \"coredns-7db6d8ff4d-ptcmp\" (UID: \"faf9ac40-2011-42dc-b4a5-28d6779b5cb6\") " pod="kube-system/coredns-7db6d8ff4d-ptcmp"
May 13 23:40:57.411971 kubelet[2692]: I0513 23:40:57.411853 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f57a7df7-f547-4ac0-a9dc-04b79f1184bb-config-volume\") pod \"coredns-7db6d8ff4d-pvgk8\" (UID: \"f57a7df7-f547-4ac0-a9dc-04b79f1184bb\") " pod="kube-system/coredns-7db6d8ff4d-pvgk8"
May 13 23:40:57.618328 containerd[1456]: time="2025-05-13T23:40:57.618275990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ptcmp,Uid:faf9ac40-2011-42dc-b4a5-28d6779b5cb6,Namespace:kube-system,Attempt:0,}"
May 13 23:40:57.625774 containerd[1456]: time="2025-05-13T23:40:57.625656326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pvgk8,Uid:f57a7df7-f547-4ac0-a9dc-04b79f1184bb,Namespace:kube-system,Attempt:0,}"
May 13 23:40:57.911319 kubelet[2692]: I0513 23:40:57.911134 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-65j5x" podStartSLOduration=5.495764459 podStartE2EDuration="15.911117715s" podCreationTimestamp="2025-05-13 23:40:42 +0000 UTC" firstStartedPulling="2025-05-13 23:40:42.701490929 +0000 UTC m=+16.999551181" lastFinishedPulling="2025-05-13 23:40:53.116844185 +0000 UTC m=+27.414904437" observedRunningTime="2025-05-13 23:40:57.910906273 +0000 UTC m=+32.208966525" watchObservedRunningTime="2025-05-13 23:40:57.911117715 +0000 UTC m=+32.209177927"
May 13 23:40:59.348316 systemd-networkd[1390]: cilium_host: Link UP
May 13 23:40:59.348446 systemd-networkd[1390]: cilium_net: Link UP
May 13 23:40:59.348570 systemd-networkd[1390]: cilium_net: Gained carrier
May 13 23:40:59.348692 systemd-networkd[1390]: cilium_host: Gained carrier
May 13 23:40:59.444566 systemd-networkd[1390]: cilium_vxlan: Link UP
May 13 23:40:59.444572 systemd-networkd[1390]: cilium_vxlan: Gained carrier
May 13 23:40:59.669545 systemd[1]: Started sshd@8-10.0.0.42:22-10.0.0.1:60508.service - OpenSSH per-connection server daemon (10.0.0.1:60508).
May 13 23:40:59.737199 sshd[3577]: Accepted publickey for core from 10.0.0.1 port 60508 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 13 23:40:59.739556 sshd-session[3577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:40:59.748106 systemd-logind[1433]: New session 9 of user core.
May 13 23:40:59.750535 systemd-networkd[1390]: cilium_net: Gained IPv6LL
May 13 23:40:59.759671 systemd[1]: Started session-9.scope - Session 9 of User core.
May 13 23:40:59.775458 kernel: NET: Registered PF_ALG protocol family
May 13 23:40:59.904245 sshd[3591]: Connection closed by 10.0.0.1 port 60508
May 13 23:40:59.905732 sshd-session[3577]: pam_unix(sshd:session): session closed for user core
May 13 23:40:59.910809 systemd[1]: sshd@8-10.0.0.42:22-10.0.0.1:60508.service: Deactivated successfully.
May 13 23:40:59.916361 systemd[1]: session-9.scope: Deactivated successfully.
May 13 23:40:59.917637 systemd-logind[1433]: Session 9 logged out. Waiting for processes to exit.
May 13 23:40:59.918795 systemd-logind[1433]: Removed session 9.
May 13 23:41:00.262587 systemd-networkd[1390]: cilium_host: Gained IPv6LL
May 13 23:41:00.429473 systemd-networkd[1390]: lxc_health: Link UP
May 13 23:41:00.441120 systemd-networkd[1390]: lxc_health: Gained carrier
May 13 23:41:00.518580 systemd-networkd[1390]: cilium_vxlan: Gained IPv6LL
May 13 23:41:00.769234 systemd-networkd[1390]: lxc6ca0757b8abf: Link UP
May 13 23:41:00.786634 kernel: eth0: renamed from tmpb8ff1
May 13 23:41:00.796800 kernel: eth0: renamed from tmp81224
May 13 23:41:00.802236 systemd-networkd[1390]: lxcff231896bf11: Link UP
May 13 23:41:00.802490 systemd-networkd[1390]: lxc6ca0757b8abf: Gained carrier
May 13 23:41:00.807241 systemd-networkd[1390]: lxcff231896bf11: Gained carrier
May 13 23:41:01.734589 systemd-networkd[1390]: lxc_health: Gained IPv6LL
May 13 23:41:01.862631 systemd-networkd[1390]: lxc6ca0757b8abf: Gained IPv6LL
May 13 23:41:02.118593 systemd-networkd[1390]: lxcff231896bf11: Gained IPv6LL
May 13 23:41:04.729764 containerd[1456]: time="2025-05-13T23:41:04.729712309Z" level=info msg="connecting to shim b8ff1bbf03ef21f665d99b86caf15fae508472de48a60e4512599b7acbb2c3e5" address="unix:///run/containerd/s/3378aa61324e71d15758c39d8177d36b6e216fbec534b56b7928b581d0ac7fc3" namespace=k8s.io protocol=ttrpc version=3
May 13 23:41:04.730662 containerd[1456]: time="2025-05-13T23:41:04.729735189Z" level=info msg="connecting to shim 81224f8fd3730f39d8bf8883db0ba948ea3ef51bd421c380ee98e1f2815a0b0d" address="unix:///run/containerd/s/65d745570c1836396f31d6e7c5543c372bd2f22920ddb8a04be19e1e35e7df2f" namespace=k8s.io protocol=ttrpc version=3
May 13 23:41:04.762627 systemd[1]: Started cri-containerd-81224f8fd3730f39d8bf8883db0ba948ea3ef51bd421c380ee98e1f2815a0b0d.scope - libcontainer container 81224f8fd3730f39d8bf8883db0ba948ea3ef51bd421c380ee98e1f2815a0b0d.
May 13 23:41:04.766951 systemd[1]: Started cri-containerd-b8ff1bbf03ef21f665d99b86caf15fae508472de48a60e4512599b7acbb2c3e5.scope - libcontainer container b8ff1bbf03ef21f665d99b86caf15fae508472de48a60e4512599b7acbb2c3e5.
May 13 23:41:04.778438 systemd-resolved[1318]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 13 23:41:04.778989 systemd-resolved[1318]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 13 23:41:04.812450 containerd[1456]: time="2025-05-13T23:41:04.812347105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ptcmp,Uid:faf9ac40-2011-42dc-b4a5-28d6779b5cb6,Namespace:kube-system,Attempt:0,} returns sandbox id \"b8ff1bbf03ef21f665d99b86caf15fae508472de48a60e4512599b7acbb2c3e5\""
May 13 23:41:04.815648 containerd[1456]: time="2025-05-13T23:41:04.815539161Z" level=info msg="CreateContainer within sandbox \"b8ff1bbf03ef21f665d99b86caf15fae508472de48a60e4512599b7acbb2c3e5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 13 23:41:04.820776 containerd[1456]: time="2025-05-13T23:41:04.820312623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pvgk8,Uid:f57a7df7-f547-4ac0-a9dc-04b79f1184bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"81224f8fd3730f39d8bf8883db0ba948ea3ef51bd421c380ee98e1f2815a0b0d\""
May 13 23:41:04.824179 containerd[1456]: time="2025-05-13T23:41:04.824142482Z" level=info msg="CreateContainer within sandbox \"81224f8fd3730f39d8bf8883db0ba948ea3ef51bd421c380ee98e1f2815a0b0d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 13 23:41:04.833298 containerd[1456]: time="2025-05-13T23:41:04.832997084Z" level=info msg="Container 0d7aaa122f4b233d35cfe9395d1810cdde7648ab869f93d0ab496e1771ab4c50: CDI devices from CRI Config.CDIDevices: []"
May 13 23:41:04.836324 containerd[1456]: time="2025-05-13T23:41:04.836273380Z" level=info msg="Container 8f55d3ae4f36a29e434941e391ceefa62fd2e3be2b415c46cdee1c18543eaf0e: CDI devices from CRI Config.CDIDevices: []"
May 13 23:41:04.841276 containerd[1456]: time="2025-05-13T23:41:04.841235444Z" level=info msg="CreateContainer within sandbox \"b8ff1bbf03ef21f665d99b86caf15fae508472de48a60e4512599b7acbb2c3e5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0d7aaa122f4b233d35cfe9395d1810cdde7648ab869f93d0ab496e1771ab4c50\""
May 13 23:41:04.842025 containerd[1456]: time="2025-05-13T23:41:04.841996727Z" level=info msg="StartContainer for \"0d7aaa122f4b233d35cfe9395d1810cdde7648ab869f93d0ab496e1771ab4c50\""
May 13 23:41:04.842955 containerd[1456]: time="2025-05-13T23:41:04.842929452Z" level=info msg="connecting to shim 0d7aaa122f4b233d35cfe9395d1810cdde7648ab869f93d0ab496e1771ab4c50" address="unix:///run/containerd/s/3378aa61324e71d15758c39d8177d36b6e216fbec534b56b7928b581d0ac7fc3" protocol=ttrpc version=3
May 13 23:41:04.847008 containerd[1456]: time="2025-05-13T23:41:04.846949991Z" level=info msg="CreateContainer within sandbox \"81224f8fd3730f39d8bf8883db0ba948ea3ef51bd421c380ee98e1f2815a0b0d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8f55d3ae4f36a29e434941e391ceefa62fd2e3be2b415c46cdee1c18543eaf0e\""
May 13 23:41:04.847717 containerd[1456]: time="2025-05-13T23:41:04.847674355Z" level=info msg="StartContainer for \"8f55d3ae4f36a29e434941e391ceefa62fd2e3be2b415c46cdee1c18543eaf0e\""
May 13 23:41:04.848893 containerd[1456]: time="2025-05-13T23:41:04.848849080Z" level=info msg="connecting to shim 8f55d3ae4f36a29e434941e391ceefa62fd2e3be2b415c46cdee1c18543eaf0e" address="unix:///run/containerd/s/65d745570c1836396f31d6e7c5543c372bd2f22920ddb8a04be19e1e35e7df2f" protocol=ttrpc version=3
May 13 23:41:04.863598 systemd[1]: Started cri-containerd-0d7aaa122f4b233d35cfe9395d1810cdde7648ab869f93d0ab496e1771ab4c50.scope - libcontainer container 0d7aaa122f4b233d35cfe9395d1810cdde7648ab869f93d0ab496e1771ab4c50.
May 13 23:41:04.868219 systemd[1]: Started cri-containerd-8f55d3ae4f36a29e434941e391ceefa62fd2e3be2b415c46cdee1c18543eaf0e.scope - libcontainer container 8f55d3ae4f36a29e434941e391ceefa62fd2e3be2b415c46cdee1c18543eaf0e.
May 13 23:41:04.899202 containerd[1456]: time="2025-05-13T23:41:04.899157521Z" level=info msg="StartContainer for \"0d7aaa122f4b233d35cfe9395d1810cdde7648ab869f93d0ab496e1771ab4c50\" returns successfully"
May 13 23:41:04.906775 containerd[1456]: time="2025-05-13T23:41:04.906688477Z" level=info msg="StartContainer for \"8f55d3ae4f36a29e434941e391ceefa62fd2e3be2b415c46cdee1c18543eaf0e\" returns successfully"
May 13 23:41:04.927515 systemd[1]: Started sshd@9-10.0.0.42:22-10.0.0.1:47866.service - OpenSSH per-connection server daemon (10.0.0.1:47866).
May 13 23:41:04.939374 kubelet[2692]: I0513 23:41:04.934083 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-ptcmp" podStartSLOduration=22.934062489 podStartE2EDuration="22.934062489s" podCreationTimestamp="2025-05-13 23:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:41:04.93231428 +0000 UTC m=+39.230374532" watchObservedRunningTime="2025-05-13 23:41:04.934062489 +0000 UTC m=+39.232122741"
May 13 23:41:05.011186 sshd[4036]: Accepted publickey for core from 10.0.0.1 port 47866 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 13 23:41:05.014322 sshd-session[4036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:41:05.026698 systemd-logind[1433]: New session 10 of user core.
May 13 23:41:05.033613 systemd[1]: Started session-10.scope - Session 10 of User core.
May 13 23:41:05.207578 sshd[4051]: Connection closed by 10.0.0.1 port 47866
May 13 23:41:05.208035 sshd-session[4036]: pam_unix(sshd:session): session closed for user core
May 13 23:41:05.211617 systemd[1]: sshd@9-10.0.0.42:22-10.0.0.1:47866.service: Deactivated successfully.
May 13 23:41:05.213831 systemd[1]: session-10.scope: Deactivated successfully.
May 13 23:41:05.214669 systemd-logind[1433]: Session 10 logged out. Waiting for processes to exit.
May 13 23:41:05.215463 systemd-logind[1433]: Removed session 10.
May 13 23:41:05.704107 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3635942118.mount: Deactivated successfully.
May 13 23:41:05.925101 kubelet[2692]: I0513 23:41:05.925042 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-pvgk8" podStartSLOduration=23.925026361 podStartE2EDuration="23.925026361s" podCreationTimestamp="2025-05-13 23:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:41:05.923933077 +0000 UTC m=+40.221993329" watchObservedRunningTime="2025-05-13 23:41:05.925026361 +0000 UTC m=+40.223086613"
May 13 23:41:10.226523 systemd[1]: Started sshd@10-10.0.0.42:22-10.0.0.1:47870.service - OpenSSH per-connection server daemon (10.0.0.1:47870).
May 13 23:41:10.305326 sshd[4079]: Accepted publickey for core from 10.0.0.1 port 47870 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 13 23:41:10.307289 sshd-session[4079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:41:10.315598 systemd-logind[1433]: New session 11 of user core.
May 13 23:41:10.327603 systemd[1]: Started session-11.scope - Session 11 of User core.
May 13 23:41:10.457928 sshd[4081]: Connection closed by 10.0.0.1 port 47870
May 13 23:41:10.459830 sshd-session[4079]: pam_unix(sshd:session): session closed for user core
May 13 23:41:10.475361 systemd[1]: sshd@10-10.0.0.42:22-10.0.0.1:47870.service: Deactivated successfully.
May 13 23:41:10.478050 systemd[1]: session-11.scope: Deactivated successfully.
May 13 23:41:10.481285 systemd-logind[1433]: Session 11 logged out. Waiting for processes to exit.
May 13 23:41:10.482936 systemd[1]: Started sshd@11-10.0.0.42:22-10.0.0.1:47884.service - OpenSSH per-connection server daemon (10.0.0.1:47884).
May 13 23:41:10.484334 systemd-logind[1433]: Removed session 11.
May 13 23:41:10.534141 sshd[4097]: Accepted publickey for core from 10.0.0.1 port 47884 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 13 23:41:10.535495 sshd-session[4097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:41:10.539927 systemd-logind[1433]: New session 12 of user core.
May 13 23:41:10.549572 systemd[1]: Started session-12.scope - Session 12 of User core.
May 13 23:41:10.711531 sshd[4100]: Connection closed by 10.0.0.1 port 47884
May 13 23:41:10.714349 sshd-session[4097]: pam_unix(sshd:session): session closed for user core
May 13 23:41:10.727756 systemd[1]: sshd@11-10.0.0.42:22-10.0.0.1:47884.service: Deactivated successfully.
May 13 23:41:10.731692 systemd[1]: session-12.scope: Deactivated successfully.
May 13 23:41:10.733604 systemd-logind[1433]: Session 12 logged out. Waiting for processes to exit.
May 13 23:41:10.735282 systemd[1]: Started sshd@12-10.0.0.42:22-10.0.0.1:47898.service - OpenSSH per-connection server daemon (10.0.0.1:47898).
May 13 23:41:10.737003 systemd-logind[1433]: Removed session 12.
May 13 23:41:10.793809 sshd[4112]: Accepted publickey for core from 10.0.0.1 port 47898 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 13 23:41:10.795369 sshd-session[4112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:41:10.799809 systemd-logind[1433]: New session 13 of user core.
May 13 23:41:10.807590 systemd[1]: Started session-13.scope - Session 13 of User core.
May 13 23:41:10.927071 sshd[4115]: Connection closed by 10.0.0.1 port 47898
May 13 23:41:10.927499 sshd-session[4112]: pam_unix(sshd:session): session closed for user core
May 13 23:41:10.931503 systemd[1]: sshd@12-10.0.0.42:22-10.0.0.1:47898.service: Deactivated successfully.
May 13 23:41:10.933350 systemd[1]: session-13.scope: Deactivated successfully.
May 13 23:41:10.934266 systemd-logind[1433]: Session 13 logged out. Waiting for processes to exit.
May 13 23:41:10.935077 systemd-logind[1433]: Removed session 13.
May 13 23:41:15.941366 systemd[1]: Started sshd@13-10.0.0.42:22-10.0.0.1:40232.service - OpenSSH per-connection server daemon (10.0.0.1:40232).
May 13 23:41:16.000442 sshd[4134]: Accepted publickey for core from 10.0.0.1 port 40232 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 13 23:41:16.001941 sshd-session[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:41:16.007207 systemd-logind[1433]: New session 14 of user core.
May 13 23:41:16.025650 systemd[1]: Started session-14.scope - Session 14 of User core.
May 13 23:41:16.158494 sshd[4136]: Connection closed by 10.0.0.1 port 40232
May 13 23:41:16.158996 sshd-session[4134]: pam_unix(sshd:session): session closed for user core
May 13 23:41:16.162573 systemd[1]: sshd@13-10.0.0.42:22-10.0.0.1:40232.service: Deactivated successfully.
May 13 23:41:16.167117 systemd[1]: session-14.scope: Deactivated successfully.
May 13 23:41:16.169656 systemd-logind[1433]: Session 14 logged out. Waiting for processes to exit.
May 13 23:41:16.170812 systemd-logind[1433]: Removed session 14.
May 13 23:41:21.170719 systemd[1]: Started sshd@14-10.0.0.42:22-10.0.0.1:40236.service - OpenSSH per-connection server daemon (10.0.0.1:40236).
May 13 23:41:21.222457 sshd[4149]: Accepted publickey for core from 10.0.0.1 port 40236 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 13 23:41:21.223314 sshd-session[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:41:21.227952 systemd-logind[1433]: New session 15 of user core.
May 13 23:41:21.235585 systemd[1]: Started session-15.scope - Session 15 of User core.
May 13 23:41:21.352570 sshd[4151]: Connection closed by 10.0.0.1 port 40236
May 13 23:41:21.353294 sshd-session[4149]: pam_unix(sshd:session): session closed for user core
May 13 23:41:21.367170 systemd[1]: sshd@14-10.0.0.42:22-10.0.0.1:40236.service: Deactivated successfully.
May 13 23:41:21.369433 systemd[1]: session-15.scope: Deactivated successfully.
May 13 23:41:21.370250 systemd-logind[1433]: Session 15 logged out. Waiting for processes to exit.
May 13 23:41:21.372848 systemd[1]: Started sshd@15-10.0.0.42:22-10.0.0.1:40246.service - OpenSSH per-connection server daemon (10.0.0.1:40246).
May 13 23:41:21.374139 systemd-logind[1433]: Removed session 15.
May 13 23:41:21.427967 sshd[4163]: Accepted publickey for core from 10.0.0.1 port 40246 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 13 23:41:21.429145 sshd-session[4163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:41:21.434061 systemd-logind[1433]: New session 16 of user core.
May 13 23:41:21.454106 systemd[1]: Started session-16.scope - Session 16 of User core.
May 13 23:41:21.687139 sshd[4166]: Connection closed by 10.0.0.1 port 40246
May 13 23:41:21.687732 sshd-session[4163]: pam_unix(sshd:session): session closed for user core
May 13 23:41:21.703087 systemd[1]: sshd@15-10.0.0.42:22-10.0.0.1:40246.service: Deactivated successfully.
May 13 23:41:21.704880 systemd[1]: session-16.scope: Deactivated successfully.
May 13 23:41:21.705949 systemd-logind[1433]: Session 16 logged out. Waiting for processes to exit.
May 13 23:41:21.707985 systemd[1]: Started sshd@16-10.0.0.42:22-10.0.0.1:40248.service - OpenSSH per-connection server daemon (10.0.0.1:40248).
May 13 23:41:21.708711 systemd-logind[1433]: Removed session 16.
May 13 23:41:21.766459 sshd[4176]: Accepted publickey for core from 10.0.0.1 port 40248 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 13 23:41:21.767227 sshd-session[4176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:41:21.771105 systemd-logind[1433]: New session 17 of user core.
May 13 23:41:21.778612 systemd[1]: Started session-17.scope - Session 17 of User core.
May 13 23:41:22.982946 sshd[4179]: Connection closed by 10.0.0.1 port 40248
May 13 23:41:22.983776 sshd-session[4176]: pam_unix(sshd:session): session closed for user core
May 13 23:41:22.994768 systemd[1]: sshd@16-10.0.0.42:22-10.0.0.1:40248.service: Deactivated successfully.
May 13 23:41:22.996332 systemd[1]: session-17.scope: Deactivated successfully.
May 13 23:41:22.999112 systemd-logind[1433]: Session 17 logged out. Waiting for processes to exit.
May 13 23:41:23.001241 systemd[1]: Started sshd@17-10.0.0.42:22-10.0.0.1:46088.service - OpenSSH per-connection server daemon (10.0.0.1:46088).
May 13 23:41:23.005667 systemd-logind[1433]: Removed session 17.
May 13 23:41:23.048451 sshd[4198]: Accepted publickey for core from 10.0.0.1 port 46088 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 13 23:41:23.049723 sshd-session[4198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:41:23.053458 systemd-logind[1433]: New session 18 of user core.
May 13 23:41:23.062584 systemd[1]: Started session-18.scope - Session 18 of User core.
May 13 23:41:23.277825 sshd[4201]: Connection closed by 10.0.0.1 port 46088
May 13 23:41:23.277950 sshd-session[4198]: pam_unix(sshd:session): session closed for user core
May 13 23:41:23.287155 systemd[1]: sshd@17-10.0.0.42:22-10.0.0.1:46088.service: Deactivated successfully.
May 13 23:41:23.289235 systemd[1]: session-18.scope: Deactivated successfully.
May 13 23:41:23.290874 systemd-logind[1433]: Session 18 logged out. Waiting for processes to exit.
May 13 23:41:23.292764 systemd[1]: Started sshd@18-10.0.0.42:22-10.0.0.1:46098.service - OpenSSH per-connection server daemon (10.0.0.1:46098).
May 13 23:41:23.293760 systemd-logind[1433]: Removed session 18.
May 13 23:41:23.344398 sshd[4212]: Accepted publickey for core from 10.0.0.1 port 46098 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 13 23:41:23.345238 sshd-session[4212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:41:23.352487 systemd-logind[1433]: New session 19 of user core.
May 13 23:41:23.361554 systemd[1]: Started session-19.scope - Session 19 of User core.
May 13 23:41:23.470891 sshd[4215]: Connection closed by 10.0.0.1 port 46098
May 13 23:41:23.471237 sshd-session[4212]: pam_unix(sshd:session): session closed for user core
May 13 23:41:23.475014 systemd[1]: sshd@18-10.0.0.42:22-10.0.0.1:46098.service: Deactivated successfully.
May 13 23:41:23.478347 systemd[1]: session-19.scope: Deactivated successfully.
May 13 23:41:23.479203 systemd-logind[1433]: Session 19 logged out. Waiting for processes to exit.
May 13 23:41:23.479956 systemd-logind[1433]: Removed session 19.
May 13 23:41:28.486627 systemd[1]: Started sshd@19-10.0.0.42:22-10.0.0.1:46112.service - OpenSSH per-connection server daemon (10.0.0.1:46112).
May 13 23:41:28.547239 sshd[4233]: Accepted publickey for core from 10.0.0.1 port 46112 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 13 23:41:28.548617 sshd-session[4233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:41:28.553749 systemd-logind[1433]: New session 20 of user core.
May 13 23:41:28.563609 systemd[1]: Started session-20.scope - Session 20 of User core.
May 13 23:41:28.671473 sshd[4235]: Connection closed by 10.0.0.1 port 46112
May 13 23:41:28.671935 sshd-session[4233]: pam_unix(sshd:session): session closed for user core
May 13 23:41:28.675825 systemd[1]: sshd@19-10.0.0.42:22-10.0.0.1:46112.service: Deactivated successfully.
May 13 23:41:28.677888 systemd[1]: session-20.scope: Deactivated successfully.
May 13 23:41:28.678776 systemd-logind[1433]: Session 20 logged out. Waiting for processes to exit.
May 13 23:41:28.679627 systemd-logind[1433]: Removed session 20.
May 13 23:41:33.683975 systemd[1]: Started sshd@20-10.0.0.42:22-10.0.0.1:41614.service - OpenSSH per-connection server daemon (10.0.0.1:41614).
May 13 23:41:33.744903 sshd[4248]: Accepted publickey for core from 10.0.0.1 port 41614 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 13 23:41:33.746349 sshd-session[4248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:41:33.750208 systemd-logind[1433]: New session 21 of user core.
May 13 23:41:33.762606 systemd[1]: Started session-21.scope - Session 21 of User core.
May 13 23:41:33.871729 sshd[4250]: Connection closed by 10.0.0.1 port 41614
May 13 23:41:33.872094 sshd-session[4248]: pam_unix(sshd:session): session closed for user core
May 13 23:41:33.876003 systemd[1]: sshd@20-10.0.0.42:22-10.0.0.1:41614.service: Deactivated successfully.
May 13 23:41:33.878534 systemd[1]: session-21.scope: Deactivated successfully.
May 13 23:41:33.879391 systemd-logind[1433]: Session 21 logged out. Waiting for processes to exit.
May 13 23:41:33.880359 systemd-logind[1433]: Removed session 21.
May 13 23:41:38.883866 systemd[1]: Started sshd@21-10.0.0.42:22-10.0.0.1:41622.service - OpenSSH per-connection server daemon (10.0.0.1:41622).
May 13 23:41:38.948535 sshd[4264]: Accepted publickey for core from 10.0.0.1 port 41622 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 13 23:41:38.949819 sshd-session[4264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:41:38.954535 systemd-logind[1433]: New session 22 of user core.
May 13 23:41:38.965644 systemd[1]: Started session-22.scope - Session 22 of User core.
May 13 23:41:39.087302 sshd[4266]: Connection closed by 10.0.0.1 port 41622
May 13 23:41:39.087702 sshd-session[4264]: pam_unix(sshd:session): session closed for user core
May 13 23:41:39.097038 systemd[1]: sshd@21-10.0.0.42:22-10.0.0.1:41622.service: Deactivated successfully.
May 13 23:41:39.102993 systemd[1]: session-22.scope: Deactivated successfully.
May 13 23:41:39.104675 systemd-logind[1433]: Session 22 logged out. Waiting for processes to exit.
May 13 23:41:39.106083 systemd[1]: Started sshd@22-10.0.0.42:22-10.0.0.1:41630.service - OpenSSH per-connection server daemon (10.0.0.1:41630).
May 13 23:41:39.109134 systemd-logind[1433]: Removed session 22.
May 13 23:41:39.165072 sshd[4279]: Accepted publickey for core from 10.0.0.1 port 41630 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 13 23:41:39.165718 sshd-session[4279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:41:39.172071 systemd-logind[1433]: New session 23 of user core.
May 13 23:41:39.180644 systemd[1]: Started session-23.scope - Session 23 of User core.
May 13 23:41:40.978522 containerd[1456]: time="2025-05-13T23:41:40.978470937Z" level=info msg="StopContainer for \"85d577688454a4cb11b75e4f77ade12e4277224416d291875353718dae7344c3\" with timeout 30 (s)"
May 13 23:41:40.985098 containerd[1456]: time="2025-05-13T23:41:40.985055810Z" level=info msg="Stop container \"85d577688454a4cb11b75e4f77ade12e4277224416d291875353718dae7344c3\" with signal terminated"
May 13 23:41:40.998348 systemd[1]: cri-containerd-85d577688454a4cb11b75e4f77ade12e4277224416d291875353718dae7344c3.scope: Deactivated successfully.
May 13 23:41:41.001877 containerd[1456]: time="2025-05-13T23:41:41.000574092Z" level=info msg="TaskExit event in podsandbox handler container_id:\"85d577688454a4cb11b75e4f77ade12e4277224416d291875353718dae7344c3\" id:\"85d577688454a4cb11b75e4f77ade12e4277224416d291875353718dae7344c3\" pid:3239 exited_at:{seconds:1747179701 nanos:30789}"
May 13 23:41:41.001877 containerd[1456]: time="2025-05-13T23:41:41.001181917Z" level=info msg="received exit event container_id:\"85d577688454a4cb11b75e4f77ade12e4277224416d291875353718dae7344c3\" id:\"85d577688454a4cb11b75e4f77ade12e4277224416d291875353718dae7344c3\" pid:3239 exited_at:{seconds:1747179701 nanos:30789}"
May 13 23:41:41.012741 containerd[1456]: time="2025-05-13T23:41:41.012395369Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bc6350db48a5347cff0cf7965a57769d06eee976328b91a269775c615a6c8001\" id:\"ccd42bdf041a32b77adf5ad941a53fba43132c87273944bfb839ec4e91e630a8\" pid:4309 exited_at:{seconds:1747179701 nanos:12022954}"
May 13 23:41:41.019557 containerd[1456]: time="2025-05-13T23:41:41.019508095Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 13 23:41:41.026154 containerd[1456]: time="2025-05-13T23:41:41.026118161Z" level=info msg="StopContainer for \"bc6350db48a5347cff0cf7965a57769d06eee976328b91a269775c615a6c8001\" with timeout 2 (s)"
May 13 23:41:41.026780 containerd[1456]: time="2025-05-13T23:41:41.026557979Z" level=info msg="Stop container \"bc6350db48a5347cff0cf7965a57769d06eee976328b91a269775c615a6c8001\" with signal terminated"
May 13 23:41:41.030905 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-85d577688454a4cb11b75e4f77ade12e4277224416d291875353718dae7344c3-rootfs.mount: Deactivated successfully.
May 13 23:41:41.035322 systemd-networkd[1390]: lxc_health: Link DOWN
May 13 23:41:41.035325 systemd-networkd[1390]: lxc_health: Lost carrier
May 13 23:41:41.054126 containerd[1456]: time="2025-05-13T23:41:41.054076527Z" level=info msg="StopContainer for \"85d577688454a4cb11b75e4f77ade12e4277224416d291875353718dae7344c3\" returns successfully"
May 13 23:41:41.055310 containerd[1456]: time="2025-05-13T23:41:41.055277096Z" level=info msg="received exit event container_id:\"bc6350db48a5347cff0cf7965a57769d06eee976328b91a269775c615a6c8001\" id:\"bc6350db48a5347cff0cf7965a57769d06eee976328b91a269775c615a6c8001\" pid:3352 exited_at:{seconds:1747179701 nanos:55100208}"
May 13 23:41:41.055347 systemd[1]: cri-containerd-bc6350db48a5347cff0cf7965a57769d06eee976328b91a269775c615a6c8001.scope: Deactivated successfully.
May 13 23:41:41.056035 containerd[1456]: time="2025-05-13T23:41:41.055537106Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bc6350db48a5347cff0cf7965a57769d06eee976328b91a269775c615a6c8001\" id:\"bc6350db48a5347cff0cf7965a57769d06eee976328b91a269775c615a6c8001\" pid:3352 exited_at:{seconds:1747179701 nanos:55100208}"
May 13 23:41:41.056035 containerd[1456]: time="2025-05-13T23:41:41.055653751Z" level=info msg="StopPodSandbox for \"0fe0aa4e001614010e077c9f39d523530f99ad65a528af2c6a353d9fb9087bd8\""
May 13 23:41:41.055667 systemd[1]: cri-containerd-bc6350db48a5347cff0cf7965a57769d06eee976328b91a269775c615a6c8001.scope: Consumed 7.008s CPU time, 121.8M memory peak, 180K read from disk, 12.9M written to disk.
May 13 23:41:41.064253 containerd[1456]: time="2025-05-13T23:41:41.064179774Z" level=info msg="Container to stop \"85d577688454a4cb11b75e4f77ade12e4277224416d291875353718dae7344c3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 23:41:41.073755 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc6350db48a5347cff0cf7965a57769d06eee976328b91a269775c615a6c8001-rootfs.mount: Deactivated successfully.
May 13 23:41:41.074492 systemd[1]: cri-containerd-0fe0aa4e001614010e077c9f39d523530f99ad65a528af2c6a353d9fb9087bd8.scope: Deactivated successfully.
May 13 23:41:41.078212 containerd[1456]: time="2025-05-13T23:41:41.078063373Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0fe0aa4e001614010e077c9f39d523530f99ad65a528af2c6a353d9fb9087bd8\" id:\"0fe0aa4e001614010e077c9f39d523530f99ad65a528af2c6a353d9fb9087bd8\" pid:2926 exit_status:137 exited_at:{seconds:1747179701 nanos:75747560}"
May 13 23:41:41.099133 containerd[1456]: time="2025-05-13T23:41:41.099079459Z" level=info msg="StopContainer for \"bc6350db48a5347cff0cf7965a57769d06eee976328b91a269775c615a6c8001\" returns successfully"
May 13 23:41:41.099651 containerd[1456]: time="2025-05-13T23:41:41.099624081Z" level=info msg="StopPodSandbox for \"801ace722aefaaf18777a6fe9b56c2b174db1aa913ebf29f8b989bc0ffd105b7\""
May 13 23:41:41.099712 containerd[1456]: time="2025-05-13T23:41:41.099682644Z" level=info msg="Container to stop \"d531e436754679fa936d32ba22f122419ba073c195ba543525fdef71ff54f33c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 23:41:41.099712 containerd[1456]: time="2025-05-13T23:41:41.099694884Z" level=info msg="Container to stop \"e1199efd0ffafa39492c55e4cdef6b023c1f4a83a6ea89b12b4de088e08a57a3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 23:41:41.099712 containerd[1456]: time="2025-05-13T23:41:41.099703885Z" level=info msg="Container to stop \"f81df2d58679a3c7037f7f5e038893cbe38aa9020362cd6631308e2dd8563003\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 23:41:41.099712 containerd[1456]: time="2025-05-13T23:41:41.099711805Z" level=info msg="Container to stop \"bc6350db48a5347cff0cf7965a57769d06eee976328b91a269775c615a6c8001\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 23:41:41.099813 containerd[1456]: time="2025-05-13T23:41:41.099720525Z" level=info msg="Container to stop \"50619dd689a3ae297a332eab229e342b669a77bf3d99874d99e3cf8f9e644dc4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 23:41:41.106166 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0fe0aa4e001614010e077c9f39d523530f99ad65a528af2c6a353d9fb9087bd8-rootfs.mount: Deactivated successfully.
May 13 23:41:41.110611 systemd[1]: cri-containerd-801ace722aefaaf18777a6fe9b56c2b174db1aa913ebf29f8b989bc0ffd105b7.scope: Deactivated successfully.
May 13 23:41:41.116932 containerd[1456]: time="2025-05-13T23:41:41.116896577Z" level=info msg="shim disconnected" id=0fe0aa4e001614010e077c9f39d523530f99ad65a528af2c6a353d9fb9087bd8 namespace=k8s.io
May 13 23:41:41.117143 containerd[1456]: time="2025-05-13T23:41:41.116927058Z" level=warning msg="cleaning up after shim disconnected" id=0fe0aa4e001614010e077c9f39d523530f99ad65a528af2c6a353d9fb9087bd8 namespace=k8s.io
May 13 23:41:41.117143 containerd[1456]: time="2025-05-13T23:41:41.116956459Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 23:41:41.140243 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-801ace722aefaaf18777a6fe9b56c2b174db1aa913ebf29f8b989bc0ffd105b7-rootfs.mount: Deactivated successfully.
May 13 23:41:41.145093 containerd[1456]: time="2025-05-13T23:41:41.145037990Z" level=info msg="TaskExit event in podsandbox handler container_id:\"801ace722aefaaf18777a6fe9b56c2b174db1aa913ebf29f8b989bc0ffd105b7\" id:\"801ace722aefaaf18777a6fe9b56c2b174db1aa913ebf29f8b989bc0ffd105b7\" pid:2854 exit_status:137 exited_at:{seconds:1747179701 nanos:110552121}"
May 13 23:41:41.145600 containerd[1456]: time="2025-05-13T23:41:41.145571772Z" level=info msg="TearDown network for sandbox \"0fe0aa4e001614010e077c9f39d523530f99ad65a528af2c6a353d9fb9087bd8\" successfully"
May 13 23:41:41.145702 containerd[1456]: time="2025-05-13T23:41:41.145686176Z" level=info msg="StopPodSandbox for \"0fe0aa4e001614010e077c9f39d523530f99ad65a528af2c6a353d9fb9087bd8\" returns successfully"
May 13 23:41:41.146708 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0fe0aa4e001614010e077c9f39d523530f99ad65a528af2c6a353d9fb9087bd8-shm.mount: Deactivated successfully.
May 13 23:41:41.148769 containerd[1456]: time="2025-05-13T23:41:41.148580973Z" level=info msg="shim disconnected" id=801ace722aefaaf18777a6fe9b56c2b174db1aa913ebf29f8b989bc0ffd105b7 namespace=k8s.io
May 13 23:41:41.148769 containerd[1456]: time="2025-05-13T23:41:41.148607174Z" level=warning msg="cleaning up after shim disconnected" id=801ace722aefaaf18777a6fe9b56c2b174db1aa913ebf29f8b989bc0ffd105b7 namespace=k8s.io
May 13 23:41:41.148769 containerd[1456]: time="2025-05-13T23:41:41.148634255Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 23:41:41.150007 containerd[1456]: time="2025-05-13T23:41:41.149743700Z" level=info msg="TearDown network for sandbox \"801ace722aefaaf18777a6fe9b56c2b174db1aa913ebf29f8b989bc0ffd105b7\" successfully"
May 13 23:41:41.150007 containerd[1456]: time="2025-05-13T23:41:41.149777381Z" level=info msg="StopPodSandbox for \"801ace722aefaaf18777a6fe9b56c2b174db1aa913ebf29f8b989bc0ffd105b7\" returns successfully"
May 13 23:41:41.151608 containerd[1456]: time="2025-05-13T23:41:41.150851264Z" level=info msg="received exit event sandbox_id:\"0fe0aa4e001614010e077c9f39d523530f99ad65a528af2c6a353d9fb9087bd8\" exit_status:137 exited_at:{seconds:1747179701 nanos:75747560}"
May 13 23:41:41.151608 containerd[1456]: time="2025-05-13T23:41:41.150881185Z" level=info msg="received exit event sandbox_id:\"801ace722aefaaf18777a6fe9b56c2b174db1aa913ebf29f8b989bc0ffd105b7\" exit_status:137 exited_at:{seconds:1747179701 nanos:110552121}"
May 13 23:41:41.206031 kubelet[2692]: I0513 23:41:41.205992 2692 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/084b6c39-b361-4fe0-96e2-0ecbf480d1de-cilium-config-path\") pod \"084b6c39-b361-4fe0-96e2-0ecbf480d1de\" (UID: \"084b6c39-b361-4fe0-96e2-0ecbf480d1de\") "
May 13 23:41:41.206611 kubelet[2692]: I0513 23:41:41.206513 2692 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/084b6c39-b361-4fe0-96e2-0ecbf480d1de-etc-cni-netd\") pod \"084b6c39-b361-4fe0-96e2-0ecbf480d1de\" (UID: \"084b6c39-b361-4fe0-96e2-0ecbf480d1de\") "
May 13 23:41:41.206611 kubelet[2692]: I0513 23:41:41.206558 2692 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgqzp\" (UniqueName: \"kubernetes.io/projected/084b6c39-b361-4fe0-96e2-0ecbf480d1de-kube-api-access-bgqzp\") pod \"084b6c39-b361-4fe0-96e2-0ecbf480d1de\" (UID: \"084b6c39-b361-4fe0-96e2-0ecbf480d1de\") "
May 13 23:41:41.206611 kubelet[2692]: I0513 23:41:41.206618 2692 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/084b6c39-b361-4fe0-96e2-0ecbf480d1de-bpf-maps\") pod \"084b6c39-b361-4fe0-96e2-0ecbf480d1de\" (UID: \"084b6c39-b361-4fe0-96e2-0ecbf480d1de\") "
May 13 23:41:41.207384 kubelet[2692]: I0513 23:41:41.206636 2692 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/084b6c39-b361-4fe0-96e2-0ecbf480d1de-host-proc-sys-net\") pod \"084b6c39-b361-4fe0-96e2-0ecbf480d1de\" (UID: \"084b6c39-b361-4fe0-96e2-0ecbf480d1de\") "
May 13 23:41:41.207384 kubelet[2692]: I0513 23:41:41.206707 2692 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/084b6c39-b361-4fe0-96e2-0ecbf480d1de-hostproc\") pod \"084b6c39-b361-4fe0-96e2-0ecbf480d1de\" (UID: \"084b6c39-b361-4fe0-96e2-0ecbf480d1de\") "
May 13 23:41:41.207384 kubelet[2692]: I0513 23:41:41.206727 2692 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/084b6c39-b361-4fe0-96e2-0ecbf480d1de-clustermesh-secrets\") pod \"084b6c39-b361-4fe0-96e2-0ecbf480d1de\" (UID: \"084b6c39-b361-4fe0-96e2-0ecbf480d1de\") "
May 13 23:41:41.207384 kubelet[2692]: I0513 23:41:41.206743 2692 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/084b6c39-b361-4fe0-96e2-0ecbf480d1de-xtables-lock\") pod \"084b6c39-b361-4fe0-96e2-0ecbf480d1de\" (UID: \"084b6c39-b361-4fe0-96e2-0ecbf480d1de\") "
May 13 23:41:41.207384 kubelet[2692]: I0513 23:41:41.206773 2692 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/084b6c39-b361-4fe0-96e2-0ecbf480d1de-cilium-cgroup\") pod \"084b6c39-b361-4fe0-96e2-0ecbf480d1de\" (UID: \"084b6c39-b361-4fe0-96e2-0ecbf480d1de\") "
May 13 23:41:41.207384 kubelet[2692]: I0513 23:41:41.206793 2692 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e69cf628-0e94-4cf2-a53c-bd993019423b-cilium-config-path\") pod \"e69cf628-0e94-4cf2-a53c-bd993019423b\" (UID: \"e69cf628-0e94-4cf2-a53c-bd993019423b\") "
May 13 23:41:41.207548 kubelet[2692]: I0513 23:41:41.206810 2692 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/084b6c39-b361-4fe0-96e2-0ecbf480d1de-lib-modules\") pod \"084b6c39-b361-4fe0-96e2-0ecbf480d1de\" (UID: \"084b6c39-b361-4fe0-96e2-0ecbf480d1de\") "
May 13 23:41:41.207548 kubelet[2692]: I0513 23:41:41.206825 2692 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/084b6c39-b361-4fe0-96e2-0ecbf480d1de-cni-path\") pod \"084b6c39-b361-4fe0-96e2-0ecbf480d1de\" (UID: \"084b6c39-b361-4fe0-96e2-0ecbf480d1de\") "
May 13 23:41:41.207548 kubelet[2692]: I0513 23:41:41.206840 2692 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/084b6c39-b361-4fe0-96e2-0ecbf480d1de-host-proc-sys-kernel\") pod \"084b6c39-b361-4fe0-96e2-0ecbf480d1de\" (UID: \"084b6c39-b361-4fe0-96e2-0ecbf480d1de\") "
May 13 23:41:41.207548 kubelet[2692]: I0513 23:41:41.206868 2692 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjbf4\" (UniqueName: \"kubernetes.io/projected/e69cf628-0e94-4cf2-a53c-bd993019423b-kube-api-access-rjbf4\") pod \"e69cf628-0e94-4cf2-a53c-bd993019423b\" (UID: \"e69cf628-0e94-4cf2-a53c-bd993019423b\") "
May 13 23:41:41.207548 kubelet[2692]: I0513 23:41:41.206885 2692 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/084b6c39-b361-4fe0-96e2-0ecbf480d1de-hubble-tls\") pod \"084b6c39-b361-4fe0-96e2-0ecbf480d1de\" (UID: \"084b6c39-b361-4fe0-96e2-0ecbf480d1de\") "
May 13 23:41:41.207548 kubelet[2692]: I0513 23:41:41.206899 2692 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/084b6c39-b361-4fe0-96e2-0ecbf480d1de-cilium-run\") pod \"084b6c39-b361-4fe0-96e2-0ecbf480d1de\" (UID: \"084b6c39-b361-4fe0-96e2-0ecbf480d1de\") "
May 13 23:41:41.210520 kubelet[2692]: I0513 23:41:41.210490 2692 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/084b6c39-b361-4fe0-96e2-0ecbf480d1de-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "084b6c39-b361-4fe0-96e2-0ecbf480d1de" (UID: "084b6c39-b361-4fe0-96e2-0ecbf480d1de"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 23:41:41.210659 kubelet[2692]: I0513 23:41:41.210643 2692 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/084b6c39-b361-4fe0-96e2-0ecbf480d1de-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "084b6c39-b361-4fe0-96e2-0ecbf480d1de" (UID: "084b6c39-b361-4fe0-96e2-0ecbf480d1de"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 23:41:41.210734 kubelet[2692]: I0513 23:41:41.210721 2692 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/084b6c39-b361-4fe0-96e2-0ecbf480d1de-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "084b6c39-b361-4fe0-96e2-0ecbf480d1de" (UID: "084b6c39-b361-4fe0-96e2-0ecbf480d1de"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 23:41:41.210800 kubelet[2692]: I0513 23:41:41.210789 2692 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/084b6c39-b361-4fe0-96e2-0ecbf480d1de-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "084b6c39-b361-4fe0-96e2-0ecbf480d1de" (UID: "084b6c39-b361-4fe0-96e2-0ecbf480d1de"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 23:41:41.210869 kubelet[2692]: I0513 23:41:41.210857 2692 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/084b6c39-b361-4fe0-96e2-0ecbf480d1de-hostproc" (OuterVolumeSpecName: "hostproc") pod "084b6c39-b361-4fe0-96e2-0ecbf480d1de" (UID: "084b6c39-b361-4fe0-96e2-0ecbf480d1de"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 23:41:41.211669 kubelet[2692]: I0513 23:41:41.211610 2692 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/084b6c39-b361-4fe0-96e2-0ecbf480d1de-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "084b6c39-b361-4fe0-96e2-0ecbf480d1de" (UID: "084b6c39-b361-4fe0-96e2-0ecbf480d1de"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 23:41:41.215434 kubelet[2692]: I0513 23:41:41.215382 2692 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e69cf628-0e94-4cf2-a53c-bd993019423b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e69cf628-0e94-4cf2-a53c-bd993019423b" (UID: "e69cf628-0e94-4cf2-a53c-bd993019423b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 13 23:41:41.215525 kubelet[2692]: I0513 23:41:41.215451 2692 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/084b6c39-b361-4fe0-96e2-0ecbf480d1de-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "084b6c39-b361-4fe0-96e2-0ecbf480d1de" (UID: "084b6c39-b361-4fe0-96e2-0ecbf480d1de"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 23:41:41.215525 kubelet[2692]: I0513 23:41:41.215470 2692 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/084b6c39-b361-4fe0-96e2-0ecbf480d1de-cni-path" (OuterVolumeSpecName: "cni-path") pod "084b6c39-b361-4fe0-96e2-0ecbf480d1de" (UID: "084b6c39-b361-4fe0-96e2-0ecbf480d1de"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 23:41:41.215525 kubelet[2692]: I0513 23:41:41.215485 2692 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/084b6c39-b361-4fe0-96e2-0ecbf480d1de-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "084b6c39-b361-4fe0-96e2-0ecbf480d1de" (UID: "084b6c39-b361-4fe0-96e2-0ecbf480d1de"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 23:41:41.216391 kubelet[2692]: I0513 23:41:41.216137 2692 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/084b6c39-b361-4fe0-96e2-0ecbf480d1de-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "084b6c39-b361-4fe0-96e2-0ecbf480d1de" (UID: "084b6c39-b361-4fe0-96e2-0ecbf480d1de"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 13 23:41:41.216391 kubelet[2692]: I0513 23:41:41.216138 2692 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/084b6c39-b361-4fe0-96e2-0ecbf480d1de-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "084b6c39-b361-4fe0-96e2-0ecbf480d1de" (UID: "084b6c39-b361-4fe0-96e2-0ecbf480d1de"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 13 23:41:41.216391 kubelet[2692]: I0513 23:41:41.216168 2692 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/084b6c39-b361-4fe0-96e2-0ecbf480d1de-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "084b6c39-b361-4fe0-96e2-0ecbf480d1de" (UID: "084b6c39-b361-4fe0-96e2-0ecbf480d1de"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 23:41:41.217823 kubelet[2692]: I0513 23:41:41.217794 2692 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e69cf628-0e94-4cf2-a53c-bd993019423b-kube-api-access-rjbf4" (OuterVolumeSpecName: "kube-api-access-rjbf4") pod "e69cf628-0e94-4cf2-a53c-bd993019423b" (UID: "e69cf628-0e94-4cf2-a53c-bd993019423b"). InnerVolumeSpecName "kube-api-access-rjbf4". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 13 23:41:41.218252 kubelet[2692]: I0513 23:41:41.218217 2692 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/084b6c39-b361-4fe0-96e2-0ecbf480d1de-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "084b6c39-b361-4fe0-96e2-0ecbf480d1de" (UID: "084b6c39-b361-4fe0-96e2-0ecbf480d1de"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 13 23:41:41.220594 kubelet[2692]: I0513 23:41:41.220565 2692 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/084b6c39-b361-4fe0-96e2-0ecbf480d1de-kube-api-access-bgqzp" (OuterVolumeSpecName: "kube-api-access-bgqzp") pod "084b6c39-b361-4fe0-96e2-0ecbf480d1de" (UID: "084b6c39-b361-4fe0-96e2-0ecbf480d1de"). InnerVolumeSpecName "kube-api-access-bgqzp". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 13 23:41:41.308049 kubelet[2692]: I0513 23:41:41.308007 2692 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/084b6c39-b361-4fe0-96e2-0ecbf480d1de-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
May 13 23:41:41.308049 kubelet[2692]: I0513 23:41:41.308041 2692 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-bgqzp\" (UniqueName: \"kubernetes.io/projected/084b6c39-b361-4fe0-96e2-0ecbf480d1de-kube-api-access-bgqzp\") on node \"localhost\" DevicePath \"\""
May 13 23:41:41.308049 kubelet[2692]: I0513 23:41:41.308056 2692 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/084b6c39-b361-4fe0-96e2-0ecbf480d1de-bpf-maps\") on node \"localhost\" DevicePath \"\""
May 13 23:41:41.308214 kubelet[2692]: I0513 23:41:41.308064 2692 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/084b6c39-b361-4fe0-96e2-0ecbf480d1de-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
May 13 23:41:41.308214 kubelet[2692]: I0513 23:41:41.308072 2692 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/084b6c39-b361-4fe0-96e2-0ecbf480d1de-hostproc\") on node \"localhost\" DevicePath \"\""
May 13 23:41:41.308214 kubelet[2692]: I0513 23:41:41.308081 2692 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/084b6c39-b361-4fe0-96e2-0ecbf480d1de-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
May 13 23:41:41.308214 kubelet[2692]: I0513 23:41:41.308091 2692 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/084b6c39-b361-4fe0-96e2-0ecbf480d1de-xtables-lock\") on node \"localhost\" DevicePath \"\""
May 13 23:41:41.308214 kubelet[2692]: I0513 23:41:41.308099 2692 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/084b6c39-b361-4fe0-96e2-0ecbf480d1de-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
May 13 23:41:41.308214 kubelet[2692]: I0513 23:41:41.308107 2692 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e69cf628-0e94-4cf2-a53c-bd993019423b-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 13 23:41:41.308214 kubelet[2692]: I0513 23:41:41.308113 2692 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/084b6c39-b361-4fe0-96e2-0ecbf480d1de-lib-modules\") on node \"localhost\" DevicePath \"\""
May 13 23:41:41.308214 kubelet[2692]: I0513 23:41:41.308130 2692 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/084b6c39-b361-4fe0-96e2-0ecbf480d1de-cni-path\") on node \"localhost\" DevicePath \"\""
May 13 23:41:41.308381 kubelet[2692]: I0513 23:41:41.308138 2692 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/084b6c39-b361-4fe0-96e2-0ecbf480d1de-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
May 13 23:41:41.308381 kubelet[2692]: I0513 23:41:41.308147 2692 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-rjbf4\" (UniqueName: \"kubernetes.io/projected/e69cf628-0e94-4cf2-a53c-bd993019423b-kube-api-access-rjbf4\") on node \"localhost\" DevicePath \"\""
May 13 23:41:41.308381 kubelet[2692]: I0513 23:41:41.308154 2692 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/084b6c39-b361-4fe0-96e2-0ecbf480d1de-hubble-tls\") on node \"localhost\" DevicePath \"\""
May 13 23:41:41.308381 kubelet[2692]: I0513 23:41:41.308161 2692 reconciler_common.go:289] "Volume detached for volume \"cilium-run\"
(UniqueName: \"kubernetes.io/host-path/084b6c39-b361-4fe0-96e2-0ecbf480d1de-cilium-run\") on node \"localhost\" DevicePath \"\"" May 13 23:41:41.308381 kubelet[2692]: I0513 23:41:41.308168 2692 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/084b6c39-b361-4fe0-96e2-0ecbf480d1de-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 13 23:41:41.795621 systemd[1]: Removed slice kubepods-besteffort-pode69cf628_0e94_4cf2_a53c_bd993019423b.slice - libcontainer container kubepods-besteffort-pode69cf628_0e94_4cf2_a53c_bd993019423b.slice. May 13 23:41:41.797441 systemd[1]: Removed slice kubepods-burstable-pod084b6c39_b361_4fe0_96e2_0ecbf480d1de.slice - libcontainer container kubepods-burstable-pod084b6c39_b361_4fe0_96e2_0ecbf480d1de.slice. May 13 23:41:41.797560 systemd[1]: kubepods-burstable-pod084b6c39_b361_4fe0_96e2_0ecbf480d1de.slice: Consumed 7.174s CPU time, 122.2M memory peak, 288K read from disk, 12.9M written to disk. May 13 23:41:41.995175 kubelet[2692]: I0513 23:41:41.994437 2692 scope.go:117] "RemoveContainer" containerID="bc6350db48a5347cff0cf7965a57769d06eee976328b91a269775c615a6c8001" May 13 23:41:41.998065 containerd[1456]: time="2025-05-13T23:41:41.998030460Z" level=info msg="RemoveContainer for \"bc6350db48a5347cff0cf7965a57769d06eee976328b91a269775c615a6c8001\"" May 13 23:41:42.030008 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-801ace722aefaaf18777a6fe9b56c2b174db1aa913ebf29f8b989bc0ffd105b7-shm.mount: Deactivated successfully. May 13 23:41:42.030209 systemd[1]: var-lib-kubelet-pods-e69cf628\x2d0e94\x2d4cf2\x2da53c\x2dbd993019423b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drjbf4.mount: Deactivated successfully. May 13 23:41:42.030319 systemd[1]: var-lib-kubelet-pods-084b6c39\x2db361\x2d4fe0\x2d96e2\x2d0ecbf480d1de-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbgqzp.mount: Deactivated successfully. 
May 13 23:41:42.030386 systemd[1]: var-lib-kubelet-pods-084b6c39\x2db361\x2d4fe0\x2d96e2\x2d0ecbf480d1de-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 13 23:41:42.030458 systemd[1]: var-lib-kubelet-pods-084b6c39\x2db361\x2d4fe0\x2d96e2\x2d0ecbf480d1de-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 13 23:41:42.058291 containerd[1456]: time="2025-05-13T23:41:42.058222383Z" level=info msg="RemoveContainer for \"bc6350db48a5347cff0cf7965a57769d06eee976328b91a269775c615a6c8001\" returns successfully" May 13 23:41:42.058718 kubelet[2692]: I0513 23:41:42.058562 2692 scope.go:117] "RemoveContainer" containerID="f81df2d58679a3c7037f7f5e038893cbe38aa9020362cd6631308e2dd8563003" May 13 23:41:42.059995 containerd[1456]: time="2025-05-13T23:41:42.059968251Z" level=info msg="RemoveContainer for \"f81df2d58679a3c7037f7f5e038893cbe38aa9020362cd6631308e2dd8563003\"" May 13 23:41:42.104869 containerd[1456]: time="2025-05-13T23:41:42.104599640Z" level=info msg="RemoveContainer for \"f81df2d58679a3c7037f7f5e038893cbe38aa9020362cd6631308e2dd8563003\" returns successfully" May 13 23:41:42.105012 kubelet[2692]: I0513 23:41:42.104942 2692 scope.go:117] "RemoveContainer" containerID="e1199efd0ffafa39492c55e4cdef6b023c1f4a83a6ea89b12b4de088e08a57a3" May 13 23:41:42.110465 containerd[1456]: time="2025-05-13T23:41:42.109564715Z" level=info msg="RemoveContainer for \"e1199efd0ffafa39492c55e4cdef6b023c1f4a83a6ea89b12b4de088e08a57a3\"" May 13 23:41:42.159749 containerd[1456]: time="2025-05-13T23:41:42.159692799Z" level=info msg="RemoveContainer for \"e1199efd0ffafa39492c55e4cdef6b023c1f4a83a6ea89b12b4de088e08a57a3\" returns successfully" May 13 23:41:42.159981 kubelet[2692]: I0513 23:41:42.159942 2692 scope.go:117] "RemoveContainer" containerID="50619dd689a3ae297a332eab229e342b669a77bf3d99874d99e3cf8f9e644dc4" May 13 23:41:42.161526 containerd[1456]: time="2025-05-13T23:41:42.161481789Z" level=info 
msg="RemoveContainer for \"50619dd689a3ae297a332eab229e342b669a77bf3d99874d99e3cf8f9e644dc4\"" May 13 23:41:42.240496 containerd[1456]: time="2025-05-13T23:41:42.240436844Z" level=info msg="RemoveContainer for \"50619dd689a3ae297a332eab229e342b669a77bf3d99874d99e3cf8f9e644dc4\" returns successfully" May 13 23:41:42.240815 kubelet[2692]: I0513 23:41:42.240786 2692 scope.go:117] "RemoveContainer" containerID="d531e436754679fa936d32ba22f122419ba073c195ba543525fdef71ff54f33c" May 13 23:41:42.242344 containerd[1456]: time="2025-05-13T23:41:42.242304717Z" level=info msg="RemoveContainer for \"d531e436754679fa936d32ba22f122419ba073c195ba543525fdef71ff54f33c\"" May 13 23:41:42.331475 containerd[1456]: time="2025-05-13T23:41:42.331353767Z" level=info msg="RemoveContainer for \"d531e436754679fa936d32ba22f122419ba073c195ba543525fdef71ff54f33c\" returns successfully" May 13 23:41:42.331777 kubelet[2692]: I0513 23:41:42.331620 2692 scope.go:117] "RemoveContainer" containerID="bc6350db48a5347cff0cf7965a57769d06eee976328b91a269775c615a6c8001" May 13 23:41:42.331939 containerd[1456]: time="2025-05-13T23:41:42.331860147Z" level=error msg="ContainerStatus for \"bc6350db48a5347cff0cf7965a57769d06eee976328b91a269775c615a6c8001\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bc6350db48a5347cff0cf7965a57769d06eee976328b91a269775c615a6c8001\": not found" May 13 23:41:42.339267 kubelet[2692]: E0513 23:41:42.339214 2692 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bc6350db48a5347cff0cf7965a57769d06eee976328b91a269775c615a6c8001\": not found" containerID="bc6350db48a5347cff0cf7965a57769d06eee976328b91a269775c615a6c8001" May 13 23:41:42.339349 kubelet[2692]: I0513 23:41:42.339272 2692 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"bc6350db48a5347cff0cf7965a57769d06eee976328b91a269775c615a6c8001"} err="failed to get container status \"bc6350db48a5347cff0cf7965a57769d06eee976328b91a269775c615a6c8001\": rpc error: code = NotFound desc = an error occurred when try to find container \"bc6350db48a5347cff0cf7965a57769d06eee976328b91a269775c615a6c8001\": not found" May 13 23:41:42.339378 kubelet[2692]: I0513 23:41:42.339353 2692 scope.go:117] "RemoveContainer" containerID="f81df2d58679a3c7037f7f5e038893cbe38aa9020362cd6631308e2dd8563003" May 13 23:41:42.340116 containerd[1456]: time="2025-05-13T23:41:42.340073268Z" level=error msg="ContainerStatus for \"f81df2d58679a3c7037f7f5e038893cbe38aa9020362cd6631308e2dd8563003\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f81df2d58679a3c7037f7f5e038893cbe38aa9020362cd6631308e2dd8563003\": not found" May 13 23:41:42.340259 kubelet[2692]: E0513 23:41:42.340225 2692 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f81df2d58679a3c7037f7f5e038893cbe38aa9020362cd6631308e2dd8563003\": not found" containerID="f81df2d58679a3c7037f7f5e038893cbe38aa9020362cd6631308e2dd8563003" May 13 23:41:42.340300 kubelet[2692]: I0513 23:41:42.340267 2692 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f81df2d58679a3c7037f7f5e038893cbe38aa9020362cd6631308e2dd8563003"} err="failed to get container status \"f81df2d58679a3c7037f7f5e038893cbe38aa9020362cd6631308e2dd8563003\": rpc error: code = NotFound desc = an error occurred when try to find container \"f81df2d58679a3c7037f7f5e038893cbe38aa9020362cd6631308e2dd8563003\": not found" May 13 23:41:42.340326 kubelet[2692]: I0513 23:41:42.340306 2692 scope.go:117] "RemoveContainer" containerID="e1199efd0ffafa39492c55e4cdef6b023c1f4a83a6ea89b12b4de088e08a57a3" May 13 23:41:42.342700 
containerd[1456]: time="2025-05-13T23:41:42.342654410Z" level=error msg="ContainerStatus for \"e1199efd0ffafa39492c55e4cdef6b023c1f4a83a6ea89b12b4de088e08a57a3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e1199efd0ffafa39492c55e4cdef6b023c1f4a83a6ea89b12b4de088e08a57a3\": not found" May 13 23:41:42.343269 kubelet[2692]: E0513 23:41:42.343130 2692 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e1199efd0ffafa39492c55e4cdef6b023c1f4a83a6ea89b12b4de088e08a57a3\": not found" containerID="e1199efd0ffafa39492c55e4cdef6b023c1f4a83a6ea89b12b4de088e08a57a3" May 13 23:41:42.343269 kubelet[2692]: I0513 23:41:42.343170 2692 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e1199efd0ffafa39492c55e4cdef6b023c1f4a83a6ea89b12b4de088e08a57a3"} err="failed to get container status \"e1199efd0ffafa39492c55e4cdef6b023c1f4a83a6ea89b12b4de088e08a57a3\": rpc error: code = NotFound desc = an error occurred when try to find container \"e1199efd0ffafa39492c55e4cdef6b023c1f4a83a6ea89b12b4de088e08a57a3\": not found" May 13 23:41:42.343269 kubelet[2692]: I0513 23:41:42.343189 2692 scope.go:117] "RemoveContainer" containerID="50619dd689a3ae297a332eab229e342b669a77bf3d99874d99e3cf8f9e644dc4" May 13 23:41:42.343969 containerd[1456]: time="2025-05-13T23:41:42.343937980Z" level=error msg="ContainerStatus for \"50619dd689a3ae297a332eab229e342b669a77bf3d99874d99e3cf8f9e644dc4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"50619dd689a3ae297a332eab229e342b669a77bf3d99874d99e3cf8f9e644dc4\": not found" May 13 23:41:42.344530 kubelet[2692]: E0513 23:41:42.344482 2692 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"50619dd689a3ae297a332eab229e342b669a77bf3d99874d99e3cf8f9e644dc4\": not found" containerID="50619dd689a3ae297a332eab229e342b669a77bf3d99874d99e3cf8f9e644dc4" May 13 23:41:42.344610 kubelet[2692]: I0513 23:41:42.344527 2692 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"50619dd689a3ae297a332eab229e342b669a77bf3d99874d99e3cf8f9e644dc4"} err="failed to get container status \"50619dd689a3ae297a332eab229e342b669a77bf3d99874d99e3cf8f9e644dc4\": rpc error: code = NotFound desc = an error occurred when try to find container \"50619dd689a3ae297a332eab229e342b669a77bf3d99874d99e3cf8f9e644dc4\": not found" May 13 23:41:42.344610 kubelet[2692]: I0513 23:41:42.344547 2692 scope.go:117] "RemoveContainer" containerID="d531e436754679fa936d32ba22f122419ba073c195ba543525fdef71ff54f33c" May 13 23:41:42.344958 containerd[1456]: time="2025-05-13T23:41:42.344856776Z" level=error msg="ContainerStatus for \"d531e436754679fa936d32ba22f122419ba073c195ba543525fdef71ff54f33c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d531e436754679fa936d32ba22f122419ba073c195ba543525fdef71ff54f33c\": not found" May 13 23:41:42.345062 kubelet[2692]: E0513 23:41:42.345019 2692 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d531e436754679fa936d32ba22f122419ba073c195ba543525fdef71ff54f33c\": not found" containerID="d531e436754679fa936d32ba22f122419ba073c195ba543525fdef71ff54f33c" May 13 23:41:42.345062 kubelet[2692]: I0513 23:41:42.345044 2692 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d531e436754679fa936d32ba22f122419ba073c195ba543525fdef71ff54f33c"} err="failed to get container status \"d531e436754679fa936d32ba22f122419ba073c195ba543525fdef71ff54f33c\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"d531e436754679fa936d32ba22f122419ba073c195ba543525fdef71ff54f33c\": not found" May 13 23:41:42.345062 kubelet[2692]: I0513 23:41:42.345060 2692 scope.go:117] "RemoveContainer" containerID="85d577688454a4cb11b75e4f77ade12e4277224416d291875353718dae7344c3" May 13 23:41:42.347911 containerd[1456]: time="2025-05-13T23:41:42.347871254Z" level=info msg="RemoveContainer for \"85d577688454a4cb11b75e4f77ade12e4277224416d291875353718dae7344c3\"" May 13 23:41:42.385199 containerd[1456]: time="2025-05-13T23:41:42.385151595Z" level=info msg="RemoveContainer for \"85d577688454a4cb11b75e4f77ade12e4277224416d291875353718dae7344c3\" returns successfully" May 13 23:41:42.385584 kubelet[2692]: I0513 23:41:42.385457 2692 scope.go:117] "RemoveContainer" containerID="85d577688454a4cb11b75e4f77ade12e4277224416d291875353718dae7344c3" May 13 23:41:42.385753 containerd[1456]: time="2025-05-13T23:41:42.385699177Z" level=error msg="ContainerStatus for \"85d577688454a4cb11b75e4f77ade12e4277224416d291875353718dae7344c3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"85d577688454a4cb11b75e4f77ade12e4277224416d291875353718dae7344c3\": not found" May 13 23:41:42.385879 kubelet[2692]: E0513 23:41:42.385833 2692 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"85d577688454a4cb11b75e4f77ade12e4277224416d291875353718dae7344c3\": not found" containerID="85d577688454a4cb11b75e4f77ade12e4277224416d291875353718dae7344c3" May 13 23:41:42.385879 kubelet[2692]: I0513 23:41:42.385865 2692 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"85d577688454a4cb11b75e4f77ade12e4277224416d291875353718dae7344c3"} err="failed to get container status \"85d577688454a4cb11b75e4f77ade12e4277224416d291875353718dae7344c3\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"85d577688454a4cb11b75e4f77ade12e4277224416d291875353718dae7344c3\": not found" May 13 23:41:42.846825 sshd[4282]: Connection closed by 10.0.0.1 port 41630 May 13 23:41:42.846371 sshd-session[4279]: pam_unix(sshd:session): session closed for user core May 13 23:41:42.857813 systemd[1]: sshd@22-10.0.0.42:22-10.0.0.1:41630.service: Deactivated successfully. May 13 23:41:42.859725 systemd[1]: session-23.scope: Deactivated successfully. May 13 23:41:42.861495 systemd[1]: session-23.scope: Consumed 1.011s CPU time, 24.5M memory peak. May 13 23:41:42.862023 systemd-logind[1433]: Session 23 logged out. Waiting for processes to exit. May 13 23:41:42.863858 systemd[1]: Started sshd@23-10.0.0.42:22-10.0.0.1:42238.service - OpenSSH per-connection server daemon (10.0.0.1:42238). May 13 23:41:42.864653 systemd-logind[1433]: Removed session 23. May 13 23:41:42.922932 sshd[4431]: Accepted publickey for core from 10.0.0.1 port 42238 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:41:42.924307 sshd-session[4431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:41:42.928797 systemd-logind[1433]: New session 24 of user core. May 13 23:41:42.939622 systemd[1]: Started session-24.scope - Session 24 of User core. 
May 13 23:41:43.794105 kubelet[2692]: I0513 23:41:43.794052 2692 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="084b6c39-b361-4fe0-96e2-0ecbf480d1de" path="/var/lib/kubelet/pods/084b6c39-b361-4fe0-96e2-0ecbf480d1de/volumes" May 13 23:41:43.794678 kubelet[2692]: I0513 23:41:43.794647 2692 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e69cf628-0e94-4cf2-a53c-bd993019423b" path="/var/lib/kubelet/pods/e69cf628-0e94-4cf2-a53c-bd993019423b/volumes" May 13 23:41:44.034490 sshd[4434]: Connection closed by 10.0.0.1 port 42238 May 13 23:41:44.036332 sshd-session[4431]: pam_unix(sshd:session): session closed for user core May 13 23:41:44.050822 systemd[1]: sshd@23-10.0.0.42:22-10.0.0.1:42238.service: Deactivated successfully. May 13 23:41:44.053383 systemd[1]: session-24.scope: Deactivated successfully. May 13 23:41:44.057016 systemd-logind[1433]: Session 24 logged out. Waiting for processes to exit. May 13 23:41:44.062677 kubelet[2692]: I0513 23:41:44.062630 2692 topology_manager.go:215] "Topology Admit Handler" podUID="e47c952c-2ea7-45fb-a81c-d685522e5f42" podNamespace="kube-system" podName="cilium-mf9vt" May 13 23:41:44.062777 kubelet[2692]: E0513 23:41:44.062749 2692 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="084b6c39-b361-4fe0-96e2-0ecbf480d1de" containerName="cilium-agent" May 13 23:41:44.062777 kubelet[2692]: E0513 23:41:44.062760 2692 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="084b6c39-b361-4fe0-96e2-0ecbf480d1de" containerName="mount-cgroup" May 13 23:41:44.062777 kubelet[2692]: E0513 23:41:44.062765 2692 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="084b6c39-b361-4fe0-96e2-0ecbf480d1de" containerName="mount-bpf-fs" May 13 23:41:44.062777 kubelet[2692]: E0513 23:41:44.062770 2692 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="084b6c39-b361-4fe0-96e2-0ecbf480d1de" containerName="clean-cilium-state" May 13 23:41:44.062777 
kubelet[2692]: E0513 23:41:44.062778 2692 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="084b6c39-b361-4fe0-96e2-0ecbf480d1de" containerName="apply-sysctl-overwrites" May 13 23:41:44.062892 kubelet[2692]: E0513 23:41:44.062784 2692 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e69cf628-0e94-4cf2-a53c-bd993019423b" containerName="cilium-operator" May 13 23:41:44.062892 kubelet[2692]: I0513 23:41:44.062808 2692 memory_manager.go:354] "RemoveStaleState removing state" podUID="e69cf628-0e94-4cf2-a53c-bd993019423b" containerName="cilium-operator" May 13 23:41:44.062892 kubelet[2692]: I0513 23:41:44.062814 2692 memory_manager.go:354] "RemoveStaleState removing state" podUID="084b6c39-b361-4fe0-96e2-0ecbf480d1de" containerName="cilium-agent" May 13 23:41:44.063566 systemd[1]: Started sshd@24-10.0.0.42:22-10.0.0.1:42254.service - OpenSSH per-connection server daemon (10.0.0.1:42254). May 13 23:41:44.065589 systemd-logind[1433]: Removed session 24. May 13 23:41:44.069014 kubelet[2692]: W0513 23:41:44.068951 2692 reflector.go:547] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object May 13 23:41:44.069014 kubelet[2692]: E0513 23:41:44.068991 2692 reflector.go:150] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object May 13 23:41:44.088848 systemd[1]: Created slice kubepods-burstable-pode47c952c_2ea7_45fb_a81c_d685522e5f42.slice - libcontainer container kubepods-burstable-pode47c952c_2ea7_45fb_a81c_d685522e5f42.slice. 
May 13 23:41:44.127926 kubelet[2692]: I0513 23:41:44.127885 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e47c952c-2ea7-45fb-a81c-d685522e5f42-cilium-config-path\") pod \"cilium-mf9vt\" (UID: \"e47c952c-2ea7-45fb-a81c-d685522e5f42\") " pod="kube-system/cilium-mf9vt" May 13 23:41:44.129543 kubelet[2692]: I0513 23:41:44.129496 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e47c952c-2ea7-45fb-a81c-d685522e5f42-bpf-maps\") pod \"cilium-mf9vt\" (UID: \"e47c952c-2ea7-45fb-a81c-d685522e5f42\") " pod="kube-system/cilium-mf9vt" May 13 23:41:44.129543 kubelet[2692]: I0513 23:41:44.129542 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e47c952c-2ea7-45fb-a81c-d685522e5f42-hostproc\") pod \"cilium-mf9vt\" (UID: \"e47c952c-2ea7-45fb-a81c-d685522e5f42\") " pod="kube-system/cilium-mf9vt" May 13 23:41:44.129680 kubelet[2692]: I0513 23:41:44.129562 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e47c952c-2ea7-45fb-a81c-d685522e5f42-cilium-cgroup\") pod \"cilium-mf9vt\" (UID: \"e47c952c-2ea7-45fb-a81c-d685522e5f42\") " pod="kube-system/cilium-mf9vt" May 13 23:41:44.129680 kubelet[2692]: I0513 23:41:44.129593 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e47c952c-2ea7-45fb-a81c-d685522e5f42-lib-modules\") pod \"cilium-mf9vt\" (UID: \"e47c952c-2ea7-45fb-a81c-d685522e5f42\") " pod="kube-system/cilium-mf9vt" May 13 23:41:44.129680 kubelet[2692]: I0513 23:41:44.129627 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e47c952c-2ea7-45fb-a81c-d685522e5f42-host-proc-sys-net\") pod \"cilium-mf9vt\" (UID: \"e47c952c-2ea7-45fb-a81c-d685522e5f42\") " pod="kube-system/cilium-mf9vt" May 13 23:41:44.129680 kubelet[2692]: I0513 23:41:44.129648 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e47c952c-2ea7-45fb-a81c-d685522e5f42-cilium-run\") pod \"cilium-mf9vt\" (UID: \"e47c952c-2ea7-45fb-a81c-d685522e5f42\") " pod="kube-system/cilium-mf9vt" May 13 23:41:44.129774 kubelet[2692]: I0513 23:41:44.129681 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e47c952c-2ea7-45fb-a81c-d685522e5f42-cni-path\") pod \"cilium-mf9vt\" (UID: \"e47c952c-2ea7-45fb-a81c-d685522e5f42\") " pod="kube-system/cilium-mf9vt" May 13 23:41:44.129774 kubelet[2692]: I0513 23:41:44.129711 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgpw7\" (UniqueName: \"kubernetes.io/projected/e47c952c-2ea7-45fb-a81c-d685522e5f42-kube-api-access-vgpw7\") pod \"cilium-mf9vt\" (UID: \"e47c952c-2ea7-45fb-a81c-d685522e5f42\") " pod="kube-system/cilium-mf9vt" May 13 23:41:44.129774 kubelet[2692]: I0513 23:41:44.129734 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e47c952c-2ea7-45fb-a81c-d685522e5f42-hubble-tls\") pod \"cilium-mf9vt\" (UID: \"e47c952c-2ea7-45fb-a81c-d685522e5f42\") " pod="kube-system/cilium-mf9vt" May 13 23:41:44.129774 kubelet[2692]: I0513 23:41:44.129763 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e47c952c-2ea7-45fb-a81c-d685522e5f42-xtables-lock\") pod 
\"cilium-mf9vt\" (UID: \"e47c952c-2ea7-45fb-a81c-d685522e5f42\") " pod="kube-system/cilium-mf9vt" May 13 23:41:44.129859 kubelet[2692]: I0513 23:41:44.129788 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e47c952c-2ea7-45fb-a81c-d685522e5f42-host-proc-sys-kernel\") pod \"cilium-mf9vt\" (UID: \"e47c952c-2ea7-45fb-a81c-d685522e5f42\") " pod="kube-system/cilium-mf9vt" May 13 23:41:44.129859 kubelet[2692]: I0513 23:41:44.129819 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e47c952c-2ea7-45fb-a81c-d685522e5f42-cilium-ipsec-secrets\") pod \"cilium-mf9vt\" (UID: \"e47c952c-2ea7-45fb-a81c-d685522e5f42\") " pod="kube-system/cilium-mf9vt" May 13 23:41:44.129859 kubelet[2692]: I0513 23:41:44.129849 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e47c952c-2ea7-45fb-a81c-d685522e5f42-etc-cni-netd\") pod \"cilium-mf9vt\" (UID: \"e47c952c-2ea7-45fb-a81c-d685522e5f42\") " pod="kube-system/cilium-mf9vt" May 13 23:41:44.129925 kubelet[2692]: I0513 23:41:44.129869 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e47c952c-2ea7-45fb-a81c-d685522e5f42-clustermesh-secrets\") pod \"cilium-mf9vt\" (UID: \"e47c952c-2ea7-45fb-a81c-d685522e5f42\") " pod="kube-system/cilium-mf9vt" May 13 23:41:44.135823 sshd[4448]: Accepted publickey for core from 10.0.0.1 port 42254 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:41:44.139257 sshd-session[4448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:41:44.148487 systemd-logind[1433]: New session 25 of user core. 
May 13 23:41:44.157732 systemd[1]: Started session-25.scope - Session 25 of User core. May 13 23:41:44.211896 sshd[4451]: Connection closed by 10.0.0.1 port 42254 May 13 23:41:44.213795 sshd-session[4448]: pam_unix(sshd:session): session closed for user core May 13 23:41:44.244340 systemd[1]: sshd@24-10.0.0.42:22-10.0.0.1:42254.service: Deactivated successfully. May 13 23:41:44.248189 systemd[1]: session-25.scope: Deactivated successfully. May 13 23:41:44.250860 systemd-logind[1433]: Session 25 logged out. Waiting for processes to exit. May 13 23:41:44.254859 systemd[1]: Started sshd@25-10.0.0.42:22-10.0.0.1:42258.service - OpenSSH per-connection server daemon (10.0.0.1:42258). May 13 23:41:44.261985 systemd-logind[1433]: Removed session 25. May 13 23:41:44.321769 sshd[4460]: Accepted publickey for core from 10.0.0.1 port 42258 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:41:44.324002 sshd-session[4460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:41:44.329554 systemd-logind[1433]: New session 26 of user core. May 13 23:41:44.335692 systemd[1]: Started session-26.scope - Session 26 of User core. May 13 23:41:45.293094 containerd[1456]: time="2025-05-13T23:41:45.293045128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mf9vt,Uid:e47c952c-2ea7-45fb-a81c-d685522e5f42,Namespace:kube-system,Attempt:0,}" May 13 23:41:45.316490 containerd[1456]: time="2025-05-13T23:41:45.316437054Z" level=info msg="connecting to shim cb24fc80f22d38e875a16fca804952d1bad954b7c9e4cbac649b8a0acfcff2b6" address="unix:///run/containerd/s/09f265182ac82cd5e1dbd3a3dc52ddd3dc4a661eb1e2bdcbaa595ae91e17c4a0" namespace=k8s.io protocol=ttrpc version=3 May 13 23:41:45.353720 systemd[1]: Started cri-containerd-cb24fc80f22d38e875a16fca804952d1bad954b7c9e4cbac649b8a0acfcff2b6.scope - libcontainer container cb24fc80f22d38e875a16fca804952d1bad954b7c9e4cbac649b8a0acfcff2b6. 
May 13 23:41:45.384693 containerd[1456]: time="2025-05-13T23:41:45.384644840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mf9vt,Uid:e47c952c-2ea7-45fb-a81c-d685522e5f42,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb24fc80f22d38e875a16fca804952d1bad954b7c9e4cbac649b8a0acfcff2b6\"" May 13 23:41:45.387790 containerd[1456]: time="2025-05-13T23:41:45.387725471Z" level=info msg="CreateContainer within sandbox \"cb24fc80f22d38e875a16fca804952d1bad954b7c9e4cbac649b8a0acfcff2b6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 23:41:45.413240 containerd[1456]: time="2025-05-13T23:41:45.412522208Z" level=info msg="Container a4167087d607907bd7e5851d1b428e112615be86e473b7ca6ef5f82b7f187339: CDI devices from CRI Config.CDIDevices: []" May 13 23:41:45.419258 containerd[1456]: time="2025-05-13T23:41:45.419210129Z" level=info msg="CreateContainer within sandbox \"cb24fc80f22d38e875a16fca804952d1bad954b7c9e4cbac649b8a0acfcff2b6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a4167087d607907bd7e5851d1b428e112615be86e473b7ca6ef5f82b7f187339\"" May 13 23:41:45.420086 containerd[1456]: time="2025-05-13T23:41:45.420052160Z" level=info msg="StartContainer for \"a4167087d607907bd7e5851d1b428e112615be86e473b7ca6ef5f82b7f187339\"" May 13 23:41:45.421156 containerd[1456]: time="2025-05-13T23:41:45.421098598Z" level=info msg="connecting to shim a4167087d607907bd7e5851d1b428e112615be86e473b7ca6ef5f82b7f187339" address="unix:///run/containerd/s/09f265182ac82cd5e1dbd3a3dc52ddd3dc4a661eb1e2bdcbaa595ae91e17c4a0" protocol=ttrpc version=3 May 13 23:41:45.440197 systemd[1]: Started cri-containerd-a4167087d607907bd7e5851d1b428e112615be86e473b7ca6ef5f82b7f187339.scope - libcontainer container a4167087d607907bd7e5851d1b428e112615be86e473b7ca6ef5f82b7f187339. 
May 13 23:41:45.478448 containerd[1456]: time="2025-05-13T23:41:45.476276752Z" level=info msg="StartContainer for \"a4167087d607907bd7e5851d1b428e112615be86e473b7ca6ef5f82b7f187339\" returns successfully"
May 13 23:41:45.514068 systemd[1]: cri-containerd-a4167087d607907bd7e5851d1b428e112615be86e473b7ca6ef5f82b7f187339.scope: Deactivated successfully.
May 13 23:41:45.518076 containerd[1456]: time="2025-05-13T23:41:45.518025182Z" level=info msg="received exit event container_id:\"a4167087d607907bd7e5851d1b428e112615be86e473b7ca6ef5f82b7f187339\" id:\"a4167087d607907bd7e5851d1b428e112615be86e473b7ca6ef5f82b7f187339\" pid:4530 exited_at:{seconds:1747179705 nanos:517051747}"
May 13 23:41:45.518277 containerd[1456]: time="2025-05-13T23:41:45.518249910Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a4167087d607907bd7e5851d1b428e112615be86e473b7ca6ef5f82b7f187339\" id:\"a4167087d607907bd7e5851d1b428e112615be86e473b7ca6ef5f82b7f187339\" pid:4530 exited_at:{seconds:1747179705 nanos:517051747}"
May 13 23:41:45.846882 kubelet[2692]: E0513 23:41:45.846800 2692 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 13 23:41:46.013596 containerd[1456]: time="2025-05-13T23:41:46.013549486Z" level=info msg="CreateContainer within sandbox \"cb24fc80f22d38e875a16fca804952d1bad954b7c9e4cbac649b8a0acfcff2b6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 13 23:41:46.022231 containerd[1456]: time="2025-05-13T23:41:46.022165909Z" level=info msg="Container 2f486e84e6301d87332ba11c4d494d0890c876ddc9985bfa8af7fcc5c038e18f: CDI devices from CRI Config.CDIDevices: []"
May 13 23:41:46.043383 containerd[1456]: time="2025-05-13T23:41:46.043335894Z" level=info msg="CreateContainer within sandbox \"cb24fc80f22d38e875a16fca804952d1bad954b7c9e4cbac649b8a0acfcff2b6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2f486e84e6301d87332ba11c4d494d0890c876ddc9985bfa8af7fcc5c038e18f\""
May 13 23:41:46.044380 containerd[1456]: time="2025-05-13T23:41:46.043965796Z" level=info msg="StartContainer for \"2f486e84e6301d87332ba11c4d494d0890c876ddc9985bfa8af7fcc5c038e18f\""
May 13 23:41:46.047037 containerd[1456]: time="2025-05-13T23:41:46.047004223Z" level=info msg="connecting to shim 2f486e84e6301d87332ba11c4d494d0890c876ddc9985bfa8af7fcc5c038e18f" address="unix:///run/containerd/s/09f265182ac82cd5e1dbd3a3dc52ddd3dc4a661eb1e2bdcbaa595ae91e17c4a0" protocol=ttrpc version=3
May 13 23:41:46.071645 systemd[1]: Started cri-containerd-2f486e84e6301d87332ba11c4d494d0890c876ddc9985bfa8af7fcc5c038e18f.scope - libcontainer container 2f486e84e6301d87332ba11c4d494d0890c876ddc9985bfa8af7fcc5c038e18f.
May 13 23:41:46.105999 containerd[1456]: time="2025-05-13T23:41:46.105187271Z" level=info msg="StartContainer for \"2f486e84e6301d87332ba11c4d494d0890c876ddc9985bfa8af7fcc5c038e18f\" returns successfully"
May 13 23:41:46.109268 systemd[1]: cri-containerd-2f486e84e6301d87332ba11c4d494d0890c876ddc9985bfa8af7fcc5c038e18f.scope: Deactivated successfully.
May 13 23:41:46.110534 containerd[1456]: time="2025-05-13T23:41:46.110489338Z" level=info msg="received exit event container_id:\"2f486e84e6301d87332ba11c4d494d0890c876ddc9985bfa8af7fcc5c038e18f\" id:\"2f486e84e6301d87332ba11c4d494d0890c876ddc9985bfa8af7fcc5c038e18f\" pid:4576 exited_at:{seconds:1747179706 nanos:110212368}"
May 13 23:41:46.110680 containerd[1456]: time="2025-05-13T23:41:46.110660744Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2f486e84e6301d87332ba11c4d494d0890c876ddc9985bfa8af7fcc5c038e18f\" id:\"2f486e84e6301d87332ba11c4d494d0890c876ddc9985bfa8af7fcc5c038e18f\" pid:4576 exited_at:{seconds:1747179706 nanos:110212368}"
May 13 23:41:47.016749 containerd[1456]: time="2025-05-13T23:41:47.016703547Z" level=info msg="CreateContainer within sandbox \"cb24fc80f22d38e875a16fca804952d1bad954b7c9e4cbac649b8a0acfcff2b6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 13 23:41:47.049438 containerd[1456]: time="2025-05-13T23:41:47.047039027Z" level=info msg="Container 41a4dac0c6c87ae7b2f43e058663bd14eb292b1998a37ff843e1646c1749d7c5: CDI devices from CRI Config.CDIDevices: []"
May 13 23:41:47.054025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3304792942.mount: Deactivated successfully.
May 13 23:41:47.058632 containerd[1456]: time="2025-05-13T23:41:47.058583623Z" level=info msg="CreateContainer within sandbox \"cb24fc80f22d38e875a16fca804952d1bad954b7c9e4cbac649b8a0acfcff2b6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"41a4dac0c6c87ae7b2f43e058663bd14eb292b1998a37ff843e1646c1749d7c5\""
May 13 23:41:47.059153 containerd[1456]: time="2025-05-13T23:41:47.059116761Z" level=info msg="StartContainer for \"41a4dac0c6c87ae7b2f43e058663bd14eb292b1998a37ff843e1646c1749d7c5\""
May 13 23:41:47.061095 containerd[1456]: time="2025-05-13T23:41:47.061040147Z" level=info msg="connecting to shim 41a4dac0c6c87ae7b2f43e058663bd14eb292b1998a37ff843e1646c1749d7c5" address="unix:///run/containerd/s/09f265182ac82cd5e1dbd3a3dc52ddd3dc4a661eb1e2bdcbaa595ae91e17c4a0" protocol=ttrpc version=3
May 13 23:41:47.087642 systemd[1]: Started cri-containerd-41a4dac0c6c87ae7b2f43e058663bd14eb292b1998a37ff843e1646c1749d7c5.scope - libcontainer container 41a4dac0c6c87ae7b2f43e058663bd14eb292b1998a37ff843e1646c1749d7c5.
May 13 23:41:47.129852 systemd[1]: cri-containerd-41a4dac0c6c87ae7b2f43e058663bd14eb292b1998a37ff843e1646c1749d7c5.scope: Deactivated successfully.
May 13 23:41:47.133470 containerd[1456]: time="2025-05-13T23:41:47.133179380Z" level=info msg="received exit event container_id:\"41a4dac0c6c87ae7b2f43e058663bd14eb292b1998a37ff843e1646c1749d7c5\" id:\"41a4dac0c6c87ae7b2f43e058663bd14eb292b1998a37ff843e1646c1749d7c5\" pid:4619 exited_at:{seconds:1747179707 nanos:132246388}"
May 13 23:41:47.133470 containerd[1456]: time="2025-05-13T23:41:47.133219781Z" level=info msg="TaskExit event in podsandbox handler container_id:\"41a4dac0c6c87ae7b2f43e058663bd14eb292b1998a37ff843e1646c1749d7c5\" id:\"41a4dac0c6c87ae7b2f43e058663bd14eb292b1998a37ff843e1646c1749d7c5\" pid:4619 exited_at:{seconds:1747179707 nanos:132246388}"
May 13 23:41:47.134783 containerd[1456]: time="2025-05-13T23:41:47.134689472Z" level=info msg="StartContainer for \"41a4dac0c6c87ae7b2f43e058663bd14eb292b1998a37ff843e1646c1749d7c5\" returns successfully"
May 13 23:41:47.154644 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41a4dac0c6c87ae7b2f43e058663bd14eb292b1998a37ff843e1646c1749d7c5-rootfs.mount: Deactivated successfully.
May 13 23:41:47.374844 kubelet[2692]: I0513 23:41:47.373130 2692 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-13T23:41:47Z","lastTransitionTime":"2025-05-13T23:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 13 23:41:48.020357 containerd[1456]: time="2025-05-13T23:41:48.020308616Z" level=info msg="CreateContainer within sandbox \"cb24fc80f22d38e875a16fca804952d1bad954b7c9e4cbac649b8a0acfcff2b6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 13 23:41:48.029442 containerd[1456]: time="2025-05-13T23:41:48.027785745Z" level=info msg="Container 5107c181f3d0dcde3efafd24217bed6d40071294e14214648dd5c2be39f4ff0c: CDI devices from CRI Config.CDIDevices: []"
May 13 23:41:48.039532 containerd[1456]: time="2025-05-13T23:41:48.039479776Z" level=info msg="CreateContainer within sandbox \"cb24fc80f22d38e875a16fca804952d1bad954b7c9e4cbac649b8a0acfcff2b6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5107c181f3d0dcde3efafd24217bed6d40071294e14214648dd5c2be39f4ff0c\""
May 13 23:41:48.041085 containerd[1456]: time="2025-05-13T23:41:48.041053348Z" level=info msg="StartContainer for \"5107c181f3d0dcde3efafd24217bed6d40071294e14214648dd5c2be39f4ff0c\""
May 13 23:41:48.042319 containerd[1456]: time="2025-05-13T23:41:48.042170945Z" level=info msg="connecting to shim 5107c181f3d0dcde3efafd24217bed6d40071294e14214648dd5c2be39f4ff0c" address="unix:///run/containerd/s/09f265182ac82cd5e1dbd3a3dc52ddd3dc4a661eb1e2bdcbaa595ae91e17c4a0" protocol=ttrpc version=3
May 13 23:41:48.066663 systemd[1]: Started cri-containerd-5107c181f3d0dcde3efafd24217bed6d40071294e14214648dd5c2be39f4ff0c.scope - libcontainer container 5107c181f3d0dcde3efafd24217bed6d40071294e14214648dd5c2be39f4ff0c.
May 13 23:41:48.093925 systemd[1]: cri-containerd-5107c181f3d0dcde3efafd24217bed6d40071294e14214648dd5c2be39f4ff0c.scope: Deactivated successfully.
May 13 23:41:48.096200 containerd[1456]: time="2025-05-13T23:41:48.096164268Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5107c181f3d0dcde3efafd24217bed6d40071294e14214648dd5c2be39f4ff0c\" id:\"5107c181f3d0dcde3efafd24217bed6d40071294e14214648dd5c2be39f4ff0c\" pid:4658 exited_at:{seconds:1747179708 nanos:95929700}"
May 13 23:41:48.096302 containerd[1456]: time="2025-05-13T23:41:48.096276112Z" level=info msg="received exit event container_id:\"5107c181f3d0dcde3efafd24217bed6d40071294e14214648dd5c2be39f4ff0c\" id:\"5107c181f3d0dcde3efafd24217bed6d40071294e14214648dd5c2be39f4ff0c\" pid:4658 exited_at:{seconds:1747179708 nanos:95929700}"
May 13 23:41:48.096758 containerd[1456]: time="2025-05-13T23:41:48.096735767Z" level=info msg="StartContainer for \"5107c181f3d0dcde3efafd24217bed6d40071294e14214648dd5c2be39f4ff0c\" returns successfully"
May 13 23:41:48.116353 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5107c181f3d0dcde3efafd24217bed6d40071294e14214648dd5c2be39f4ff0c-rootfs.mount: Deactivated successfully.
May 13 23:41:49.025292 containerd[1456]: time="2025-05-13T23:41:49.025226031Z" level=info msg="CreateContainer within sandbox \"cb24fc80f22d38e875a16fca804952d1bad954b7c9e4cbac649b8a0acfcff2b6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 13 23:41:49.040172 containerd[1456]: time="2025-05-13T23:41:49.038959118Z" level=info msg="Container 1b777e2335f389ff6bcaa90f18dcedc217dcb1979eb2d350b085555b39897a8c: CDI devices from CRI Config.CDIDevices: []"
May 13 23:41:49.047794 containerd[1456]: time="2025-05-13T23:41:49.047746284Z" level=info msg="CreateContainer within sandbox \"cb24fc80f22d38e875a16fca804952d1bad954b7c9e4cbac649b8a0acfcff2b6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1b777e2335f389ff6bcaa90f18dcedc217dcb1979eb2d350b085555b39897a8c\""
May 13 23:41:49.048242 containerd[1456]: time="2025-05-13T23:41:49.048212459Z" level=info msg="StartContainer for \"1b777e2335f389ff6bcaa90f18dcedc217dcb1979eb2d350b085555b39897a8c\""
May 13 23:41:49.049514 containerd[1456]: time="2025-05-13T23:41:49.049486620Z" level=info msg="connecting to shim 1b777e2335f389ff6bcaa90f18dcedc217dcb1979eb2d350b085555b39897a8c" address="unix:///run/containerd/s/09f265182ac82cd5e1dbd3a3dc52ddd3dc4a661eb1e2bdcbaa595ae91e17c4a0" protocol=ttrpc version=3
May 13 23:41:49.072584 systemd[1]: Started cri-containerd-1b777e2335f389ff6bcaa90f18dcedc217dcb1979eb2d350b085555b39897a8c.scope - libcontainer container 1b777e2335f389ff6bcaa90f18dcedc217dcb1979eb2d350b085555b39897a8c.
May 13 23:41:49.106966 containerd[1456]: time="2025-05-13T23:41:49.106924689Z" level=info msg="StartContainer for \"1b777e2335f389ff6bcaa90f18dcedc217dcb1979eb2d350b085555b39897a8c\" returns successfully"
May 13 23:41:49.160292 containerd[1456]: time="2025-05-13T23:41:49.160197901Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1b777e2335f389ff6bcaa90f18dcedc217dcb1979eb2d350b085555b39897a8c\" id:\"b7f56b6abb75a384f3eba68786713ff67c7d719826731c80c7c9a845a9aedbcf\" pid:4727 exited_at:{seconds:1747179709 nanos:159897211}"
May 13 23:41:49.392441 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
May 13 23:41:50.042504 kubelet[2692]: I0513 23:41:50.042388 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mf9vt" podStartSLOduration=6.042370881 podStartE2EDuration="6.042370881s" podCreationTimestamp="2025-05-13 23:41:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:41:50.041128562 +0000 UTC m=+84.339188814" watchObservedRunningTime="2025-05-13 23:41:50.042370881 +0000 UTC m=+84.340431133"
May 13 23:41:50.823908 containerd[1456]: time="2025-05-13T23:41:50.823851406Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1b777e2335f389ff6bcaa90f18dcedc217dcb1979eb2d350b085555b39897a8c\" id:\"a6624ca0dddafffb192c2722e3bd063b96c3baf5156cd3981fc25e2271af5b92\" pid:4813 exit_status:1 exited_at:{seconds:1747179710 nanos:823391671}"
May 13 23:41:52.492502 systemd-networkd[1390]: lxc_health: Link UP
May 13 23:41:52.492729 systemd-networkd[1390]: lxc_health: Gained carrier
May 13 23:41:53.009476 containerd[1456]: time="2025-05-13T23:41:53.009430198Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1b777e2335f389ff6bcaa90f18dcedc217dcb1979eb2d350b085555b39897a8c\" id:\"0a3a00d05d66b004f24e4cb6b8f98a8b419bdc5176bb319d5c18038c09884aae\" pid:5263 exited_at:{seconds:1747179713 nanos:9112989}"
May 13 23:41:54.214626 systemd-networkd[1390]: lxc_health: Gained IPv6LL
May 13 23:41:55.150003 containerd[1456]: time="2025-05-13T23:41:55.149945550Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1b777e2335f389ff6bcaa90f18dcedc217dcb1979eb2d350b085555b39897a8c\" id:\"2e6bae8ed9774d2d4080fa6d61d29d44eaa6ccbd5575071fe80ece5af3afc9f6\" pid:5301 exited_at:{seconds:1747179715 nanos:149592340}"
May 13 23:41:57.294295 containerd[1456]: time="2025-05-13T23:41:57.294069399Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1b777e2335f389ff6bcaa90f18dcedc217dcb1979eb2d350b085555b39897a8c\" id:\"948e687610742584343e70dbb6e54fa1eb994403eec462f067de823be6a2c8ee\" pid:5331 exited_at:{seconds:1747179717 nanos:293714389}"
May 13 23:41:59.447582 containerd[1456]: time="2025-05-13T23:41:59.447476604Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1b777e2335f389ff6bcaa90f18dcedc217dcb1979eb2d350b085555b39897a8c\" id:\"4b3cb93753d8527ee8f752ca52e32a10471e0c529555b27929b47d6127b132ee\" pid:5356 exited_at:{seconds:1747179719 nanos:446942751}"
May 13 23:41:59.478978 sshd[4463]: Connection closed by 10.0.0.1 port 42258
May 13 23:41:59.479596 sshd-session[4460]: pam_unix(sshd:session): session closed for user core
May 13 23:41:59.484225 systemd[1]: sshd@25-10.0.0.42:22-10.0.0.1:42258.service: Deactivated successfully.
May 13 23:41:59.486226 systemd[1]: session-26.scope: Deactivated successfully.
May 13 23:41:59.487167 systemd-logind[1433]: Session 26 logged out. Waiting for processes to exit.
May 13 23:41:59.488226 systemd-logind[1433]: Removed session 26.